The last invention
Feb. 21st, 2020 11:12 am
Can't wait.
I wonder how it feels to be someone's dog.
The arrival of superintelligence will clearly deal a heavy blow to anthropocentric worldviews. Much more important than its philosophical implications, however, would be its practical effects. Creating superintelligence may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development. They would do so more effectively than humans. Biological humanity would no longer be the smartest life form on the block.
--- Nick Bostrom, 2003.
https://nickbostrom.com/views/transhumanist.pdf
no subject
Date: 2020-02-21 08:19 pm (UTC)
no subject
Date: 2020-02-21 09:00 pm (UTC)
But can you imagine a society where the vast majority of children are born severely autistic?
no subject
Date: 2020-02-21 09:12 pm (UTC)
Do I want to be a pet dog of someone who is THAT helpless despite all their wealth and influence? I'd like someone way more skillful, that's for sure...
no subject
Date: 2020-02-21 09:20 pm (UTC)
no subject
Date: 2020-02-21 10:58 pm (UTC)
Pragmatically speaking, why on Earth would I need an "owner" if that "owner" can't reverse my aging, fix my future cancers, and postpone my mortality indefinitely? What would such an "owner" be able to give me that I don't already have? Nothing seems to come to mind...
no subject
Date: 2020-02-21 11:08 pm (UTC)
(Only "more effectively"? It should be "way more effectively, qualitatively more effectively, incomparably more effectively".)
no subject
Date: 2020-02-22 12:25 am (UTC)
First, I suggest we consider life as something other than the absence of death, i.e. immortality. On the practical level, every child knows the answer, because a loving parent or caregiver doesn't have to be an omniscient and omnipotent being, provided they give you enough care. On the theoretical level, a new life implies a new potential and new possibilities, which are not known a priori. Therefore, it's a self-contradiction to say that there's a last invention that enables the creation of a new mode of life.
Does this line of reasoning make sense to you?
no subject
Date: 2020-02-22 12:54 am (UTC)
***
But yes, I can imagine that I'd like an extra parent who is loving and not necessarily omnipotent, but fairly powerful. And yes, I can even imagine an interesting pragmatic dimension to that: e.g., a parent who is not omnipotent, but can still guide me through interesting breakthroughs, or care for me when I am in trouble.
***
What I can't imagine is what Bill Gates or his family and their supposed capabilities might be good for in this sense. I mean, other than the fact that they have money and some experience in putting functioning organizations together. In this sense, yes, if I have an interesting project which is stuck and needs financial/organizational help, then a competent businessperson with experience and resources might help to move it forward (and it so happens that I actually do have a project which could benefit from something like this). But any normal competent businessperson could do something like that; it does not have to be a superrich, superfamous person.
And this does not seem to have anything to do with superintelligence. In fact, the presence of true superintelligence would probably make this project unnecessary (the main value of this particular project might be that it can facilitate a faster path towards actual superintelligence). The presence of true superintelligence would probably make any traditionally human project very optional in any case...
***
And then... a Bill Gates or a Jeff Bezos... while they are important figures in our social trajectory, it does not feel like they have much in the way of superintelligence (actually, they are an integral part of our modern society, and to the extent that society has some traces of future superintelligence and does take care of us, in this sense they are a bit associated with that). But I am not sure it would be good or fun to be closely associated with such people, nor am I sure whether they have any meaningful extra capabilities (beyond what the normal competent businessperson mentioned above would have).
Forget immortality, anti-aging, and healthcare. And forget a bit of normal business or research financing and a bit of help with organizing. Let's set these dimensions aside.
After we set these dimensions aside, can you name a single thing Bill Gates or his family could do for any of us?
no subject
Date: 2020-02-22 01:35 am (UTC)
Over thousands of years, humans went through a number of such "retiring inventions" that created new modes of existence. People often talk about the invention of agriculture, which over a long period of time made nomadic tribes extinct and involved violent clashes of civilizations, as recent as the Genghis Khan invasion of Europe and China, and the European invasion of North America. Similarly, over the last three hundred years the invention of industry has destroyed the agricultural way of life in most of the world, partly through two world wars. Arguably, the invention of superintelligence is a part of this industrial revolution, i.e. its information stage.
Furthermore, many people conflate the presumed emergence of superintelligence with the mastery of biology and immortality, which shows an implicit bias toward computing as the ultimate invention. And if we take seriously claims about computing as the basis for superintelligence, we already made the "last invention" some seventy years ago. Based on historical records, people thought of mechanics and then chemistry as last inventions too. But when we consider the real game changers in terms of the expansion of human life in time and scope, we find antibiotics and fertilizers, which have nothing to do with computing.
Can we imagine that computing is going to play a major role in the next human transition? Probably yes. Will it be the last invention? Probably not, because you fundamentally can't reduce life to computation.
no subject
Date: 2020-02-22 03:02 am (UTC)
***
I think the most likely path (among the variety of paths people have considered so far) remains a path via creation of a new kind of silicon-based life. The standard pathway, which seems to me the easiest and the likeliest, is the main pathway outlined in the 1993 essay by Vernor Vinge, "The Coming Technological Singularity: How to Survive in the Post-Human Era". Roughly speaking, one starts with the task of creating an artificial software engineer, and one proceeds towards a situation where further software engineering efforts can improve the capabilities of that artificial software engineer. With some luck, one would be able to get a superpowerful artificial software engineer along this path, by letting better and better artificial software engineers work on the creation of even better artificial software engineers.
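(A purely illustrative toy sketch of that bootstrapping loop, just to make the shape of the argument concrete: the scalar "capability" score, the improve() rule, and every constant below are invented for illustration and stand in for an enormously more complicated real process.)

```python
# Toy model of the "better engineers build better engineers" loop described above.
# Everything here is hypothetical; it sketches the argument, not any real system.

def improve(capability: float) -> float:
    """One generation: the current artificial engineer builds its successor.

    The (assumed) rule is that a more capable engineer finds proportionally
    more improvements per generation.
    """
    return capability * 1.1  # hypothetical 10% gain per generation


def bootstrap(initial_capability: float, generations: int) -> float:
    """Run the self-improvement loop for a fixed number of generations."""
    capability = initial_capability
    for g in range(generations):
        capability = improve(capability)
        print(f"generation {g + 1}: capability ~ {capability:.2f}")
    return capability


if __name__ == "__main__":
    # Start from a system at roughly "competent junior engineer" level
    # (arbitrary units) and watch the compounding take over.
    bootstrap(initial_capability=1.0, generations=20)
```

Whether the per-generation gain actually compounds like this, plateaus, or collapses is exactly the open question the rest of this discussion argues about.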
If this path is successful, it seems that it would naturally spill into other scientific and technological human endeavors, such as theoretical research in math and physics, design and operation of experimental equipment and experimental science, etc.
Obviously, there is a large body of discussion where people try to study all this more closely: whether this is feasible at all; if it is feasible, what the benefits and dangers are; what our chances of navigating it safely are and what methods might help; and so on. For many of those people who think this is feasible, the mind-body problem and the question of what kinds of entities have a first-person experience, and what kind of experience that might be, are important (and the progress of our understanding of the mind-body problem is next to non-existent).
This is a relatively large field of study these days (20 years ago it was a very small field).
***
But setting aside all this, and not trying to reproduce thousands of pages of discourse arguing all sides of this subject, and keeping an open mind about whether this is a feasible direction or not, I think one can still say the following.
If this program of creating a superpowerful artificial software engineer succeeds, it is likely that classical human invention in that particular area will become negligibly small (we see these trends already in more specialized fields, such as certain games - chess, Go, etc). If this program spills into areas of more general thought (science, engineering, etc), the same outcome is likely there (at least for non-augmented humans).
***
Realistically speaking, if all this succeeds without leading to a collapse, there will probably be hybrid minds, and the most interesting inventions will be produced by them.
Even in such a simple example as chess, where a computer program is now overwhelmingly stronger than the best human, the strongest entity is still a human-computer pair.
So while there might not be much room for interesting new inventions by unaugmented humans in this scenario, I don't think most inventions would come from purely silicon entities either.
no subject
Date: 2020-02-22 06:12 am (UTC)
It was invented decades ago. One of her first names is the compiler ;)
no subject
Date: 2020-02-22 06:37 am (UTC)
But when one can "hire a computer system for a current junior software engineering position at an average company", that will be the threshold for this particular path. The essence of those positions also has not changed for many, many decades. (Basically, when artificial programming systems reach the level of competence in their field which today's self-driving systems possess in theirs, that will be a transition point, and that will be a major revolution (if we can reach that level, that is). From that point the path will be open, for better or for worse (equivalent transformations cannot lead to unlimited self-improvement, for obvious reasons, which is why compilers and speed optimizers are excluded).)
***
Anyway, I found it quite enlightening to re-read Vinge's essay (following a link from his Wikipedia page); as context shifts with time, it does read differently each time :-) So, I certainly benefited from this conversation already, because it prompted me to re-read Vinge's essay (and also to meditate on what it was like for him to compose that essay in 1993) :-) In particular, his emphasis on the hybrid alternatives looks more and more interesting now, and puts the whole "AI safety debate" in a somewhat different light :-) So, anyway, I did benefit from this conversation; it has certainly helped my thought process :-)
I hope you also don't find the overall experience quite empty :-)
no subject
Date: 2020-02-22 06:56 am (UTC)
What puzzles me a bit is that people consistently assume that growing computational power will somehow translate into biological immortality. This smells like the good old fear of death, rather than rational thinking about the Singularity, etc. And it gives me hope for humanity in general, because this fear is a consistent theme over the ages.
no subject
Date: 2020-02-22 07:16 am (UTC)
Yes... the problem is, the engineers are not getting any help from those supercomputing systems.
Instead, after considerable progress in how much help engineers were getting from the computer systems they were using, this mandatory "going to the cloud" has brought considerable regress in convenience and engineering productivity. It almost feels as if the progress achieved in this sense during the first decade of this century (how much the computer system helps a software engineer) was erased during the second decade, because of how inconvenient those cloud systems tend to be, and how much people are forced to use them... So instead of programming becoming easier, it has been getting more difficult again, mostly not for fundamental reasons, but because of various social pathologies (both cloud-related and of other kinds too).
I should probably re-read Bill Joy...
no subject
Date: 2020-02-22 07:24 am (UTC)
It's a typical business problem associated with the early stages of a major technology transition. As usual, the new solution succeeds _despite_ all its deficiencies and inefficiencies. It's a sign of strength, rather than weakness.
no subject
Date: 2020-02-22 01:30 am (UTC)
:-) Like... a superrich lover could buy the best, most prominent and beautiful skyscraper in town, kick the insurance company out of it, and convert it into a 24-hour palace dedicated to electronic music, psychedelic visuals, and other decadent luxuries... and do it all for me :-)
:-) But that's different :-) Or, at least, I think that's different :-) What Nick Bostrom says is that we can get an equivalent of this and more with true superintelligence, without disrupting the actual fabric of reality for everyone (if we are lucky) (a disruption which would be needed to actually implement this beautiful dream in the actual city I live in today; I suppose a Bill Gates could do something like this, if he wanted to ;-) ) :-)
no subject
Date: 2020-02-22 01:41 am (UTC)
no subject
Date: 2020-02-21 10:45 pm (UTC)
no subject
Date: 2020-02-22 12:28 am (UTC)