timelets: (Default)
[personal profile] timelets
Can't wait.
The arrival of superintelligence will clearly deal a heavy blow to anthropocentric worldviews. Much more important than its philosophical implications, however, would be its practical effects. Creating superintelligence may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development. They would do so more effectively than humans. Biological humanity would no longer be the smartest life form on the block.

--- Nick Bostrom, 2003.

https://nickbostrom.com/views/transhumanist.pdf


I wonder how it feels to be someone's dog.

Date: 2020-02-21 08:19 pm (UTC)
dmm: (Default)
From: [personal profile] dmm
This is the main question: when this will happen, and what it will be like. (And what the particular path leading to it would be, out of the various possibilities.)

Date: 2020-02-21 09:12 pm (UTC)
dmm: (Default)
From: [personal profile] dmm
Mmmm... a superintelligent being who is helpless against its own aging and cancer?

Do I want to be a pet dog of someone who is THAT helpless despite all their wealth and influence? I'd like someone way more skillful, that's for sure...

Date: 2020-02-21 10:58 pm (UTC)
dmm: (Default)
From: [personal profile] dmm
I think superintelligence implies superior mastery over the forces of nature. It implies the capabilities of a superengineer, a superresearcher, etc. And so it should imply drastic progress in biomedical science and medical practice, among other things. (Of course, people can define the same notion differently, but I am quite certain that Nick Bostrom means it this way.)

Pragmatically speaking, why on Earth would I need an "owner", if that "owner" can't reverse my aging, fix my future cancers, and postpone my mortality indefinitely? What would such an "owner" be able to give me, which I don't already have? Nothing seems to come to mind...

Date: 2020-02-21 11:08 pm (UTC)
dmm: (Default)
From: [personal profile] dmm
(Actually, even your quotation in this post already says that: "Much more important than its philosophical implications, however, would be its practical effects. Creating superintelligence may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development. They would do so more effectively than humans."

Only "more effectively" should be "way more effectively, qualitatively more effectively, incomparably more effectively".)

Date: 2020-02-22 12:54 am (UTC)
dmm: (Default)
From: [personal profile] dmm
"A last invention" here means the ability to "retire" and leave further inventions to our superintelligent "children". So, no self-contradiction in this sense: on the contrary, this is how the faster spiral of inventions and faster spiral of evolution starts. Old people do retire all the time. (Of course, in reality one probably would not want to "retire", but would want to be upgraded, to become one of superintelligences and to participate in this new game, but that's another line of thought.)

***

But yes, I can imagine that I'd like an extra parent, who is loving and not necessarily omnipotent, but fairly powerful. And yes, I can even imagine an interesting pragmatic dimension of that. E.g. if such a parent is not omnipotent, but still can guide me through interesting breakthroughs, or care about me when I am in trouble.

***

What I can't imagine is what Bill Gates or his family and their supposed capabilities might be good for in this sense. I mean, other than the fact that they have money and some experience in putting functioning organizations together. In this sense, yes, if I have an interesting project which is stuck and needs financial/organizational help, then a competent businessperson with experience and resources might help to move it forward (and it so happens that I actually do have a project which could benefit from something like this). But any normal competent businessperson could do something like that; it does not have to be a superrich, superfamous person.

And this does not seem to have anything to do with superintelligence. In fact, the presence of true superintelligence would probably make this project unnecessary (the main value of this particular project might be that it can facilitate a faster path towards actual superintelligence). The presence of true superintelligence would probably make any traditionally human project very optional in any case...

***

And then... a Bill Gates or a Jeff Bezos... while they are important figures in our social trajectory, it does not feel like they have much superintelligence (though they are an integral part of our modern society, and to the extent that society has some traces of future superintelligence, and does take care of us to some extent, they are a bit associated with that). But I am not sure it would be good or fun to be closely associated with such people, or whether they have any meaningful extra capabilities beyond what the normal competent businessperson mentioned above would have.

Forget immortality, anti-aging, and healthcare. And forget a bit of normal business or research financing and a bit of help with organizing. Let's set these dimensions aside.

After we set these dimensions aside, can you name a single thing Bill Gates or his family could do for any of us?

Date: 2020-02-22 03:02 am (UTC)
dmm: (Default)
From: [personal profile] dmm
Well, we certainly have not invented superintelligence yet. Whether it is feasible at all remains to be seen; the answer to that question is not obvious at all.

***

I think the most likely path (among the variety of paths people have considered so far) remains the creation of a new kind of silicon-based life. The pathway which seems to me the easiest and the likeliest is the main one outlined in the 1993 essay by Vernor Vinge, "The Coming Technological Singularity: How to Survive in the Post-Human Era". Roughly speaking, one starts with the task of creating an artificial software engineer, and one proceeds toward a situation where further software engineering efforts can improve the capabilities of that artificial software engineer. With some luck, one would be able to get a superpowerful artificial software engineer along this path, by letting better and better artificial software engineers work on the creation of better and better artificial software engineers.
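The bootstrapping loop described above can be caricatured in a few lines. Everything here (the `improve` step, the capability score, the 10% per-generation gain) is a hypothetical illustration of the recursive structure, not a model of any real system:

```python
# Toy caricature of Vinge-style recursive self-improvement:
# each generation of "artificial software engineer" builds a
# slightly more capable successor. The capability numbers and
# the fixed gain factor are illustrative assumptions only.

def improve(capability: float, gain: float = 1.1) -> float:
    """One generation: the current engineer builds a better one."""
    return capability * gain

def bootstrap(initial: float, threshold: float) -> int:
    """Count generations until capability crosses a threshold."""
    capability = initial
    generations = 0
    while capability < threshold:
        capability = improve(capability)
        generations += 1
    return generations

print(bootstrap(1.0, 100.0))  # 49 generations at a 10% gain each
```

The point of the sketch is only the feedback structure: because each generation's output becomes the next generation's builder, even a modest constant gain compounds geometrically rather than adding up linearly.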

If this path is successful, it seems that it would naturally spill into other scientific and technological human endeavors, such as theoretical research in math and physics, design and operation of experimental equipment and experimental science, etc.

Obviously, there is a large body of discussion in which people try to study all this more closely: whether it is feasible at all; if it is feasible, what the benefits and dangers are; what our chances of navigating it safely are, and what methods there are to do so; and so on. For many of those who think it is feasible, the mind-body problem (the question of what kinds of entities have a first-person experience, and what that experience might be like) is important (and the progress of our understanding of the mind-body problem is next to non-existent).

This is a relatively large field of study these days (20 years ago it was a very small field).

***

But setting aside all this, and not trying to reproduce thousands of pages of discourse arguing all sides of this subject, and keeping an open mind about whether this is a feasible direction or not, I think one can still say the following.

If this program of creating a superpowerful artificial software engineer succeeds, it is likely that classical human invention in that particular area will become negligibly small (we see these trends already in more specialized fields, such as certain games - chess, Go, etc). If this program spills into areas of more general thought (science, engineering, etc), the same outcome is likely there (at least for non-augmented humans).

***

Realistically speaking, if all this succeeds without leading to a collapse, there will probably be hybrid minds, and the most interesting inventions will be produced by them.

Even in such a simple example as chess, where a computer program is now overwhelmingly stronger than the best human, the strongest entity is still a human-computer pair.

So while there might not be too much room for interesting new inventions by unaugmented humans in this scenario, I don't think the inventions would mostly come from purely silicon entities either.

Date: 2020-02-22 06:37 am (UTC)
dmm: (Default)
From: [personal profile] dmm
An equivalent transformation does not count ;-) Whether it is translation, or speed optimization, or anything like that ;-) Those we have indeed mastered quite well a long time ago ;-)

But when one can "hire a computer system for a current junior software engineering position at an average company", that will be the threshold for this particular path. The essence of those positions also has not changed for many, many decades. (Basically, when artificial programming systems reach the level of competence in their field which today's self-driving systems possess in theirs, that will be a transition point, and that will be a major revolution (if we can reach that level, that is). From that point the path will be open, for better or for worse. Equivalent transformations cannot lead to unlimited self-improvement, for obvious reasons, which is why compilers and speed optimizers are excluded.)

***

Anyway, I found it quite enlightening to re-read Vinge's essay (following a link from his Wikipedia page); as context shifts with time, it reads differently each time :-) So I have certainly benefited from this conversation already, because it prompted me to re-read Vinge's essay (and also to meditate on what it was like for him to compose that essay in 1993) :-) And in particular, his emphasis on the hybrid alternatives looks more and more interesting now, and puts the whole "AI safety debate" in a somewhat different light :-) So, anyway, I did benefit from this conversation; it has certainly helped my thought process :-)

I hope you don't find the overall experience entirely empty either :-)

Date: 2020-02-22 07:16 am (UTC)
dmm: (Default)
From: [personal profile] dmm
> modern software engineers are already hybrids because they are joined at the hip with supercomputing systems

Yes... the problem is, the engineers are not getting any help from those supercomputing systems.

Instead, after considerable progress in how much help engineers were getting from the computer systems they were using, this mandatory "going to the cloud" has been associated with considerable regress in convenience and engineering productivity. It almost feels like the progress achieved in this sense during the first decade of this century (how much the computer system helps a software engineer) was erased during the second decade, because of how inconvenient those cloud systems tend to be, and how much people are forced to use them... So instead of programming becoming easier, it was getting more difficult again, mostly not for fundamental reasons, but because of various social pathologies (both cloud-related and of other kinds).

I should probably re-read Bill Joy...

Date: 2020-02-22 01:30 am (UTC)
dmm: (Default)
From: [personal profile] dmm
:-) I mean, sure I can name a decadent dream which I can't afford, and which... let's say a superrich and sufficiently influential lover (to skip all these conversations about superintelligences and parents) could make a reality :-)

:-) Like... a superrich lover could buy the best, most prominent and beautiful skyscraper in town, kick out the insurance company from it, and convert it to a 24-hour palace dedicated to electronic music, psychedelic visuals, and other decadent luxuries... and do it all for me :-)

:-) But that's different :-) Or, at least, I think that's different :-) What Nick Bostrom says is that we could get an equivalent of this and more with true superintelligence, without disrupting the actual fabric of reality for everyone (if we are lucky) (a disruption which would be needed to actually implement this beautiful dream in the actual city I live in today; I suppose a Bill Gates could do something like this, if he wanted to ;-) ) :-)

Date: 2020-02-21 10:45 pm (UTC)
juan_gandhi: (Default)
From: [personal profile] juan_gandhi
Or someone's spouse.
