[personal profile] timelets
Can't wait.
The arrival of superintelligence will clearly deal a heavy blow to anthropocentric worldviews. Much more important than its philosophical implications, however, would be its practical effects. Creating superintelligence may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development. They would do so more effectively than humans. Biological humanity would no longer be the smartest life form on the block.

--- Nick Bostrom, 2003.

https://nickbostrom.com/views/transhumanist.pdf


I wonder how it feels to be someone's dog.

Date: 2020-02-22 03:02 am (UTC)
From: [personal profile] dmm
Well, we certainly have not invented superintelligence yet. Whether it is feasible at all remains to be seen; the answer to that question is not obvious at all.

***

I think the most likely path (among the variety of paths people have considered so far) remains the creation of a new kind of silicon-based life. The pathway that seems to me easiest and likeliest is the main one outlined in Vernor Vinge's 1993 essay, "The Coming Technological Singularity: How to Survive in the Post-Human Era". Roughly speaking, one starts with the task of creating an artificial software engineer, and proceeds towards a situation where further software engineering effort can improve the capabilities of that artificial software engineer. With some luck, one could obtain a superpowerful artificial software engineer along this path, by letting better and better artificial software engineers work on the creation of still better artificial software engineers.
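Purely as an illustration of the shape of that feedback loop (and nothing more), here is a toy sketch in Python; the Engineer class, its numeric "skill", and the improvement factor are made-up placeholders, not a claim about how a real system would work.

    # Toy caricature of the "improve the improver" loop from Vinge's pathway.
    # Everything here is a hypothetical placeholder; only the shape of the
    # feedback loop matters, not the numbers.
    class Engineer:
        def __init__(self, skill):
            self.skill = skill
        def improve(self, target):
            # A more capable engineer builds a slightly more capable successor.
            return Engineer(target.skill * 1.1)

    def bootstrap(engineer, generations=10):
        for _ in range(generations):
            candidate = engineer.improve(engineer)  # the engineer works on itself
            if candidate.skill <= engineer.skill:   # progress has stalled
                break
            engineer = candidate
        return engineer

    print(bootstrap(Engineer(1.0)).skill)  # compounding improvement, ~2.6x here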

If this path is successful, it seems that it would naturally spill over into other scientific and technological human endeavors, such as theoretical research in math and physics, the design and operation of experimental equipment, experimental science generally, and so on.

Obviously, there is a large body of discussion in which people try to study all this more closely: whether it is feasible at all; if it is feasible, what the benefits and dangers are; what our chances of navigating it safely might be, and by what methods; and so on. For many of those who think it is feasible, the mind-body problem is important: the question of what kinds of entities have first-person experience, and what kind of experience that might be (and progress in our understanding of the mind-body problem is next to non-existent).

This is a relatively large field of study these days (20 years ago it was a very small field).

***

But setting all this aside, without trying to reproduce the thousands of pages of discourse arguing every side of this subject, and keeping an open mind about whether this is a feasible direction at all, I think one can still say the following.

If this program of creating a superpowerful artificial software engineer succeeds, it is likely that the classical human contribution to invention in that particular area will become negligibly small (we already see this trend in more specialized fields, such as certain games: chess, Go, etc.). If the program spills over into areas of more general thought (science, engineering, etc.), the same outcome is likely there as well (at least for non-augmented humans).

***

Realistically speaking, if all this succeeds without leading to a collapse, there will probably be hybrid minds, and the most interesting inventions will be produced by them.

Even in as simple an example as chess, where a computer program is now overwhelmingly stronger than the best human, the strongest entity is still a human-computer pair.

So while there might not be much room for interesting new inventions by unaugmented humans in this scenario, I don't think the inventions would mostly come from purely silicon entities either.

Date: 2020-02-22 06:37 am (UTC)
From: [personal profile] dmm
An equivalent transformation does not count ;-) Whether it is translation or speed optimization, or anything like that ;-) Those we have indeed mastered quite well a long time ago ;-)

But when one can "hire a computer system for a current junior software engineering position at an average company", that will be the threshold for this particular path. The essence of those positions has not changed for many, many decades either. (Basically, when artificial programming systems reach the level of competence in their field that today's self-driving systems possess in theirs, that will be the transition point, and a major revolution, if we can reach that level at all. From that point on, the path will be open, for better or for worse. Equivalent transformations cannot lead to unlimited self-improvement, for obvious reasons, which is why compilers and speed optimizers are excluded.)
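To make that parenthetical point concrete with a deliberately trivial sketch (made-up function names, not anyone's real tooling): a semantics-preserving rewrite, no matter how many times you iterate it, leaves the program computing exactly the same function, so it cannot by itself drive open-ended self-improvement.

    # Iterating an "equivalent transformation" never changes what the program
    # can do; at best it makes the same function cheaper to run.
    def program(x):
        return x * 2 + 0          # the "+ 0" is redundant work

    def optimize(f):
        # Stand-in for a compiler / speed optimizer: same input-output
        # behavior, possibly faster, never a different function.
        return lambda x: x * 2

    g = program
    for _ in range(100):          # apply the optimizer as often as you like...
        g = optimize(g)

    assert all(g(x) == program(x) for x in range(10))  # ...behavior is unchanged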

***

Anyway, I found it quite enlightening to re-read Vinge's essay (following a link from his Wikipedia page); as the context shifts with time, it reads differently each time :-) So I have certainly benefited from this conversation already, because it prompted me to re-read the essay (and also to meditate on what it was like for him to compose it in 1993) :-) In particular, his emphasis on the hybrid alternatives looks more and more interesting now, and puts the whole "AI safety debate" in a somewhat different light :-) So, anyway, this conversation has certainly helped my thought process :-)

I hope you also don't find the overall experience entirely empty :-)

Date: 2020-02-22 07:16 am (UTC)
From: [personal profile] dmm
> modern software engineers are already hybrids because they are joined at the hip with supercomputing systems

Yes... the problem is, the engineers are not getting any help from those supercomputing systems.

Instead, after considerable progress in how much help engineers were getting from the computer systems they used, this mandatory "going to the cloud" has been accompanied by a considerable regression in convenience and engineering productivity. It almost feels as if the progress achieved in this sense during the first decade of this century (how much a computer system helps a software engineer) was erased during the second decade, because of how inconvenient those cloud systems tend to be and how strongly people are pushed to use them... So instead of programming becoming easier, it has been getting more difficult again, mostly not for fundamental reasons, but because of various social pathologies (cloud-related and otherwise).

I should probably re-read Bill Joy...
