timelets ([personal profile] timelets) wrote2020-02-21 11:12 am

The last invention

Can't wait.
The arrival of superintelligence will clearly deal a heavy blow to anthropocentric worldviews. Much more important than its philosophical implications, however, would be its practical effects. Creating superintelligence may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development. They would do so more effectively than humans. Biological humanity would no longer be the smartest life form on the block.

--- Nick Bostrom, 2003.

https://nickbostrom.com/views/transhumanist.pdf


I wonder how it feels to be someone's dog.

[personal profile] dmm 2020-02-22 06:37 am (UTC)
An equivalent transformation does not count ;-) Whether it is translation, speed optimization, or anything like that ;-) Those we mastered quite well a long time ago ;-)

But when one can "hire a computer system for a current junior software engineering position at an average company", that will be the threshold for this particular path. The essence of those positions has not changed for many, many decades. (Basically, when artificial programming systems reach the level of competence in their field that today's self-driving systems possess in theirs, that will be the transition point, and a major revolution (if we can reach that level, that is). From that point the path will be open, for better or for worse. Equivalent transformations cannot lead to unlimited self-improvement, for obvious reasons, which is why compilers and speed optimizers are excluded.)

***

Anyway, I found it quite enlightening to re-read Vinge's essay (following a link from his Wikipedia page); as the context shifts with time, it reads differently each time :-) So I certainly benefited from this conversation already, because it prompted me to re-read the essay (and to reflect on what it was like for him to compose it in 1993) :-) In particular, his emphasis on the hybrid alternatives looks more and more interesting now, and puts the whole "AI safety debate" in a somewhat different light :-)

I hope you don't find the overall experience entirely empty either :-)

[personal profile] dmm 2020-02-22 07:16 am (UTC)
> modern software engineers are already hybrids because they are joined at the hip with supercomputing systems

Yes... the problem is, the engineers are not getting any help from those supercomputing systems.

Instead, after considerable progress in the amount of help engineers were getting from the computer systems they used, the mandatory "going to the cloud" has brought a considerable regress in convenience and engineering productivity. It almost feels as if the progress of the first decade of this century in this sense (how much the computer system helps a software engineer) was erased during the second decade, because of how inconvenient those cloud systems tend to be, and how much people are forced to use them... So instead of programming becoming easier, it has been getting more difficult again, mostly not for fundamental reasons, but because of various social pathologies (both cloud-related and of other kinds).

I should probably re-read Bill Joy...