It appears that Bostrom envisions superintelligent transhumans as ageless post-biological beings who reinvent and re-implement their physical aspects as needed.
Meanwhile, I've re-read Bill Joy's "Why the Future Doesn't Need Us" (thanks!), and checked what happened to its "main characters". The most interesting and unexpected was the evolution of Eric Drexler. It seems that he bought the idea of "AI first", fully switched from nano to AI and superintelligence, and started to focus on splitting superintelligence facilities from agency (quite successfully, I would say; his recent designs look like one of the better contributions to AI safety; he is a research fellow at Bostrom's organization these days).
Nanotechnology turned out to be completely different from what people like Drexler envisioned at the end of the 20th century. He [and many others] have learned that the idea of direct replacement is not viable. It seems that they are now thinking about AI not as a substitute for the human brain. Great!
I'm reading his paper now: https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf
It looks like they take the current development of industrial AI as a service and try to extrapolate the trend.
BTW, what's AI nowadays? From what I know, the "AI" label covers such a broad range of technologies that it is almost meaningless in a discussion between intelligent beings.
Yes, unqualified "AI" does not mean much; it's too broad.
There is state-of-the-art "narrow AI" (which is a moving target), there is a notion of "general AI" ("artificial general intelligence"), which is perhaps possible in the future and which is supposed to be "not worse than a human for any given task" (whether that is desirable is another question), and then there is a notion of possible subsequent stages.
But Drexler seems to approach this a bit differently (and to question the value of "general AI", while still wanting to provide superpowerful R&D accelerators, which are what we really need, after all; that's where our obvious bottleneck is).
I can try (I've only skimmed it, it's really a 200-page book, and his previous 2015 short paper was very tempting, but too difficult to understand; I made myself a to-do list item to really understand the details of this one).
***
He is trying to describe a set-up in which a superintelligent installation would be reasonably usable and safe in a local sense (meaning that this particular installation would not "go crazy" in a dangerous way; how this fits into global dynamics is mostly out of the scope of his analysis, although people elsewhere seem to think that his scheme might be one of the most promising from the global-dynamics viewpoint).
He wants "specialized superintelligence". He even has some ideas of how to do controlled self-modification in a safe way (I'd like to understand the details of those better).
The components might be sufficient for "general intelligence" or not, but even if they are sufficient, we don't have to build "general intelligence" out of them. Instead we can have a variety of supercompetent specialized services.
The whole push is towards "de-anthropomorphisation": decoupling things which are conflated because they are tightly coupled inside humans. E.g., decouple learning capabilities from actual competences; decouple agency from capability.
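To make that decoupling concrete for myself, here is a toy sketch (my own, not anything from Drexler's text; all the names, types and numbers are made up): each "service" is a bounded, stateless capability with no goals of its own, and composition lives in a thin, explicitly invoked layer rather than inside a persistent agent.

```python
# Toy sketch of "capability without agency" (hypothetical names throughout).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class ServiceSpec:
    """A narrowly scoped task description with explicit resource bounds."""
    task: str
    max_compute: int          # arbitrary budget units (made up)
    max_wall_clock_s: float

# A "service" is just a function from a spec and inputs to outputs:
# competence without persistent memory, self-chosen goals, or agency.
Service = Callable[[ServiceSpec, dict], dict]

def design_service(spec: ServiceSpec, inputs: dict) -> dict:
    # Stand-in for a supercompetent but specialized R&D tool.
    return {"design": f"candidate design for {inputs['problem']}"}

def evaluate_service(spec: ServiceSpec, inputs: dict) -> dict:
    # A separate, independently built evaluator: learning, acting and judging
    # live in different components instead of one "mind".
    return {"score": 0.7, "notes": "toy evaluation"}

REGISTRY: Dict[str, Service] = {
    "design": design_service,
    "evaluate": evaluate_service,
}

def run(task_name: str, spec: ServiceSpec, inputs: dict) -> dict:
    """Explicitly invoked composition; nothing here persists, plans, or pursues goals."""
    return REGISTRY[task_name](spec, inputs)

if __name__ == "__main__":
    spec = ServiceSpec(task="propose a catalyst", max_compute=10**6, max_wall_clock_s=60.0)
    candidate = run("design", spec, {"problem": "CO2 reduction"})
    verdict = run("evaluate", spec, {"candidate": candidate})
    print(candidate, verdict)
```

The only point of the sketch is that "supercompetent" and "agent-like" are separable properties of a design, which is how I currently read his framing.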
But I am scratching the surface here - when I look further at this text, I see so many tempting tidbits (so I hope to actually read this "book" in full and in detail, and my preliminary impression is that, unlike his 2015 paper, this one will be much easier to understand).
I see. If I understand you correctly, the idea is to apply and test the new capability in tool development first before sending it into the wild on a large scale.
I don't even know if they would want to send it into the wild (but that might not be up to them).
They (Drexler and people who seem ready to build upon this) think that they might have identified a path forward which is relatively safe and quite far-reaching: this path of super-smart tools which we keep improving, using those tools themselves to improve them further, might be "all we need" in some sense. But they are not the only players here.
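Just to make my own picture of that "tools improving tools" loop concrete, here is a purely hypothetical sketch (my guess at one possible shape, not anything taken from the book): a candidate successor tool is adopted only if it passes a fixed, externally defined acceptance gate.

```python
# Hypothetical sketch of a gated tool-improvement loop (all functions made up).
from typing import Callable

Tool = Callable[[str], str]  # a "tool" maps a task description to an artifact

def baseline_tool(task: str) -> str:
    return f"v0 solution for: {task}"

def propose_successor(current: Tool) -> Tool:
    """The current toolchain drafts an improved version of itself (stubbed here)."""
    def successor(task: str) -> str:
        return current(task).replace("v0", "v1")
    return successor

def passes_fixed_tests(candidate: Tool) -> bool:
    """A fixed, human-defined acceptance gate that the candidate cannot modify."""
    return "solution" in candidate("benchmark task")

def improve(current: Tool, rounds: int) -> Tool:
    for _ in range(rounds):
        candidate = propose_successor(current)
        if passes_fixed_tests(candidate):  # adopt only gated candidates
            current = candidate
        else:
            break  # stop rather than adopt an unvetted successor
    return current

if __name__ == "__main__":
    tool = improve(baseline_tool, rounds=2)
    print(tool("design a better battery"))
```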
I've seen a paper trying to analyze various scenarios of how things might play out globally ("Global Solutions vs. Local Solutions for the AI Safety Problem" by Turchin et al.), but I am not sure whether it is productive reading; it's definitely quite unpleasant, with all the scenarios they look at. That particular paper seemed to be saying that scaling up Drexler's architecture might be one of our best bets.