[personal profile] timelets
It appears that Bostrom envisions superintelligent transhumans as ageless post-biological beings who reinvent and re-implement their physical aspects as needed.

Date: 2020-02-23 03:36 pm (UTC)
From: [personal profile] dmm
Yes, that's the dream - to become one of those...

***

Meanwhile, I've re-read Bill Joy's "Why the Future Doesn't Need Us" (thanks!) and checked what happened to its "main characters". The most interesting and unexpected was the evolution of Eric Drexler. It seems he bought into the idea of "AI first", switched fully from nanotech to AI and superintelligence, and started to focus on separating superintelligent capabilities from agency (quite successfully, I would say; his recent designs look like some of the better contributions to AI safety, and these days he is a research fellow at Bostrom's organization, the Future of Humanity Institute).

Date: 2020-02-23 08:22 pm (UTC)
From: [personal profile] dmm
Yes, this is the paper of his which I found inspiring...

***

Yes, unqualified "AI" does not mean much; it's too broad.

There is state-of-the-art "narrow AI" (which is a moving target); there is the notion of "general AI" ("artificial general intelligence"), which is perhaps possible in the future and is supposed to be "not worse than a human at any given task" (whether that is desirable is another question); and then there are notions of possible subsequent stages.

But Drexler seems to approach this a bit differently, questioning the value of "general AI" while still wanting to provide superpowerful R&D accelerators, which is what we really need, after all; that's where our obvious bottleneck is.

Date: 2020-02-24 06:06 pm (UTC)
From: [personal profile] dmm
I can try (I've only skimmed it; it's really a 200-page book, and his previous short 2015 paper was very tempting but too difficult to understand; I made myself a to-do item to really understand the details of this one).

***

He is trying to describe a set-up in which a superintelligent installation would be reasonably usable and safe in a local sense (meaning that this particular installation would not "go crazy" in a dangerous way; how this fits into the global dynamics is mostly outside the scope of his analysis, although people elsewhere seem to think that his scheme might be one of the most promising from the global-dynamics viewpoint).

He wants "specialized superintelligence". He even has some ideas for how to allow controlled self-modification in a safe way (I'd like to understand the details of those better).
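I don't know what his actual mechanism looks like, but here is a minimal toy sketch (in Python; the names verifier and controlled_update are my own, purely illustrative) of what "controlled self-modification" could mean: a service may propose a replacement for itself, but only an external gate can install it, and this particular gate accepts only behavior-preserving replacements, so improvements are confined to things like speed or cost.

```python
# Toy sketch of gated self-modification (my illustration, not Drexler's
# actual mechanism): a service may propose a replacement for itself,
# but the replacement is installed only if an external verifier accepts
# it -- here, only behavior-preserving replacements pass.

from typing import Callable

Service = Callable[[float], float]

def verifier(candidate: Service, incumbent: Service) -> bool:
    """Hypothetical acceptance test: the candidate must reproduce the
    incumbent's answers on a fixed regression suite."""
    suite = [0.0, 0.5, 1.0, 2.0, 10.0]
    return all(abs(candidate(x) - incumbent(x)) < 1e-9 for x in suite)

def controlled_update(incumbent: Service, candidate: Service) -> Service:
    # The candidate never installs itself; the gate decides.
    return candidate if verifier(candidate, incumbent) else incumbent

# Usage: a behavior-preserving (e.g. faster) replacement is accepted.
slow_double = lambda x: x + x
fast_double = lambda x: 2.0 * x
current = controlled_update(slow_double, fast_double)  # fast_double wins
```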

The components might or might not be sufficient for "general intelligence", but even if they are sufficient, we don't have to build "general intelligence" out of them. Instead we can have a variety of supercompetent specialized services.

The whole push is towards "de-anthropomorphisation": decoupling things which are conflated only because they happen to be tightly coupled inside humans. E.g., decouple learning capabilities from actual competences; decouple agency from capability.
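To make that decoupling concrete, here is a minimal sketch of my own (not from the book; all names are invented): a competence is a frozen function with no goals and no ability to learn, a learner is a separate service whose only output is new competences, and nothing forces us to bundle either into a single agent.

```python
# Toy sketch of the decoupling (my illustration; all names invented).
# A "competence" is a frozen input->output function: no goals, no
# memory, no self-modification. A "learner" is a separate service whose
# only output is new competences. Neither one is an agent.

from typing import Callable, Dict, List, Tuple

Competence = Callable[[float], float]

def fit_line(data: List[Tuple[float, float]]) -> Competence:
    """A trivial stand-in for a learning service: least-squares fit of
    y = a*x, returning a frozen competence and retaining nothing."""
    a = sum(x * y for x, y in data) / sum(x * x for x, _ in data)
    return lambda x: a * x

# A registry of narrow services, each competent only at its own task.
services: Dict[str, Competence] = {
    "double": lambda x: 2.0 * x,
    "learned": fit_line([(1.0, 3.1), (2.0, 5.9), (3.0, 9.2)]),
}

# Using a competence involves no agency: the caller chooses the task.
print(services["learned"](4.0))  # ~12.1, from the fitted model
```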

But I am only scratching the surface here; when I look further at this text, I see so many tempting tidbits (so I hope to actually read this "book" in full detail, and my preliminary impression is that, unlike his 2015 paper, this one should be much easier to understand).

Date: 2020-02-24 10:53 pm (UTC)
From: [personal profile] dmm
Yes.

I don't even know if they would want to send it into the wild (but that might not be up to them).

They (Drexler and the people who seem ready to build on his work) think they might have identified a path forward which is relatively safe and quite far-reaching: this path of super-smart tools which we keep improving (using the tools themselves to improve them further) might be "all we need" in some sense. But they are not the only players here.

I've seen a paper trying to analyze various scenarios of how things might play out globally ("Global Solutions vs. Local Solutions for the AI Safety Problem" by Turchin et al.), but I am not sure whether it is productive reading; it's definitely quite unpleasant, given all the scenarios it looks at. That particular paper seemed to say that scaling up Drexler's architecture might be one of our best bets.
