It appears that Bostrom envisions superintelligent transhumans as ageless post-biological beings who reinvent and re-implement their physical aspects as needed.
I don't even know if they would want to send it into the wild (but that might not be up to them).
They (Drexler, and people who seem ready to build on his work) think they might have identified a path forward that is relatively safe and quite far-reaching: super-smart tools that we keep improving, using those tools themselves to improve them further, might be "all we need" in some sense. But they are not the only players here.
I've seen a paper trying to analyze various scenarios of how things might play out globally ("Global Solutions vs. Local Solutions for the AI Safety Problem" by Turchin et al), but I am not sure whether it is productive reading (it's definitely quite unpleasant, given all the scenarios they consider). That particular paper seemed to suggest that scaling up Drexler's architecture might be one of our best bets.