(no subject)
Jun. 9th, 2022 03:29 pm
wrt https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
via https://avva.dreamwidth.org/3291499.html
Actually, I'm on Yudkowsky's side of this debate, purely from a pragmatic risk-management perspective: methods for evaluating AGI-related risks are practically non-existent. That's a problem.
In addition, from what I've seen so far, at the most basic physical level ML-native systems perceive, i.e. "feel", the world differently from humans. That divergence will only grow over time, and eventually the species-level gap will be huge. So there's a good chance an AGI system will not understand humans at all, along the lines of Nagel's "What Is It Like to Be a Bat?". We don't kill bats merely because we fail to understand them, but we have no way to guarantee an AGI would extend the same courtesy to us. Yet. Hopefully.
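For concreteness, here is a minimal sketch (mine, not from the post or the linked threads) of what "perceiving" means for a typical ML vision system: the entire sensory channel is a normalized array of numbers, with nothing resembling human phenomenal experience attached to it. The 224×224 input size, the ImageNet-style normalization constants, and the file name `bat.jpg` are illustrative assumptions.

```python
# Illustrative sketch: how a typical vision model "feels" a photo at the most
# basic physical level - as a grid of floating-point numbers and nothing more.
import numpy as np
from PIL import Image

def model_view_of(path: str) -> np.ndarray:
    """Reduce an image to the raw numeric input a typical vision model receives."""
    img = Image.open(path).convert("RGB").resize((224, 224))  # common input size (assumption)
    x = np.asarray(img, dtype=np.float32) / 255.0             # pixels -> floats in [0, 1]
    # ImageNet-style per-channel normalization, standard in many vision pipelines
    x = (x - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
    return x  # shape (224, 224, 3): the model's entire "experience" of the scene

if __name__ == "__main__":
    tensor = model_view_of("bat.jpg")  # hypothetical file name
    print(tensor.shape, tensor.dtype, tensor.min(), tensor.max())
```

Whatever the system goes on to do with that tensor, the point stands: its channel into the world is structurally unlike human perception from the very first step.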
Finally, the objection avva references in response to Yudkowsky's concerns is intellectually inept, which only highlights the depth of the problem.