wrt https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

via https://avva.dreamwidth.org/3291499.html

Actually, I'm on Yudkowsky's side of this debate, from a purely pragmatic risk-management perspective: methods to evaluate AGI-related risks are practically non-existent. That's a problem.

In addition, from what I've seen so far, ML-native systems perceive, i.e. "feel", the world differently than humans do at the most basic physical level. This divergence is only going to grow over time, and eventually the species-level gap will be huge. Therefore, there's a big chance that an AGI system will not understand humans at all, along the lines of Nagel's "What Is It Like to Be a Bat?" We don't kill bats just because we fail to understand them, but we have no way to guarantee the same restraint from an AGI. Yet. Hopefully.

Finally, the objection avva references in response to Yudkowsky's concerns is intellectually inept, which only highlights the depth of the problem.