wrt https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

via https://avva.dreamwidth.org/3291499.html

Actually, I'm on Yudkowsky's side of this debate, from a purely pragmatic risk-management perspective: methods for evaluating AGI-related risks are practically non-existent. That is a problem.

In addition, from what I've seen so far, at the most basic physical level ML-native systems perceive, i.e. "feel", the world differently from humans. This divergence is only going to grow over time, and eventually the species-level gap will be huge. Therefore, there's a good chance that an AGI system will not understand humans at all, along the lines of Nagel's "What Is It Like to Be a Bat?". We don't kill bats just because we don't understand them, but we have no way to guarantee the same restraint from an AGI. Yet. Hopefully.

Finally, the objection avva references in response to Yudkowsky's concerns is intellectually inept, which only highlights the depth of the problem.
