(no subject)
May. 30th, 2023 07:29 pm
Twitter is the largest individual value destruction since the dotcom bust.
Twitter is now worth just one-third of what Elon Musk paid for the social-media platform
https://finance.yahoo.com/news/twitter-now-worth-just-33-185709652.html
On the other hand,
The latest markdown erases about $850 million from Musk’s $187 billion fortune, according to the index. Despite Twitter’s issues, Musk’s wealth is up more than $48 billion this year, largely due to a 63% surge in Tesla Inc.’s share price.
Buying Twitter with somebody else's money makes perfect sense from the risk management perspective.
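As a back-of-the-envelope check on the headline, assuming the widely reported purchase price of roughly $44 billion (a figure not stated in this post):

% rough arithmetic behind "worth just one-third of what Musk paid"
% the $44B purchase price is an assumption from press coverage, not from the post
\[
V_{\text{now}} \approx \tfrac{1}{3} \times \$44\,\text{B} \approx \$14.7\,\text{B},
\qquad
\Delta V \approx \$44\,\text{B} - \$14.7\,\text{B} \approx \$29\,\text{B}.
\]

That is roughly $29 billion of paper value erased, which is the figure behind calling this the largest individual value destruction since the dotcom bust.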
(no subject)
Jun. 9th, 2022 03:29 pm
wrt https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
via https://avva.dreamwidth.org/3291499.html
Actually, I'm on the Yudkowsky side of this debate from a purely pragmatic risk management perspective – methods to evaluate AGI-related risks are practically non-existent. It's a problem.
In addition, from what I've seen so far, at the most basic physical level ML-native systems perceive, i.e. "feel", the world differently from humans. This divergence is only going to grow over time and, eventually, the species-level gap is going to be huge. Therefore, there's a good chance that an AGI system will not understand humans at all, along the lines of Nagel's "What Is It Like to Be a Bat?". We don't kill bats just because we don't understand them, but we have no way to guarantee the same restraint from an AGI. Yet. Hopefully.
Finally, the objection avva references in response to Yudkowsky's concerns is intellectually inept, which only highlights the depth of the problem.
(no subject)
Dec. 22nd, 2020 09:33 pm
For neither is a man to be blamed for shunning death, if he does not cling to life disgracefully, nor to be praised for boldly meeting death, if he does this with contempt of life. For this reason Homer always brings his boldest and most valiant heroes into battle well armed and equipped; and the Greek lawgivers punish him who casts away his shield, not him who throws down his sword or spear, thus teaching that his own defence from harm, rather than the infliction of harm upon the enemy, should be every man's first care, and particularly if he governs a city or commands an army.
--- Plutarch, The Lives. Pelopidas: 1.
(no subject)
Jul. 19th, 2018 03:02 pm
https://avva.livejournal.com/3132605.html
re: the gap b/w sw professionals and users.
http://spw18.langsec.org/papers/mike-walker-keynote-langsec2018.txt