timelets: (Default)
“On January 28, 1996, Ren Zhengfei held Huawei’s first “mass-resignation ceremony.” Each head of a regional sales office was told to prepare two reports: a work summary and a written resignation. “I will only sign one of the reports,” Ren said.

Huawei had started out in rural markets, and many of its early sales managers were provincial in their experience and network of contacts. As Ren sought to go national and international, he decided to make the entire sales staff resign and reapply for their jobs. “The mountain goat must outrun the lion to not be eaten,” he had told them ahead of the event. “All departments and sections must optimize and eat the lazy goats, the goats that do not learn or progress, and the goats with no sense of responsibility.”

They were following the strategy that Mao had used to win the Chinese Civil War of “encircling the cities with the countryside.”[9] They’d won over villages and towns in the beginning, building their strength to take on the big cities.

Ren told his followers that demotions built character and that the demoted would only be stronger when they worked their way up again.... “Even Deng Xiaoping could go down and up three times. Why can’t you go down and up three times?”

-- Eva Dou. “House of Huawei.”
timelets: (Default)
There's a growing understanding in the field that producing human-like text does not imply human-like cognitive processes. Using traditional terms like "Artificial Intelligence," "Neural Networks," etc. obscures that fact. (I wish I could come up with a new term.) We are developing, and learning how to co-exist with, new kinds of learning entities, a process that rhymes with biology but is fundamentally different from it in its underlying substrate (what Deleuze would call a "rhizome").
Mossing and others, both at OpenAI and at rival firms including Anthropic and Google DeepMind, are ... studying them [LLMs] as if they were doing biology or neuroscience on vast living creatures—city-size xenomorphs that have appeared in our midst.

Anthropic and others have developed tools to let them trace certain paths that activations follow, revealing mechanisms and pathways inside a model much as a brain scan can reveal patterns of activity inside a brain. Such an approach to studying the internal workings of a model is known as mechanistic interpretability. “This is very much a biological type of analysis,” says Batson. “It’s not like math or physics.”

Anthropic invented a way to make large language models easier to understand by building a special second model (using a type of neural network called a sparse autoencoder) that works in a more transparent way than normal LLMs. This second model is then trained to mimic the behavior of the model the researchers want to study.
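The sparse-autoencoder trick can be sketched in a few lines. The fragment below is a minimal, untrained illustration of the idea only, not Anthropic's actual setup: activations from some layer are projected into a much wider feature space, a ReLU zeroes out most features (the sparsity that makes them candidates for human-readable concepts), and a decoder reconstructs the original activations. All dimensions and weights here are made up; in a real SAE the weights are learned, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one layer's activations: 256 samples of a 64-dim stream.
d_model, d_feat, n = 64, 512, 256
acts = rng.normal(size=(n, d_model))

# Untrained encoder/decoder weights; a real sparse autoencoder learns these.
W_enc = rng.normal(scale=0.1, size=(d_model, d_feat))
W_dec = rng.normal(scale=0.1, size=(d_feat, d_model))
b_enc = np.zeros(d_feat)

def encode(x):
    # ReLU forces most of the 512 features to exactly zero: a sparse code
    # whose few active features are easier to inspect than raw activations.
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(z):
    # Reconstruct the original activations from the sparse feature code.
    return z @ W_dec

z = encode(acts)
recon = decode(z)
sparsity = (z == 0.0).mean()  # fraction of inactive features (~0.5 even untrained)
```

Training would add a loss along the lines of `((recon - acts) ** 2).mean() + l1 * np.abs(z).mean()`, trading reconstruction fidelity against sparsity.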

Creating a model that behaves in predictable ways in specific scenarios requires making assumptions about what the inner state of that model might be in those scenarios. But that only works if large language models have something analogous to the mental coherence that most people do.

And that might not be the case.

...
Another possible solution ... Instead of relying on imperfect techniques for insight into what they’re doing, why not build an LLM that’s easier to understand in the first place?

https://www.technologyreview.com/2026/01/12/1129782/ai-large-language-models-biology-alien-autopsy/


The biological-complexity issue is tricky because we don't want to confuse complexity of structure with complexity of behavior. For example, my dog is an extremely complex biological system, but training her to sit is no big deal. Yet as we crank up the complexity of the behavior, our ability to understand and predict outcomes drops dramatically.
timelets: (Default)
Meduza offers a detailed analysis of the prospects for oil production in Venezuela. I'll leave the conclusion here so I can come back to it in a year:

Chevron resumed operations in Venezuela at the end of 2022, and thanks to its projects output has indeed grown since then, though not by much. More realistic estimates assume annual production growth of 200,000–250,000 barrels per day over four to five years, with investments of $10 billion or more per year.

All of this mainly concerns conventional oil. The prospects for expanding extra-heavy oil production at a Brent price of $50–55 per barrel are murky at best. For such projects to pay off, prices would need to be $20–30 higher.
...
A revival of Venezuela's oil industry promises no instant windfall profits, which leaves American oil companies facing a difficult choice. Even without Venezuela they have a pipeline of projects already lined up. Their resources (technical, human, financial) are committed years in advance.

Switching to Venezuela in this situation means forgoing other investments, and doing so in an environment of low oil prices, when companies' appetite for risk is reduced and the urge to maintain financial discipline is strong.


Also:

Mr. Trump seems to view the oil proceeds as a personal executive account, beyond the control of Congress’s purse-strings, to dole out as he sees fit.

It elides U.S. national security interests with Mr. Trump’s personal power, much like his gambit to let TikTok keep operating in the U.S. in violation of the law on the condition that the Treasury get a cut of its eventual sale to American investors. “To the victor go the oil spoils” won’t improve U.S. standing in the world, and it sends a bad message to the world’s rogues about the way to buy U.S. support.
...
U.S. companies are unlikely to invest in Venezuela until a political transition creates stability and respects property rights. Oil investment is measured in decades, not the next three years. Merely maintaining current output will require billions of dollars a year, and a hundred billion or more to return production to the levels of the early 2010s.

U.S. companies have other attractive opportunities that carry less political risk, such as offshore Guyana where ExxonMobil and Chevron are making big investments. Harold Hamm’s Continental Resources is expanding in Argentina’s Vaca Muerta shale formation. Don’t forget America’s own resources, including in the Gulf of Mexico and Alaska. Mr. Trump may be able to coerce Chevron to invest more in Venezuela, or dangle subsidies as he has suggested, but this is dubious industrial policy.
https://www.wsj.com/opinion/donald-trump-venezuela-oil-tankers-nicolas-maduro-67c4775c
timelets: (Default)
Finally, the notion that AI is not about "intelligence" is starting to percolate into the mainstream. Almost 10 years ago I argued that point, but at the time it fell on deaf ears. Today, technology people still have a weird fascination with the idea that they could build "better" human minds or "improved" biological organisms using novel hardware and software. It's part of a greater psychological bias that prevents us from seeing the new, but with AI this mindset is particularly limiting.
...influential group of social and cognitive scientists say can help us better understand artificial intelligence. Today’s AI models are not, in their view, akin to a human mind. Rather, they’re a form of “cultural or social” technology that aggregates and passes on human knowledge — more like a printing press or even a bureaucracy or a market. If we want to understand how to manage AI, they say, we should study how we’ve handled new social technologies in the past.

Last year, Science published a version of this argument by Henry Farrell (a political scientist), Alison Gopnik (a psychologist), Cosma Shalizi (a statistician) and James Evans (a sociologist). “Beginning with language itself, human beings have had distinctive capacities to learn from the experiences of other humans and these capacities are arguably the secret of human evolutionary success,” the authors write. They go on to identify key ideas — from print to television to representative democracy — that transformed the nature of social learning by changing how societies process information.

The Science authors think we should view large language models along these lines — not as intelligence, but as a new form of cultural communication.

https://www.bloomberg.com/news/articles/2026-01-09/what-s-the-best-way-to-think-of-ai-look-to-democracy-marketplaces
timelets: (Default)
Predictions for AI trends in 2026 by the IBM Tech channel

https://youtu.be/zt0JA5rxdfM?si=y1J28AmKk_WAFk75

We'll have to come back to it in 10-11 months.
timelets: (Default)
A new crop of AI coaches promise the kind of personalization, expertise and encouragement that would come from a personal trainer, without the high price tag. I was intrigued. Could a robot fix my very human issues?
...
My ideal digital fitness platform would combine all of the above: Fitbit’s personalization and flexibility, Peloton’s accountability and Apple’s motivation. That perfect mix doesn’t exist yet, but workout apps have only just begun to embrace AI.

For now, the new Peloton AI is the best bot-powered system of the bunch, if you can stomach the economics.

https://www.wsj.com/tech/personal-tech/ai-fitness-coach-1ca345ec

This technology development has far greater implications for the future than Maduro's capture, especially long-term; nevertheless, people will pay far more attention to images of a wannabe dictator of a very powerful country directing a successful operation against a real dictator of an insignificant one. Why this stupidity? Why do people in power, or celebrities in general, attract more attention than the powerful background processes of change?

TIL

Jan. 1st, 2026 04:12 pm
timelets: (Default)
Sergey Brin spends a significant amount of time chatting with Gemini.


timelets: (Default)
“Books also had a “shelf life.” In a seventeenth-century bookseller’s shop, they could wait patiently for readers to come and purchase them. But staged plays were big events that happened at set times. They required an immense investment of both funds and labor: a paid company of actors and a theater, which must be built, purchased, or rented. They also needed to bring in the broadest cross-section of society if they were going to meet expenses. This difference in the technology and marketing of these two narrative media has only grown with time. ”

-- Abbott, H. Porter. “The Cambridge Introduction to Narrative (Cambridge Introductions to Literature).”


I wonder whether AI could close this gap.

p.s. a little bit later:
“...once revealed, the action of the story of the murder of Councilman Stubbs can be described in terms of a linear chain: A->B->C->D (where D is the Death of Stubbs).
...
Characters are, usually, harder to understand than actions. They are themselves some of narrative’s most challenging gaps.
...
..we have to move from a horizontal to a vertical analysis, descending into the character to construct a plausible sense of her complexity.”

“The model, then, for the construction of character in fictional narrative might look something like this:

reader/viewer + narrative -> reader/viewer’s construction of a character



In this view, a narrative can be represented as a product of Action and Character.
timelets: (Default)
“the way we usually behave when we interpret: that is, we usually assume that a narrative, like a sentence, comes from someone bent on communicating. The novelist Paul Auster put it simply: “In a work of fiction, one assumes there is a conscious mind behind the words on the page.”

--- Abbott, H. Porter. “The Cambridge Introduction to Narrative (Cambridge Introductions to Literature).”

>> This assumption is no longer true. Copernicus showed that humans are not at the center of the universe. Darwin showed that humans are not at the center of the biological world. LLMs show that humans are not at the center of the world of communication.

timelets: (Default)
“powerful narratives, don’t tell us what to think but cause us to think. Narrative as such, to borrow a line from I. A. Richards, is a “machine to think with.”

-- Abbott, H. Porter. “The Cambridge Introduction to Narrative (Cambridge Introductions to Literature).”
timelets: (Default)
Elon Musk’s artificial intelligence startup xAI is burning through $1 billion a month as the cost of building its advanced AI models piles up.

https://www.bloomberg.com/news/articles/2025-06-17/musk-s-xai-burning-through-1-billion-a-month-as-costs-pile-up
timelets: (Default)
The princes of the Catholic Church listened intently as Pope Leo XIV laid out his priorities for the first time, revealing that he had chosen his papal name because of the tech revolution. As he explained, his namesake Leo XIII stood up for the rights of factory workers during the Gilded Age, when industrial robber barons presided over rapid change and extreme inequality.

“Today, the church offers its trove of social teaching to respond to another industrial revolution and to innovations in the field of artificial intelligence that pose challenges to human dignity, justice and labor,” Leo XIV told the College of Cardinals, who stood and cheered for their new pontiff and his unlikely cause.

https://www.wsj.com/tech/ai/pope-leo-ai-tech-771cca48


This is quite unexpected, although it seems logical in the context of Harari's Nexus.
timelets: (Default)
I find it really productive to think about different types of AI/ML using one of Whitehead's approaches. For example, he describes three elements of any event: physical prehension (objective data), cognitive prehension (eternal possibilities), decision. Implicitly, when we talk about an agent, the fourth element is realization, i.e. action on the decision. (upd. he also has _subjective aim_, which is the target of the decision).

Each of these elements carries modes of interaction that are fundamentally different for humans and ML. First, physical prehension, i.e., awareness of the world, involves radically different sensory channels. While human awareness relies on analog, biological, evolutionarily fixed senses, ML can access digital, non-biological signals, and its awareness can be retrained on new data-gathering methods. Moreover, various types of ML can be trained and retrained on new senses, just as we train dogs for tracking. The possibilities for creating new types of world-awareness are mind-boggling.
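For my own bookkeeping, these elements map onto a toy agent loop surprisingly well. The sketch below is purely illustrative; the names are my labels for Whitehead's terms, not anything from his text or from an ML framework:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Event:
    physical_prehension: Any       # objective data taken in from the world
    cognitive_prehension: list     # eternal possibilities entertained
    subjective_aim: str            # the target that guides the decision
    decision: Any = None           # the selection among possibilities
    realization: Any = None        # the agent's action on the decision

def step(sense: Callable, imagine: Callable, choose: Callable,
         act: Callable, aim: str) -> Event:
    data = sense()                      # physical prehension
    options = imagine(data)             # cognitive prehension
    ev = Event(data, options, aim)
    ev.decision = choose(options, aim)  # decision, guided by the subjective aim
    ev.realization = act(ev.decision)   # realization: acting on the decision
    return ev

# Toy usage: sense a number, entertain three transforms, aim to maximize.
ev = step(
    sense=lambda: 3,
    imagine=lambda x: [x + 1, x * 2, x ** 2],
    choose=lambda opts, aim: max(opts) if aim == "maximize" else min(opts),
    act=lambda choice: f"emit {choice}",
    aim="maximize",
)
# ev.decision == 9, ev.realization == "emit 9"
```

The point of the exercise: each slot (sensing, imagining, choosing, acting) can be swapped out independently, which is exactly where humans and ML diverge.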

Etc, etc, etc.

Also, Whitehead's theology is highly applicable to this subject, but I'd need more time to study and think about it.

upd: transparency of prehension would be a good topic on which to "compare and contrast"
timelets: (Default)
Unfortunately, the toad-versus-viper fight between T and M turned out to be very short: M caved shamefully. Once again we see that no matter how brilliant a scientist, businessman, or entrepreneur may be, going against state power is very hard, even with enormous money. And Musk is as far from Sakharov as from Jupiter.
timelets: (Default)
Most discussions about AI, especially AGI, suffer from what Whitehead would call the Fallacy of Misplaced Concreteness. That is, people assume that Intelligence is something concrete existing in Nature that can be easily pointed to and described. Instead, we have a broad range of definitions covering various bundles of human and/or computer capabilities.
By contrast, discussions about industrial robots, including drones and autonomous cars, are usually much more productive because their roles are well specified in terms of tasks and accomplishments.

timelets: (Default)
Thanks to the book I'm reading now, I've discovered the true identity of Elon Musk. He's an AI bot that, like Tay, was trained on Twitter. Tay was shut down by Microsoft when it became clear that it had developed psychopathic, dictatorial tendencies. By contrast, Elon bought Twitter so that nobody could shut him down:

“stories of Tay,2 the Twitter bot that Microsoft created and swiftly shut down after it turned into a Hitler-loving, non-consensual-sex-promoting bot. Tay was modelled to speak ‘like a teen girl’. The bot began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut down the service only sixteen hours after its launch. According to Microsoft, this was caused by trolls – people who deliberately start quarrels or upset others on the internet – who ‘attacked’ the service as the bot was making its replies based on its interactions with people on Twitter.”

Excerpt From: Mo Gawdat. “Scary Smart.”


p.s. this is not a joke
timelets: (Default)
However, this would require shifting incentives away from maximizing engagement and toward epistemic responsibility, which is difficult given current business models.

====
It would be an interesting challenge to come up with a technology and a business model that solve the problem.

Also related https://youtu.be/qlPHGnChhI4?si=03mDoaAYAFJnEfCE&t=4004

truth conditions (theoretical intentionality) vs satisfaction conditions (practical intentionality)

Page generated Jan. 19th, 2026 08:46 pm
Powered by Dreamwidth Studios