Jamie Dimon of JPM on AI in his business and beyond

There's a growing understanding in the field that producing human-like text does not imply human-like cognitive processes. Using traditional terms like "Artificial Intelligence", "Neural Networks", etc. obscures that fact. (I wish I could come up with a new term.) We are developing, and learning how to co-exist with, new kinds of learning entities, a process that rhymes with biology but is fundamentally different from it in its underlying substrate (what Deleuze would call a "rhizome").

Mossing and others, both at OpenAI and at rival firms including Anthropic and Google DeepMind, are ... studying them [LLMs] as if they were doing biology or neuroscience on vast living creatures—city-size xenomorphs that have appeared in our midst.

Anthropic and others have developed tools to let them trace certain paths that activations follow, revealing mechanisms and pathways inside a model much as a brain scan can reveal patterns of activity inside a brain. Such an approach to studying the internal workings of a model is known as mechanistic interpretability. “This is very much a biological type of analysis,” says Batson. “It’s not like math or physics.”

Anthropic invented a way to make large language models easier to understand by building a special second model (using a type of neural network called a sparse autoencoder) that works in a more transparent way than normal LLMs. This second model is then trained to mimic the behavior of the model the researchers want to study.
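Mechanically, the idea is simple enough to sketch. Below is a minimal PyTorch sketch of a sparse autoencoder trained on a model's activations; the layer sizes, the ReLU, and the L1 penalty weight are illustrative assumptions on my part, not Anthropic's actual recipe.

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """A transparent stand-in for a layer's activations: a wide code
        vector in which only a few units fire for any given input."""
        def __init__(self, d_model=768, d_code=8192):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_code)
            self.decoder = nn.Linear(d_code, d_model)

        def forward(self, acts):
            code = torch.relu(self.encoder(acts))  # sparse 'features'
            return self.decoder(code), code

    def sae_loss(acts, recon, code, l1_weight=1e-3):
        # Faithfulness: the stand-in must reproduce the original activations.
        mse = ((acts - recon) ** 2).mean()
        # Sparsity: push most code units toward zero so each one stays legible.
        return mse + l1_weight * code.abs().mean()

The sparsity pressure is what buys interpretability: a unit that fires only on a narrow slice of inputs is something a human can stare at and try to name.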

Creating a model that behaves in predictable ways in specific scenarios requires making assumptions about what the inner state of that model might be in those scenarios. But that only works if large language models have something analogous to the mental coherence that most people do.

And that might not be the case.

...
Another possible solution ... Instead of relying on imperfect techniques for insight into what they’re doing, why not build an LLM that’s easier to understand in the first place?

https://www.technologyreview.com/2026/01/12/1129782/ai-large-language-models-biology-alien-autopsy/


The biological complexity issue is tricky because we don't want to confuse the complexity of structure with the complexity of behavior. For example, my dog is an extremely complex biological system, but getting/training her to sit is not a big deal. But as we crank up the complexity of behavior, our ability to understand and predict outcomes goes down dramatically.

...

However, this would require shifting incentives away from maximizing engagement and toward epistemic responsibility, which is difficult given current business models.

====
It would be an interesting challenge to come up with a technology and a business model that solves the problem.

Also related https://youtu.be/qlPHGnChhI4?si=03mDoaAYAFJnEfCE&t=4004

truth conditions (theoretical intentionality) vs satisfaction conditions (practical intentionality)

It often happens, therefore, that in criticising a learned book of
applied mathematics, or a memoir, one’s whole trouble is with the first
chapter, or even with the first page. For it is there, at the very
outset, where the author will probably be found to slip in his
assumptions. Farther, the trouble is not with what the author does say,
but with what he does not say. Also it is not with what he knows he has
assumed, but with what he has unconsciously assumed. We do not doubt the
author’s honesty. It is his perspicacity which we are criticising. Each
generation criticises the unconscious assumptions made by its parents.
It may assent to them, but it brings them out in the open.

Whitehead. Science and the Modern World, 1925.

BEIJING—Microsoft is asking hundreds of employees in its China-based cloud-computing and artificial-intelligence operations to consider transferring outside the country, as tensions between Washington and Beijing mount around the critical technology.

...700 to 800 people, who are involved in machine learning and other work related to cloud computing, one of the people said.

https://www.wsj.com/tech/ai/microsoft-asks-hundreds-of-china-based-ai-staff-to-relocate-amid-u-s-china-tensions-b626ff8c

emphasized communicability:

But a second, especially crucial role of models, drawings, and computer graphics is to make explicit a relationship that you have found, enabling other people to see it as well. This often can be done just by making the relevant part a heavier line or a brighter color, or by deleting most of everything else, but it always requires explicit effort.

Richardson, Jane and Richardson, David C. (1992), ‘Looking at proteins’, Biophysical Journal, 63, pp. 1186–209.

quoted from Drawing Processes of Life: Molecules, Cells, Organisms. 2024.

...the hierarchy of praxis that Ricoeur presents consists of (1) practices, (2) life plans and (3) the narrative unity of life.

--- Wessel Reijers, Mark Coeckelbergh. Narrative and Technology Ethics.


So far, the AI community, esp. its business leaders, has failed to address these three essential narratives of praxis for people outside of the community itself. If you are a truck driver, there's nothing for you in the AI-powered future. By contrast, the web, the iPhone and even social networks accomplished the task very well.

(4) the irreproducible "initial" observation, which cannot be clearly seen in retrospect, constituting a chaos; (5) the slow and laborious revelation and awareness of "what one actually sees" or the gaining of experience; (6) that what has been revealed and concisely summarized in a scientific statement is an artificial structure, related but only genetically so, both to the original intention and to the substance of the "first" observation. The original observation need not even belong to the same class as that of the facts it led toward.
...
Direct perception of form [Gestaltsehen] requires being experienced in the relevant field of thought. The ability directly to perceive meaning, form, and self-contained unity is acquired only after much experience, perhaps with preliminary training. At the same time, of course, we lose the ability to see something that contradicts the form.

--- Ludwik Fleck, Genesis and development of a scientific fact.


Since the initial chaos is rarely, if ever, documented, human creativity may not be accessible to various machine learning methods (unless we have 24/7 brain and activity monitors). On the other hand, the process of gaining experience can be accelerated once a creative breakthrough has been achieved.

When applied to scientific practice, gestalt theory is simply wrong.

WRT gaining experience, see LRRH.

ML-assisted perception is becoming routine. In the 21st century, the technology revolution that started with X-rays is going to help humanity overcome severe limitations imposed on inference-making by human senses.

“A learning instinct, as he [Peter Marler] meant it, isn’t an indiscriminate disposition to learn anything; it is an evolved disposition to acquire a given type of knowledge, such as songs (for birds) or language (for humans). A learning instinct not only targets a specific learning goal, it also provides the instinctive learner with appropriate perceptual and inferential mechanisms to extract the right kind of knowledge from the right kind of evidence.”
...
“How do learning instincts take advantage of experience to produce mature cognitive mechanisms?”

--- Hugo Mercier and Dan Sperber. “The Enigma of Reason: A New Theory of Human Understanding.”

What makes it the case that something counts as a form of cognitive success? For instance, why think that knowing the capital of Pakistan is a cognitive success, rather than just another cognitive state that an agent can occupy, like having 70% confidence that Islamabad is the capital of Pakistan? Not every cognitive state enjoys cognitive success. Knowing, understanding, mastering—these are cognitive successes.

[e.g.] ...we describe a person as “knowing” something as a way of signaling that her testimony with respect to that thing is to be trusted.

https://plato.stanford.edu/entries/epistemology/


Machines can now be trained to generate information traditionally associated with human cognitive successes. Today, we know relatively well what kind of cognitive success is achievable for the majority of humans. We are still at the beginning of the process of building cognitively successful computers. We simply don't know what they are capable of when trained by a small number of highly successful humans.

Little Red Riding Hood can spend a long time asking the wolf "Why do you have...", but until he eats her, she won't learn anything.

TIL

Dec. 29th, 2022 01:02 pm

I've played with ChatGPT a bit and find it to be a useful writing/editing assistant. It definitely helps generate various versions of paragraphs that would've been difficult for me to produce on my own.

In the meantime, the bot has a limitation that Google or somebody's got to exploit for business.

"I'm sorry, but I am unable to provide information about events that occurred after my knowledge cutoff in 2021. As a language model, I do not have the ability to browse the internet or access current events. My responses are based solely on the information and knowledge that I possess at the time of my creation."
Social learning. Our species is the only one that voluntarily shares information: we learn a lot from our fellow humans through language.

“In our brains, by contrast, the highest-level information, which reaches our consciousness, can be explicitly stated to others. Conscious knowledge comes with verbal reportability: whenever we understand something in a sufficiently perspicuous manner, a mental formula resonates in our language of thought, and we can use the words of language to report it.”


One-trial learning. An extreme case of this efficiency is when we learn something new on a single trial. If I introduce a new verb, let’s say...

“To learn is to succeed in inserting new knowledge into an existing network.”

--- Stanislas Dehaene. “How We Learn.”


The combination of the two modes of learning creates a cascading effect.

We can model this as a change of state, e.g. learning-by-doing by a pair of individuals, wherein the first one is doing, while the second one ("the soul") is learning. That is, TLRRH starts naive and dies while going to her grandma. At the same time, we, the observers, learn from her one-time misfortune and pass this learning to future generations.
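A toy version of that state change, in Python; the agent names and the "eaten" outcome are mine, just to make the bookkeeping explicit.

    from dataclasses import dataclass, field

    @dataclass
    class Observer:
        """The 'soul' of the pair: learns from the doer's one-shot outcome."""
        lessons: set = field(default_factory=set)

        def watch(self, action: str, outcome: str):
            # One-trial social learning: a single fatal outcome is enough.
            if outcome == "eaten":
                self.lessons.add("avoid: " + action)

    village = Observer()
    village.watch(action="chat with the wolf", outcome="eaten")
    print(village.lessons)  # the lesson outlives TLRRH: {'avoid: chat with the wolf'}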

Also related: fool me once, shame on you; fool me twice, shame on me.


“And all learners benefit from focused attention, active engagement, error feedback, and a cycle of daily rehearsal and nightly consolidation—I call these factors the “four pillars” of learning, because, as we shall see, they lie at the foundation of the universal human learning algorithm present in all our brains, children and adults alike.”

--- Stanislas Dehaene. “How We Learn.”


Social networking is an extremely effective learning method for humans, although one can efficiently learn the wrong content through error reinforcement rather than correction (e.g. the Dunning-Kruger effect).

The exact sciences elude social analysis not because they are distant or separated from society, but because they revolutionize the very conception of society and of what it comprises. Pasteurism is an admirable example.

--- Bruno Latour. The Pasteurization of France, 1993. p 38.


AI seems to be moving in the same direction.

wrt https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

via https://avva.dreamwidth.org/3291499.html

Actually, I'm on the Yudkowsky side of this debate from a purely pragmatic risk management perspective – methods to evaluate AGI-related risks are practically non-existent. It's a problem.

In addition, from what I've seen so far, on the most basic physical level ML-native systems perceive, i.e. "feel", the world differently than humans. This particular divergence is only going to grow over time and, eventually, the species-level gap is going to be huge. Therefore, there's a big chance that an AGI system will not understand humans at all — along the lines of Nagel's "What Is It Like to Be a Bat?" We don't kill bats because we don't understand them, but we have no way to guarantee the same for AGI. Yet. Hopefully.

Finally, the objection avva references in response to Yudkowsky's concerns is intellectually inept, which only highlights the depth of the problem.

The most important thing I have understood from various conversations with anti-vaxxers is that the form of proof in medicine/pharmacology is fundamentally different from the standard forms of proof in mathematics, physics, biology, and computer science. As a result, educated specialists in other fields may not understand how the effectiveness of, say, a vaccine or other drugs is demonstrated. Moreover, a high level of education or experience in another field can get in the way of making sense of the situation, because the person is confident in their own knowledge of the forms of scientific proof.
