Three times a year, the forecasting platform Metaculus hosts a tournament that is known to have especially difficult questions. It generally attracts the more serious forecasters, Ben Shindel, a materials scientist who ranked third among participants in a recent competition, told me. Last year, at its Summer Cup, a London-based start-up called Mantic entered an AI prediction engine.

A few months later, the guesses from Mantic’s prediction engine and the other tournament participants were scored against the real-life outcomes and one another. The AI placed eighth out of more than 500 entrants, a new record for a bot.
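"Scored against the real-life outcomes" refers to proper scoring rules for probabilistic forecasts. Metaculus uses its own (relative) scoring system, but the classic illustration is the Brier score: the mean squared error between predicted probabilities and binary outcomes, where lower is better. A minimal sketch:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities (0..1) and
    binary outcomes (0 or 1). Lower is better; an uninformed 50/50
    forecast always scores 0.25."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must align")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, accurate forecaster beats one who always hedges at 50%:
confident = brier_score([0.9, 0.1, 0.8], [1, 0, 1])  # 0.02
hedging = brier_score([0.5, 0.5, 0.5], [1, 0, 1])    # 0.25
```

Tournament rankings come from comparing such scores across all participants on the same set of resolved questions.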

Mantic’s prediction engine combines a bunch of LLMs and assigns each one different tasks. One might serve as an expert on a database of election results. Another might be asked to scan weather data, economic outcomes, or box-office receipts, depending on the question that it’s attacking. The models work together as a team to generate a final prediction.
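The pattern described above can be sketched as a route-and-pool loop. Mantic's actual system is proprietary, so everything here, the specialist names, the keyword routing, the median pooling, is an illustrative assumption, not their implementation:

```python
from statistics import median

# Stand-ins for LLM agents, each assigned a different data source
# (hypothetical functions; real specialists would query models and databases).
def elections_specialist(question: str) -> float:
    return 0.62  # e.g., an LLM consulting an election-results database

def weather_specialist(question: str) -> float:
    return 0.55  # e.g., an LLM scanning weather data

def economics_specialist(question: str) -> float:
    return 0.70  # e.g., an LLM reading economic indicators

SPECIALISTS = {
    "election": elections_specialist,
    "weather": weather_specialist,
    "econom": economics_specialist,
}

def forecast(question: str) -> float:
    """Dispatch the question to matching specialists, then pool their
    probability estimates with the median (robust to one bad model)."""
    q = question.lower()
    estimates = [fn(question) for topic, fn in SPECIALISTS.items() if topic in q]
    if not estimates:  # no topic matched: fall back to the full team
        estimates = [fn(question) for fn in SPECIALISTS.values()]
    return median(estimates)
```

The median is one of many possible pooling rules; weighted averages calibrated on past performance are another common choice in forecasting ensembles.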

On Metaculus, a group of forecasters has taken to estimating when AIs will have the chops to out-predict an elite team of humans. Last January, they said there was about a 75 percent chance this would happen by 2030. Now they think it’s more like 95 percent.

https://www.theatlantic.com/technology/2026/02/ai-prediction-human-forecasters/685955/

The feedback cycle is long, but the approach nevertheless seems to be working.
