<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dw="https://www.dreamwidth.org">
  <id>tag:dreamwidth.org,2016-12-25:2614584</id>
  <title>timelets</title>
  <subtitle>timelets</subtitle>
  <author>
    <name>timelets</name>
  </author>
  <link rel="alternate" type="text/html" href="https://timelets.dreamwidth.org/"/>
  <link rel="self" type="text/xml" href="https://timelets.dreamwidth.org/data/atom"/>
  <updated>2026-02-11T23:21:32Z</updated>
  <dw:journal username="timelets" type="personal"/>
  <entry>
    <id>tag:dreamwidth.org,2016-12-25:2614584:1668392</id>
    <link rel="alternate" type="text/html" href="https://timelets.dreamwidth.org/1668392.html"/>
    <link rel="self" type="text/xml" href="https://timelets.dreamwidth.org/data/atom/?itemid=1668392"/>
    <title>timelets @ 2026-02-11T15:15:00</title>
    <published>2026-02-11T23:21:32Z</published>
    <updated>2026-02-11T23:21:32Z</updated>
    <category term="atlantic"/>
    <category term="future"/>
    <category term="data"/>
    <category term="prediction"/>
    <category term="technology"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">&lt;blockquote&gt; Three times a year, the forecasting platform Metaculus hosts a tournament that is known to have especially difficult questions. It generally attracts the more serious forecasters, Ben Shindel, a materials scientist who ranked third among participants in a recent competition, told me. Last year, at its Summer Cup, a London-based start-up called Mantic entered an AI prediction engine.&lt;br /&gt;&lt;br /&gt;A few months later, the guesses from Mantic’s prediction engine and the other tournament participants were scored against the real-life outcomes and one another. The AI placed eighth out of more than 500 entrants, a new record for a bot.&lt;br /&gt;&lt;br /&gt;Mantic’s prediction engine combines a bunch of LLMs and assigns each one different tasks. One might serve as an expert on a database of election results. Another might be asked to scan weather data, economic outcomes, or box-office receipts, depending on the question that it’s attacking. The models work together as a team to generate a final prediction.&lt;br /&gt;&lt;br /&gt;On Metaculus, a group of forecasters has taken to estimating when AIs will have the chops to out-predict an elite team of humans. Last January, they said there was about a 75 percent chance this would happen by 2030. Now they think it’s more like 95 percent.&lt;br /&gt;&lt;br /&gt;&lt;a href="https://www.theatlantic.com/technology/2026/02/ai-prediction-human-forecasters/685955/"&gt;https://www.theatlantic.com/technology/2026/02/ai-prediction-human-forecasters/685955/&lt;/a&gt;&lt;br /&gt;&lt;/blockquote&gt;&lt;br /&gt;The feedback cycle is long, but the approach seems to be working.&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=timelets&amp;ditemid=1668392" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
</feed>
