This AI 2027 prediction terrifies me (but we're just along for the ride)

By Dann Berg


Graph titled “Training Runs” showing the projected exponential growth in AI training compute from December 2024 to December 2027. Five progressive AI models (Agent-0 through Agent-4) are plotted on a logarithmic scale, with compute measured in FP16 FLOP: Agent-0 starts at roughly 10^26 FLOP in late 2024, each successive agent requires significantly more compute, and Agent-4 approaches 10^28 FLOP (roughly 1000x GPT-4) by the end of 2027. Reference lines for GPT-4, GPT-4.5, and 1000x GPT-4 are included for scale. Image from AI 2027.

❗ This is an excerpt from the upcoming April 2025 edition of The Dann Chronicles newsletter. Usually, newsletter content isn’t published on my website (and vice versa) but I wanted to share this post in a less ephemeral way. If content like this is interesting to you, consider subscribing to my newsletter.


I’ve been in a strange headspace since reading this new prediction about AI. I think this feeling is…terror? I feel compelled to share this with a wider audience (which is what I’m doing here) while recognizing that sharing this report likely won’t change anything. Only a handful of people have the power to influence this prediction, and the rest of us are just along for the ride.

It’s called AI 2027, and it was created by a non-profit called the A.I. Futures Project. The project is led by Daniel Kokotajlo, a researcher with a compelling track record in AI:

  • In 2021 (before ChatGPT), he published What 2026 Looks Like. Its predictions sounded crazy at the time, but from our vantage point in 2025, he was largely correct about everything
  • A year later, OpenAI hired Kokotajlo onto their policy team. It was a smart move: the company gained access to his ongoing research while limiting his ability to speculate publicly
  • In mid-2024, Kokotajlo and OpenAI had a dramatic split when he refused to sign a non-disparagement clause, a refusal that at the time put his $1.7M in vested equity at risk (the ensuing publicity later prompted OpenAI to change the policy)
  • Shortly after leaving, he founded the A.I. Futures Project with other leading AI experts, a group dedicated to forecasting the future of artificial intelligence
  • This month, they published their first release: AI 2027

The scenario is written as an extremely compelling fictional narrative (thanks to Scott Alexander of Astral Codex Ten). But it’s really the supplemental research that elevates it from a cool story to a serious, research-backed contribution to the debate.

Right now, the tech world is split into warring factions debating the future of AI while the rest of the population goes about their lives as normal. If you’re in the latter camp (dabbling with AI here and there but not following the news), you may be horrified to learn where we are today and what some of the brightest minds in the space think is right around the corner.

That’s because none of the scary predictions that gained wide attention after ChatGPT’s launch have yet been proven wrong. In fact, progress has come even faster than most of those timelines predicted.

I recognize that doomsday predictions have been around since the beginning of time—divine intervention, natural disasters, Y2K computer collapse, astronomical events, or aliens. The human brain is programmed to latch onto such stories.

It’s possible that AI 2027 is another prophecy that will fail to materialize. I certainly hope so. But unlike those other scenarios, this prediction is based on extensive research and data, which makes me concerned that this one might just be the winner.

Supplemental reading: if you get sucked down this rabbit hole like I did, you might also enjoy this New York Times (gift) article about AI 2027 and Scott Alexander’s personal takeaways from the project.