Are we less than five years away from a potentially transformative technology on the scale of the internet? Prediction markets, platforms where users trade real money on the likelihood of real-world events, think so, and many experts agree.
A market on Kalshi, a federally regulated exchange in the U.S., sees around a two in three chance that OpenAI will announce it has achieved artificial general intelligence (AGI) by 2030. Meanwhile, Polymarket, a blockchain-based platform not currently regulated in the U.S., gives it a one in five chance of happening this year.
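Those "chances" come straight from contract prices. On a typical binary prediction market, a "yes" contract pays out a fixed amount (on Kalshi, $1.00) if the event occurs, so the trading price doubles as the crowd's implied probability. A minimal sketch, ignoring fees and the bid–ask spread:

```python
def implied_probability(price_cents: float, payout_cents: float = 100.0) -> float:
    """Convert a binary contract's price to the market's implied probability.

    Assumes a standard binary contract paying a fixed amount (e.g. 100 cents
    on Kalshi) if the event occurs; fees and bid-ask spread are ignored.
    """
    return price_cents / payout_cents

# A "yes" contract trading at 66 cents implies roughly a two in three chance.
print(implied_probability(66))  # 0.66
# One trading at 20 cents implies a one in five chance.
print(implied_probability(20))  # 0.2
```

This is why forecasts shift with trading: as buyers bid the contract price up or down, the implied probability moves with it.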
OpenAI is just one competitor in the race, which also includes Meta, Google, Anthropic, and DeepSeek. But these forecasts should not be seen as prophecy: definitions of AGI vary, and skeptics argue that human-like reasoning in machines remains a distant goal.
AI vs. AGI: What's the difference?
To understand the difference between AI and AGI, compare a calculator (or a spreadsheet program) to a mathematician. The machine will outdo the human in any well-defined mathematical task, from multiplying large numbers to computing standard deviations, but it can’t think creatively or critically about a more nuanced problem, nor can it decide which problems to tackle. If the task is, say, “optimize shipping routes for a national grocery chain,” the program is a helpful tool the human can use, but it can’t carry the problem from beginning to end any more than an Allen wrench can assemble your IKEA furniture.
AGI, in theory, can. It can take on nuanced, ambiguous tasks at least as well as a human while leveraging the computing power of modern machines. One critical difference is that, given a broad, ambiguous task, an AGI can decide which subtasks within the overall mission to take on and solve. It’s still a tool, but one that takes on more of the high-level cognitive work of human beings. Those tracking the progress of AI in fields like coding, medical research, and human interaction regularly report that we are getting closer to this technology.
How close? According to Anthropic CEO Dario Amodei, very.
2026? 2030? Never?
“My view, at almost every stage [of AI] up to the last few months, has been we’re in this awkward space where in a few years we could have these models that do everything humans do, and they totally turn the economy and what it means to be human upside down, or the trend could stop and all of it could sound completely silly,” Amodei said on the New York Times’ Hard Fork podcast.
“I’ve now probably increased my confidence … [to] more like 70 percent and 80 percent and less like 40 percent or 50 percent … that we’ll get a very large number of AI systems that are much smarter than humans at almost everything before the end of the decade, and my guess is 2026 or 2027.”
Anthropic has gone as far as recommending domestic and international policies to prepare for machines having “[i]ntellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines—including biology, computer science, mathematics, and engineering” among other sci-fi-sounding traits.
Predictors have also changed their tune since last fall: the market on OpenAI announcing AGI this decade has risen from the low 30s (in percentage terms) in October to around twice that today. A range of prediction markets show similar forecasts. Users on Metaculus expect AGI by mid-2030. Those on Manifold see it as roughly a coin flip that it arrives before 2029. Polymarket users are a little more optimistic than those on Kalshi about OpenAI making an AGI announcement this year.
Across the board, people with money on the line have grown more confident that we’ll see AGI in the next five years.
What changed?
An arms race
OpenAI has rolled out several new products in the last six months, and it wasn’t alone. The nonprofit-governed company put out “Operator” in January, an agent that can browse the web on its own to perform tasks. It followed that with GPT-4.5 at the end of February, the latest edition of the signature technology behind ChatGPT.
The company said the latest version should be more accurate and noted that its “broader knowledge base, improved ability to follow user intent, and greater ‘EQ’ make it useful for tasks like improving writing, programming, and solving practical problems.”
In December, Google rolled out Gemini 2.0, which it said would “bring us closer to our vision of a universal assistant.”
The next month, the Chinese company DeepSeek rattled the AI market with the release of a cost-effective AI model, and recent reports indicate that its next version could come as soon as May. For this year, predictors see OpenAI as likely to release an open-source model and view GPT-5’s release as a near-certainty.
Meta is expected to join the fray soon with the release of its standalone AI program.
What is clear is that many of the world’s biggest tech companies are in an arms race to create more impressive and useful AI programs. What’s less clear is what that will amount to. Amodei’s perspective is increasingly common, but there are dissenters such as cognitive scientist Gary Marcus, who sees AGI as decades, not years, away.
Breakthrough or bottleneck?
“I think there is almost zero chance that artificial general intelligence … will arrive in the next two to three years, especially given how disappointing GPT 4.5 turned out to be,” he wrote in a recent Substack post.
Marcus sees a raft of challenges where the technology still falls short despite enormous investment, such as working through abstractions and handling a wide range of syntax and semantics. To his point, every AI system on the market is still clunky in some ways, regularly “hallucinating” falsehoods, presenting them as true, and making bizarre errors no human being would. Casey Newton of Platformer wrote that he asked Operator to order him groceries and it immediately began placing an Instacart order … in Des Moines, Iowa (he lives in California).
Many of these quirks will get ironed out over time, but Marcus raises an even more troubling possibility for the industry: that large language models (LLMs), the architecture behind leading AI programs such as ChatGPT, are not a path that leads to AGI. If he’s right, billions of dollars of fuel are going into a vehicle headed in the wrong direction.
Then again, the achievement of AGI will ultimately be in the eye of the beholder. Or the press release. The Kalshi market mentioned above does not define AGI by a specific benchmark; it resolves on the announcement of its existence. The first declaration of AGI will undoubtedly be met with probing and skepticism, as well as similar boasts from competitors as soon as they can make them.
After all, these companies eventually want a return on the billions they are pouring into AI. What they produce will ultimately be geared toward commercial uses of the technology, whether or not it’s called AGI.
For now, AGI remains a sci-fi concept. Prediction markets think it won’t stay that way for much longer. If they’re right, we could be on the cusp of a society-transforming technology.