OpenAI rumours go nuclear: Trump, p(doom) and the truth about superintelligence
Is the world about to end? Are billions of people going to be fired? Or is OpenAI just planning to release a slightly better LLM?
OpenAI could be sitting on one of the biggest news stories in human history. Unless it isn't.
As America and the world brace for the inauguration of President Donald Trump, the rumour mill around OpenAI's progress towards developing artificial general intelligence (AGI) has shifted into overdrive.
If OpenAI really is in the process of birthing an artificial superintelligence (ASI) or an AGI, as it keeps hinting, then we're on the verge of one of the most dramatic moments not just in the history of our species, but in the story of evolution itself: the arrival of a synthetic entity created by a natural lifeform (us).
Stoked by "leaks" from anonymous X accounts and whipped up by a constant barrage of highly suggestive, teasing social media posts from OpenAI staff, wild claims are now spreading that the world may have already witnessed the birth of a superintelligence behind closed doors.
Is that a good thing? It depends on who you ask. Doomers are going to doom: to them, AGI poses an existential risk to humanity and raises p(doom), the probability of an AI-driven catastrophe. Yet to growing numbers of AI evangelists, it also represents a saviour.
So what's the truth?
Has OpenAI achieved AGI or superintelligence?
Uncharacteristically, Sam Altman stepped forward to manage expectations and throw cold water on the rumours.
"Twitter hype is out of control again," he wrote on X. "We are not gonna deploy AGI next month, nor have we built it. We have some very cool stuff for you but pls chill and cut your expectations 100x!"
Although the mania around OpenAI and AGI has been building for some time, it began to bubble over after the New York Times reported that Altman would begin a "charm offensive" on January 30 in Washington and "discuss the future of AI development with lawmakers, economists and Trump administration officials and demonstrate new OpenAI technology that he believes will show the economic power of AI".
Axios claimed that this meeting could involve the discussion of PhD-level AI agents, reporting that "OpenAI staff have been telling friends they are both jazzed and spooked by recent progress".
Sam Altman: The AI industry's greatest hype man?
The current furore has also been fuelled by a stream of hyperbolic statements from OpenAI staff, ranging from Altman himself all the way down to researchers working on the front line of the revolution.
Altman has said he believes a fast AI takeoff is now more likely than a slow march to AGI. A Reddit account under Altman's name even reportedly claimed AGI had been achieved internally, before amending the statement to clarify that he was "memeing".
The OpenAI CEO has also predicted that AGI will arrive during Trump's time in office and that AI agents will enter the workforce in 2025.
Meanwhile, OpenAI staff have been writing X posts that add yet more fuel to the fire.
At the beginning of January, OpenAI agent safety researcher Stephen McAleer (@McaleerStephen) wrote: "I kinda miss doing AI research back when we didn't know how to create superintelligence."
Prominent AI commentators are distancing themselves from claims that AGI is imminent, whilst criticising Altman and his company for their own role in stoking the hype.
The popular X account Chubby (@kimmonismus) wrote: "Nobody I know assumed that it would be deployed next week. However, it is quite surprising that numerous OpenAI employees have repeatedly talked about “superintelligence” (“enslaved god”) in the last few weeks and the CEO himself writes in blog posts that the path for AGI is clear and ASI is within reach. In any case, one should not be surprised if a hype breaks out."
Redefining AGI financially, not philosophically
What's the best way to solve a problem? Redefine it.
Which is reportedly what OpenAI has done: the company now defines AGI as a model capable of generating $100 billion in profits, a radical departure from its previous five-level framework:
- Level 1: Chatbots – AI systems capable of conversational language and basic interactions.
- Level 2: Reasoners – AI with human-level problem-solving abilities.
- Level 3: Agents – Systems that can independently take actions and execute tasks.
- Level 4: Innovators – AI designed to assist in invention and creative processes.
- Level 5: Organisations – AI capable of performing the functions of an entire organisation.
So are we any closer to AGI or does an AI winter await?
Only time will tell. Follow Machine at the links below to stay up to date and check back later for updates to this story.
Have you got a story or insights to share? Get in touch and let us know.