Elon Musk makes frightening AI p(doom) apocalypse prediction

The doomer-in-chief is not at all confident that our species will survive the rise of the machines.

Grok's depiction of Elon Musk fighting killer robots in an apocalyptic nightmare world

Before he was President Donald Trump's ally and collaborator, Elon Musk was mostly known for electric cars, sending rockets into space and... warning that humanity was very likely to be destroyed by artificial intelligence.

Now the billionaire has returned to his previous incarnation as the world's top doomer by issuing a pretty grim warning about the future of our species in the AI age.

Musk appeared on Senator Ted Cruz's podcast this week to discuss DOGE and the first 50 days of the Trump administration, which have seen left-wing activists burning Teslas across the US in protest against the new government. Their conversation started out positively before quickly turning to the topic on everyone's lips right now: AI doom.

Cruz asked: "If AI becomes smarter than any person, how many jobs will disappear as a result? And what will people do if millions lose their jobs in this way? A lot of people are understandably freaked out."

"Goods and services will become close to free, so it’s not as though people will be wanting," Musk replied. "You'll have - I don’t know - tens of billions of robots that will make anything or provide any service you want for basically next to nothing.

"It’s not that people will have a lower standard of living; they will actually have a much higher standard of living. The challenge will be fulfilment—how do you derive fulfilment and meaning in life?"

READ MORE: OpenAI: Deep Research could soon help to develop bioweapons and possibly nukes

Then came the inevitable question.

"Is Skynet real?" Cruz asked. "You get the apocalyptic visions of AI. How real is the prospect of killer robots annihilating humanity?"

"20% likely, maybe 10%, [in] five to 10 years", the billionaire replied before delivering the silver lining.

"You could look at it like the glass is 80–90% full - meaning there’s an 80% likelihood we will have extreme prosperity for all," he added.

Musk also predicted that America will "win" in AI development in the short term, beating China and other competitors. After that, success will "be a function of who controls the AI chip fabrication factories."

Why does Elon Musk fear AI will destroy humanity?

Elon Musk seen in a cloud of smoke during a Joe Rogan show appearance

Musk is one of the world's most prominent doomers - famously describing his attempts to warn politicians about the threat during an appearance on The Joe Rogan Experience podcast.

"I tried to convince people to slow down AI, to regulate AI, but this was futile," Musk said during the show, when he was seen puffing on what was claimed to be a massive spliff.

"I tried for years. Nobody listened."

I was the first tech reporter to reveal details of a conversation between Musk and Google co-founder Larry Page, who allegedly accused the Tesla boss of being "speciesist" because of his concerns that machines would obliterate our species. But this was far from the first time that Musk had dealt with the topic.

In 2014, during an interview at the MIT AeroAstro Centennial Symposium, Musk described AI as "our biggest existential threat," suggesting that humanity is "summoning the demon" by developing technology that's more intelligent than its creators. He also warned that without regulatory oversight, AI could evolve beyond human control, with potentially apocalyptic unforeseen consequences.

In 2017, Musk addressed the National Governors Association, reiterating his stance that AI poses a “fundamental existential risk for human civilization.” He advocated for proactive government intervention, emphasizing that waiting for adverse events before implementing regulations would be too late. (NPR)

READ MORE: Humans risk losing control of lying, cheating and power-crazed AI models

In 2023, he signed an open letter calling for a six-month pause on developing AI systems more powerful than GPT-4, citing profound risks to society.

Musk co-founded OpenAI in 2015 to promote and develop friendly AI for the benefit of humanity. However, he later criticized the organization for becoming closed-source and profit-driven, diverging from its original mission. This led to legal disputes, with Musk alleging that OpenAI’s shift in direction betrayed its foundational principles.

Musk is also reportedly a big fan of Terminator 2: Judgment Day, watching the famous AI apocalypse movie more than seven times.

In 2023, Musk launched xAI, an AI startup that aims to build a "pro-humanity" system. He said the world needed to worry about a "Terminator future" now in order to avoid the most apocalyptic AI scenarios and keep p(doom) low.

Have you got a story or insights to share? Get in touch and let us know. 

Follow Machine on X, Bluesky and LinkedIn