Humanity faces "gradual disempowerment" rather than an AI apocalypse, researchers warn
Academics predict our species won't suffer an "abrupt takeover" by superintelligence but a slow death by a thousand cuts.

"This is the way the world ends. Not with a bang but a whimper."
Those were the potentially prophetic words of T.S. Eliot in his poem The Hollow Men, which was published exactly a century ago and has an eerie resonance with the latest prediction about how AI is likely to wipe out humanity.
An international team of researchers from a number of organisations and universities in the UK, Canada and the Czech Republic has published a study which sets out a new theory about the "systemic existential risk" posed by AI.
The good news is that the academics do not think there will be an "abrupt takeover". Unfortunately, the bad news is that our species could face a long, slow process of "gradual disempowerment" at the hands of artificial superintelligence (ASI).
"AI risk scenarios usually portray a relatively sudden loss of human control, outmanoeuvring individual humans and human institutions, due to a sudden increase in AI capabilities, or a coordinated betrayal," they wrote. "However, we argue that even an incremental increase in AI capabilities, without any coordinated power-seeking, poses a substantial risk of eventual human disempowerment."
These small, distributed improvements to the power of AI could "undermine human influence over large-scale systems that society depends on, including the economy, culture, and nation-states," the researchers added.
Towards a post-human society

In the past, violent ideologies drove humans to kill and oppress one another. However, even the political systems underpinning murderous societies still depended on humans. If citizens withdrew their support, such regimes collapsed.
"But if AI were to progressively displace human involvement in these systems, then even these fundamental limits would no longer be guaranteed," the team warned.
As AI replaces human labour and cognition, it has the potential to "weaken" human control mechanisms such as voting or consumer choice, as well as societies' "alignment with human interests".
Gradually, civilisation will pivot away from functioning for the benefit of carbon-based Homo sapiens and towards serving the preferences of silicon superintelligence, the researchers forecast.
Over time, the effects of a society operated by machines will compound as AIs "aggressively" pursue outcomes that benefit machines rather than people.
This "incremental erosion of human influence" will be felt across a wide variety of interconnected domains, ranging from the economy to politics and even culture.
"We argue that this dynamic could lead to an effectively irreversible loss of human influence over crucial societal systems, precipitating an existential catastrophe through the permanent disempowerment of humanity," the academics wrote.
The academics' rather dark argument states that society has only been aligned with human interests because elites need real people to fight wars, extract resources, buy consumer goods and create the circumstances which allow the rich to live a good life.
They wrote: "Once this human participation gets displaced by more competitive machine alternatives, our institutions’ incentives for growth will be untethered from a need to ensure human flourishing."
Resistance is futile

Previous technological advancements like the Industrial Revolution changed the world of work by making humans more productive and liberating workers from dangerous, repetitive manual labour.
The same may not be true of AI, which could dramatically outperform humans across a wide variety of cognitive and physical domains.
In their paper, the researchers illustrated this point by discussing the invention of the calculator, which made it easy to perform arithmetic yet still required human input to perform meaningful tasks. AI can handle both the number-crunching of a calculator and the reasoning once exclusively performed by humans.
It may therefore become a "superior substitute for human cognition across a broad spectrum of activities".
Once one company deploys AI, others will be forced to follow to remain competitive, creating a situation echoing the "tragedy of the commons" in which individuals acting in their own self-interest overuse and deplete a shared resource, ultimately harming everyone. Corporations that do not use AI will fall behind those that do, forcing all firms to adopt AI regardless of long-term societal consequences.
The authors warned: "Decision-makers at all levels will soon face pressures to reduce human involvement across labour markets, governance structures, cultural production, and even social interactions. Those who resist these pressures will eventually be displaced by those who do not."
People may even become stuck in culture wars between rival AI systems over issues they barely understand, egged on by machines that can manipulate humans in ways that Goebbels and the many other evil propagandists of yore could only dream of.
Stopping the slow erosion of human control will be difficult, if not impossible, because the AI systems' influence will put pressure on a variety of intersecting societal systems that "bleed into" one another, sometimes literally.
The researchers wrote: "No one has a concrete plausible plan for stopping gradual human disempowerment and methods of aligning individual AI systems with their designers’ intentions are not sufficient. Because this disempowerment would be global and permanent, and because human flourishing requires substantial resources in global terms, it could plausibly lead to human extinction or similar outcomes."
They continued: "A distinctive feature of this challenge is that it may subvert our traditional mechanisms for course correction and cause types of harm we cannot easily conceptualise or even recognise in advance, potentially leaving us in a position from which it is impossible to recover."
Saving humanity from the rise of AI

So, how do you stop this nightmare from occurring? The authors set out a variety of strategies:
- Improve the functioning of democracy
- Ensure AI systems are understandable to humans
- Nominate AI delegates to advocate for human interests
- Make institutions "robust to human obsolescence"
- Invest in forecasting tools to help humans better navigate what lies ahead
- Carry out research to understand the relationship between humans and large multi-agent systems
It remains to be seen whether these interventions will be effective. But if the researchers' predictions are in any way accurate, they must be enacted soon, because timelines for AGI are shrinking rapidly.
Although predictions that superintelligence will be achieved this year seem unlikely to come true, many industry insiders have claimed it is on the horizon or possibly even imminent. OpenAI boss Sam Altman believes it will be here "sooner than most people think".
Of course, this forecast may well be intended to sell ChatGPT subscriptions rather than convey a concrete vision of the future. Only time will tell.
So please don't have nightmares...
The authors of the report are Jan Kulveit of the ACS Research Group, Charles University, Czech Republic; Raymond Douglas, Telic Research, UK; Nora Ammann, Advanced Research + Invention Agency (ARIA), UK, ACS Research Group, CTS, Charles University, Czech Republic; Deger Turan, AI Objectives Institute; David Krueger, University of Montreal, Canada; David Duvenaud, University of Toronto, Canada.
Have you got a story or insights to share? Get in touch and let us know.