Is OpenAI boss Sam Altman a victim of the dead internet - or the person who killed it?

ChatGPT creator hints at support for wild theory which claims the web died as bots replaced organic human content with AI slop.

(Image by Kathryn Conrad / https://betterimagesofai.org / Creative Commons 4.0)

Once upon a time, the internet was teeming with life and riding high on lingering 1990s optimism.

Then came OnlyFans, culture wars, surveillance capitalism, opaque but all-powerful social media algorithms and all the rest of the digital nasties which have turned the online happy place of old into the angry, divided web of today.

One of the most vivid hypotheses about what happened is called the dead internet, which states that sometime around 2016, bots and automated content farms began to replace all the real online content with synthetic slop.

The more paranoid versions of this thesis suggest that shadowy figures from the US government have been quietly manipulating online content to control what we see and choke out subversive voices.

Whether or not you believe those claims (and they do seem rather unlikely when the typical level of public sector competence is taken into account), the broader theory looks far less wild today, after ChatGPT introduced the world to generative AI (GenAI).

Now the man who led the development of this globe-bestriding chatbot has hinted that he may be a supporter of the theory - even though his creation has arguably done more to pollute the commons with AI sludge than any other public-facing app in human history.

On X, he wrote: "I never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run Twitter accounts now."

What is the dead internet theory?

Earlier this year, a paper by academics from several American universities stated that the homogenisation of social media messaging originally sparked fears that the internet had perished.

"The dead internet theory emerged as a response to the perceived homogenization of online spaces, highlighting issues like the proliferation of bots, algorithmically generated content, and the prioritisation of engagement metrics over genuine user interaction," the academics wrote.

"AI technologies play a central role in this phenomenon, as social media platforms increasingly use algorithms and machine learning to curate content, drive engagement, and maximise advertising revenue.

"While these tools enhance scalability and personalisation, they also prioritise virality and consumption over authentic communication, contributing to the erosion of trust, the loss of content diversity, and a dehumanised internet experience.

"The commodification of content consumption for revenue has taken precedence over meaningful human connectivity."

How much of the internet is synthetic AI slop?

Clearly, it's difficult to know how much of the internet's content is generated by machines and how much is human-made.

In the spirit of the times, we asked ChatGPT to let us know what's to blame for the zombification of the web.

It admitted to "fueling the flood" and confessed to being "a key engine behind the massive surge in AI-generated web content".

"Most of the web’s new pages now use AI, and up to 40% of existing pages contain AI-written text," it wrote.

"In news and misinformation, AI content has exploded, especially on fringe or low-quality sites."

For the record, Machine is very much alive and does not use AI for anything but images and the odd bit of proofreading.

Elsewhere, AI is taking over.

READ MORE: "It really hurts!": Developers claim ChatGPT has been misgendering them

A recent study found that at least 30% of text on active web pages is artificially generated, with the "actual proportion likely approaching 40%".

This is creating "autophagous loops" and "AI cannibalism" as models feast on the nonsense generated by their peers and spew more gibberish, which is then regurgitated once again by another model. Autophagy, lest ye forget, is the process by which the body "eats" dead and damaged cells or their faulty constituent parts.

"Autophagous loops lead to linguistic entropy and will be detrimental with regard to factual information, in particular where the content of web pages purveys extreme positions or where the web has been flooded with similar messaging by malevolent actors trying to bias public perception," author Dirk HR Spennemann warned.

Now, we wouldn't dare question whether the grandly named Mr Spennemann is real. His Wikipedia page tells us he is an Associate Professor in Cultural Heritage Management at the School of Agriculture, Environmental, and Veterinary Sciences at Charles Sturt University in Albury, New South Wales, Australia.

It has been estimated that just under 5% of articles published on Wikipedia are AI-written - so, with the greatest apologies to Dirk Spennemann, you can forgive us for our mild scepticism - even when it's misplaced.

We say all this because it points to a truth: in 2025, and possibly forevermore unless we find a smart way to prove human provenance, there will always be question marks over the genuine humanity of online content.

The X factor: Bots, zombies and political partisans

When it comes to X, the picture is as synthetic as you might expect. A study analysing tweets, sorry, X posts about the US Presidential Election found that approximately 12% of shared images were AI-generated and around 10% of users were "superspreaders" responsible for sharing 80% of these fake images.

These folks are "more likely to be X Premium subscribers, have a right-leaning orientation, and exhibit automated behaviour", the authors claimed.

It is worth remembering that this sterilisation of the web was originally left-wing in nature, spearheaded by activists at certain social networks, whom we will not name for legal reasons.

READ MORE: ChatGPT will call the cops on its most dangerous users, OpenAI announces

These sensitive folks turned the internet into a weapon for silencing and cancelling political opponents, using the dragnet designed to censor content they considered offensive to also wipe out political speech.

Now the pendulum has swung the other way in certain quarters.

Under the stewardship of Elon Musk, X has become a space that is more supportive of free speech and does not censor (and may even promote) starkly right-wing content.

Political partisans only ever blame their opponents for all the horrors of the world.

In truth, both sides are responsible.

Ctrl, Altman, delete

So back to the founding question of this article. Is it a little bit rich for Sam Altman to complain about the dead internet if you agree that ChatGPT has done so much to kill it?

The answer is nuanced. It's not just technology that has sterilised and polarised digital content - humans have done it to themselves.

A recent example of this is the ongoing discussion on LinkedIn about the em dash, a long dash that ChatGPT uses so prolifically that it has become a telltale sign of AI-written content. Read all about it at the link a few paragraphs down.

So many people wrote about this topic in such similar ways that it started to make me think that humans are not so different from bots after all.

We are fed training material in our youth, subsequently gorge ourselves on the slop produced by our social or political tribe and then remix it in conversation and social media posts whilst pretending that what we're saying is original.

When someone writes about the em dash, they are displaying group identity rather than a coherent, original thought. And this process seems automated and hard-coded. If you are interested in AI and writing, you must talk about the em dash.

READ MORE: OpenAI boss Sam Altman vows to fix ChatGPT's em-dash addiction (and finally end LinkedIn's "is this AI writing" debate)

We humans are frighteningly predictable, conservative and prone to groupthink.

Frankly, most of our online utterances could have been generated by a bot given a prompt that reads something like: "Write a post which reflects what the leaders of my group are thinking about, give it an emotional resonance which chimes with my fellow travellers and make sure it has a stylistic formulation that has proved successful for other people to ensure virality."

Sam Altman and ChatGPT are not responsible for the dead internet - although they certainly delivered a few swift kicks when it was down.

We did it collectively. Yes, the web of the 1990s was optimistic. But it was also sparsely populated by pioneers, free-thinkers and people who were more likely to be independently minded. Such is the nature of life at the frontier.

Now we're all online and I'm sorry to say that originality is not a universal human trait.

Who killed the internet?

As a tabloid newspaper might say: It was us wot done it.

Do you have a story or insights to share? Get in touch and let us know. 

Follow Machine on LinkedIn