I’ve got two words for you: Shrimp Jesus. If you don’t know what I’m talking about, it’s the infamous AI-generated Facebook image of Jesus fused with shrimp, along with the many variations floating around the internet. The image first surfaced in March 2024 and looked like just another meme at first glance. But Shrimp Jesus turned out to be the jumping-off point for Facebook AI art slop: newly AI-generated memes sweeping the internet, such as the Challah Horse, the 386-year-old granny baking her own birthday cake and the random wooden cars, just to name a few. You might think these are just memes, but these images have reignited discussions surrounding an old online conspiracy called the Dead Internet Theory, which began in 2021.
As someone who writes about the internet for a living, this was the first time I’d heard of this idea, and researching it led me down a bottomless rabbit hole from which I struggled to emerge. But if you frequently use TikTok, Instagram or Facebook, you might have unwittingly already seen examples online that echo this premise. So, what is the Dead Internet Theory, and how does it parallel the rise of artificial intelligence?
What is the Dead Internet Theory?
The Dead Internet Theory first emerged in 2021 on the online forums 4chan and Wizardchan. Users on these forums claimed that the internet died in 2016 and that AI bots now generate most of the content we see online. The theory also holds that AI is being used to manipulate the public as part of a much larger and more sinister agenda. These posts were pieced together into a lengthy thread published on another online forum, Agora Road’s Macintosh Cafe. Be aware that the thread is easy to find online, but I haven’t linked to it because of the obscene language in the post.
User IlluminatiPirate wrote, “The internet feels empty and devoid of people. It is also devoid of content.”
Now, years later, this conspiracy is seeing the light of day again, with TikTok creators dissecting the theory and finding examples to support it. One creator, with the username SideMoneyTom, posted a video in March 2024 showing different Facebook accounts posting variations of AI-generated images of Jesus. These images drive little genuine engagement, yet they can still easily flood your feed. Like many other online creators, SideMoneyTom echoed the same sentiment: These Facebook accounts are run by AI bots that create all of their content. To better understand this theory, it helps to know how generative AI works.
Generative AI refers to artificial intelligence systems that produce new content in the form of stories, images, videos, music and even software code. According to Monetate, “Generative AI uses machine-learning algorithms and training data to generate new, plausibly human-passing content.” Since the launch of ChatGPT in 2022, chatbots have become all the rage, with tech giants like Google, Apple and Meta creating a slew of AI tools for their products. (Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, the owner of ChatGPT, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
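If you’re curious what “feeding a prompt” to one of these systems actually looks like, here’s a minimal sketch using OpenAI’s official Python library. The model name and the prompt are just placeholders for illustration; this isn’t the setup any of the accounts behind Shrimp Jesus are known to use, and it assumes you have the openai package installed and an API key in your environment.

```python
from openai import OpenAI

# Reads the OPENAI_API_KEY environment variable automatically.
client = OpenAI()

# Send a simple text prompt and get generated text back.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": "Write a one-sentence caption for a surreal image "
                       "of a figure sculpted entirely out of shrimp.",
        }
    ],
)

print(response.choices[0].message.content)
```

Image generators work the same way in spirit: a short text prompt goes in, and statistically plausible, “human-passing” content comes out at almost no cost to whoever is posting it.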
Now, back to Shrimp Jesus. Feed specific data and prompts to a chatbot and you’ll get images like these: “human-passing.” Emphasis on “passing.” Content created by chatbots is certainly known to have its faults.
“While large pre-trained systems such as LLMs [large language models] have made impressive advancements in their reasoning capabilities, more research is needed to guarantee correctness and depth of the reasoning performed by them,” AI experts wrote in a report by the Association for the Advancement of Artificial Intelligence.
However, Shrimp Jesus and other AI-generated images aren’t the only things online believers use to substantiate this theory.
Are these bots or real people?
If you spend enough time on social media, you’ll see odd things in the comments section of certain posts, like repetitive comments from accounts that have nothing to do with the post. These comments are often strange and don’t make sense. Last winter, Bluesky users took to Reddit to complain about being plagued by reply bots that were politely and annoyingly argumentative.
One user flagged the common signs of these reply bots and what to do when you encounter them. Indications that you’re dealing with a bot include a brand-new account with a large number of replies to different posts, as seen from this Bluesky reply bot account.
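To make those signs concrete, here’s a toy Python sketch of the kind of rule of thumb a user or researcher might apply. The fields, thresholds and function names are illustrative assumptions on my part, not any platform’s real bot-detection logic.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int        # how recently the account was created
    reply_count: int     # replies posted across different threads
    original_posts: int  # posts that aren't replies to someone else

def looks_like_reply_bot(account: Account) -> bool:
    """Flag accounts that are brand new yet reply prolifically.

    Thresholds are arbitrary examples, not real detection criteria.
    """
    is_new = account.age_days < 30
    mostly_replies = account.reply_count > 100 and account.original_posts < 5
    return is_new and mostly_replies

# Example: a week-old account with 250 replies and no original posts.
suspect = Account(age_days=7, reply_count=250, original_posts=0)
print(looks_like_reply_bot(suspect))  # True
```

Real bot detection is far messier than two if-statements, of course, which is part of why so much of this traffic slips through.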
According to Imperva’s 2024 Bad Bot Report, nearly half of all internet traffic came from bots in 2023, a 2% increase over the year prior. The report also notes that the rapid adoption of generative AI and other LLMs has driven a rise in simple bots.
AI’s growth has accelerated in recent years, and so have the fears and concerns surrounding it. According to recent data from the Pew Research Center, AI experts are far more likely than the general public to believe that AI will positively impact the US over the next 20 years: 47% of experts say they’re excited about using AI daily, versus 11% of the public. The same report also notes that more than half of US adults (51%) are concerned about the growth of AI, a share that has risen since 2021.
Regarding the growing concern that the internet is dead, Sofie Hvitved, technology futurist and senior advisor at the Copenhagen Institute for Futures Studies, believes the internet isn’t dead but evolving.
“I think the internet, as it looks like now, will die, but it has been dying for a long time, in that sense,” Hvitved said.
“It’s transforming into something else and decomposing itself into a new thing, so we have to figure out how to make new solutions and better algorithms… making it better and more relevant to us as humans.”
In 2024, a NewsGuard audit revealed that generative AI tools had been used to spread Russian propaganda across more than 3.6 million articles. NewsGuard also found that AI chatbots were repeating false narratives originating from a Russian misinformation news site. To that point, Hvitved emphasized that these problems don’t signify that the internet is dead; instead, they force us to address how we can improve these AI tools.
“Since there are large language models, and you know, AI feeds on all the information it can gather, it can start polluting the LLMs and pollute the data, which is a huge problem,” said Hvitved.
What does the online community think?
The Dead Internet Theory isn’t dying anytime soon, no pun intended. Online discourse surrounding this theory isn’t limited to the TikTok community but has also found a home on multiple Reddit threads.
One Reddit user wrote, “AI chatbots are going to be catastrophic for so many people’s mental health.”
Another posted, “Considering that we are just at the beginning of AI, especially its capabilities with video, I’d say there’s a real chance that it will destroy the usefulness of the internet and make it dead.”
Other people echo the same sentiment by adding that the ratio of AI content to human content will change dramatically over the next few years.
One even compiled a list of over 130 subreddit threads filled with comments and posts generated by AI bots.
Could AI shape a new digital internet culture?
One looming question following the Dead Internet Theory is whether AI will completely replace human-made content. If so, how will this shape internet culture?
Hvitved is also the Head of Media at the Copenhagen Institute for Futures Studies and specializes in how emerging technologies like AI affect communication. She has a take on what a new internet culture could look like as AI use increases.
“Maybe the static element of the internet is going to die. So we have articles, static pages and web pages you must scroll through, but is that the death of the internet? I don’t think so.”
She believes this new internet culture could mean more relevant content for everyday internet users.
“That kind of contextual internet, knowledge graphs, real-time summaries and interactive microformats, that’s something these [AI] agents can go out and pick from to create something specialized for you.”
This new internet culture will emphasize AI’s ability to tailor unique content for each user and may mean abandoning the concept of shared spaces and communities.
“We have to pay attention to echo chambers or diving into your own little worlds that only you would understand. We won’t have any shared reality anymore,” Hvitved said.
So, is the internet really dead?
If you’ve watched films like The Terminator, Blade Runner or WALL-E, you know there has always been a fascination with robots and whether they’ll take over the world one day. The resurgence of the Dead Internet Theory is just the latest chapter in that ongoing discourse. One could argue that AI shaping a new internet culture would mean the death of the internet as we know it, but that doesn’t mean the internet will simply disappear. To echo what AI expert Sofie Hvitved conveyed, the internet may eventually evolve into something new. With the rapid growth of AI in our day-to-day lives, there’s no question that it’s transforming the digital landscape. But is the internet dead? As a broadband writer working with numerous hard-working CNET writers daily, I can testify that it’s alive.
The Dead Internet Theory FAQs
What is the Dead Internet Theory?
The Dead Internet Theory emerged in 2021 from online conspiracy theorists on forums like 4chan and Wizardchan. It suggests that the internet died in 2016 and that the content we see online is generated mostly by AI bots. The Dead Internet Theory also suggests that AI is being used to manipulate the public as part of a much larger and more sinister agenda.
What are examples of the Dead Internet Theory?
TikTok creators have noted the increased number of Facebook bot accounts creating AI-generated images, with Shrimp Jesus and its many variations being the most infamous. That image also became the jumping-off point for Facebook AI art slop to spread online, with newly AI-generated memes like the Challah Horse, the 386-year-old granny baking her own birthday cake and the random wooden cars. Followers also subscribe to this theory because of the spread of bot accounts filling comment sections across different social media platforms.
What does generative AI mean?
Generative AI uses artificial intelligence systems to create new content, including stories, images, videos, music and software code. The way it works is that you feed specific prompts and data to a chatbot, and it creates a particular output for you. Examples of generative AI include chatbots like ChatGPT, Perplexity, Google Gemini and Claude by Anthropic, a CNET Editors’ Choice for the best overall AI chatbot.