The first whispers of a dying internet emerged in the mid-2010s. The theory was dismissed as an old wives’ tale, a fringe conspiracy, and much worse, but it holds nuggets of truth that loom larger with each passing month.
Google wants us to forget the ten blue links. Anyone can spin up a spam site in minutes with ChatGPT. ‘AI editors’ are expected to churn out over 200 articles every week with chatbots. Different AI platforms cite each other and echo misinformation. Reddit moderators are staging blackouts. AI is killing Wikipedia and, slowly, search engines. With so much changing, it can feel like the World Wide Web as we knew it is already dead.
What’s killing it?
Millennials will tell you that the internet was a place where people could create things. Back in those days, sharing information was nothing more than one person filling out a text box from anywhere in the world for anybody to read. You made forums, homepages, mailing lists, and maybe a tiny bit of money. Some might say that accessibility, despite being one of the internet’s core principles, is what killed it.
Once enough people gather anywhere, corporations will find a way to make money off it. Companies want to scale and expand their user base, so they gave people everything they wanted. Modern, aesthetic platforms with all the features you could dream of became the norm. Companies could impose horrible working conditions and pump out content, but even that has its limits when it comes to scaling. Still, the internet remained a place where people created things for other people. AI changed everything.
A Generation of Bots
The Copenhagen Institute for Futures Studies predicts that by 2026, nearly 99% of the internet’s content will be a product of artificial intelligence. It may seem an impossible number at first glance, but people have been using AI, in one way or another, to create content on the internet for far longer than tools like ChatGPT have been publicly available. Now that mobile apps like ChatGPT can write for free using pre-existing data, there’s an avalanche of artificially made content.
Couple this with the fact that nearly a quarter of internet traffic comes from bots, and it becomes clearer why we’re losing control of the internet. AI will only get better at imitating humans. No machine has definitively passed the Turing Test, in which a human judge converses with both a machine and a person and tries to tell them apart, but the test isn’t perfect. Human beings fail the Turing Test all the time. The core of the issue isn’t that artificially made content exists; it’s that it could become indistinguishable from the real thing.
How good is our AI?
We’ve only scratched the surface with AI; our models are imperfect. Simply put, the machines are designed to mimic us. Not by observing us intelligently the way a parrot mimics speech, but by computing gargantuan amounts of data. Every little move we make online, which in the digital age is most of what we do, along with everything manmade on the web, informs these AI models. Technically, they aren’t imitating human beings in their purest form. They’re imitating the way we exist online. That’s why these programs are imperfect, and why the internet may survive.
Their parasitic model of creation takes the web as it already exists, an imperfect and often unverifiable jumble of data, and recreates it imperfectly. Companies have invested billions in scraping data and feeding it into algorithms that refine it into machine-generated content. They’ve created a system that scales effortlessly at the cost of quality. The content is cheap and abundant, but it’s unreliable.
Is it Getting Harder to Tell the Difference?
For one, we’ve been conditioned to accept a certain level of automation on the internet. We’ve learned to tune out the bots on Reddit and the ads on YouTube as routine; when more than a quarter of internet traffic is already automated, acceptance becomes the default. Secondly, even human beings behave like bots online. We follow a set routine of steps when we search for something, access a website, make a purchase, or post something on social media within community guidelines.
It can be so hard to draw the line between real and fake that people fear an ‘Inversion’ on YouTube: the point where its algorithms start treating bot traffic as genuine and flagging actual humans as fake. However, it’s best to take these claims with a grain of salt. Just because people ignore automated content on the internet doesn’t mean they don’t notice it.
For example, it’s extremely easy to spot autogenerated subtitles because they misunderstand accents and pronunciations. This points towards the inherent bias we can bake into AI. Its analysis and responses depend on the data we feed it, and if the data itself is skewed, the responses will be too. These systems just aren’t as good at conversational nuance as we are.
AI-generated content has exploded online not because it’s as good as the real thing, but because it’s good enough. Ironically, that’s what makes it so dangerous. These systems have no requirement to verify the information they use. They’ll dress up something wrong so it looks plausible. As incorrect content floods the internet, it becomes a corrupted resource for future AI training.
Everybody Lost Money
At this point, billions upon billions of dollars have been spent on the development of AI, and some of the largest companies in the world have a horse in this race. Decades of research and development have revealed a rather inelegant path to the next level: brute force. More processing power and more data. At a large enough scale, and with enough data, AI can find trends in inaccuracies and learn to filter them out.
Unfortunately, that means collecting and selling data on an unprecedented scale, even though parts of the internet and some forms of data used to be protected. Corporations, however, are in the business of making money, and data keeps rising in value. They got sick and tired of their data being scraped for free (scraping is a technique where one program automatically extracts data from another site or service). Recently, some of the biggest virtual players have spoken up: Reddit, Wikipedia, Stack Overflow, and Google itself are straining under AI’s pressure.
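To make the mechanics concrete, here is a minimal sketch of the extraction step at the heart of a scraper. It parses a hardcoded page rather than fetching a live one (a real scraper would download the HTML first, e.g. with `urllib.request`); the class name and the sample URLs are illustrative, not taken from any real crawler.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects every href found in <a> tags -- the core of a scraper."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hardcoded stand-in for a downloaded page, so the sketch runs offline.
page = """
<html><body>
  <a href="https://example.com/article-1">First article</a>
  <a href="https://example.com/article-2">Second article</a>
</body></html>
"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)
# -> ['https://example.com/article-1', 'https://example.com/article-2']
```

A crawler simply repeats this loop: fetch a page, extract its links and text, queue the links, and store the text. That stored text is exactly the kind of raw material AI companies feed into their training pipelines.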
How Companies Twisted the Situation
Reddit’s CEO has been in the news more than once, claiming they’re tired of giving away valuable information for nothing. Reddit responded by hiking API access charges, despite blackouts by its moderators. Wikipedia, on the other hand, had been suffering for years before AI was a thing. Google scraped its data for years and only recently started paying for the information. However, simply taking the cash and selling their data isn’t good enough anymore.
With the increasing accessibility of AI, Wikipedia has been flirting with the idea of keeping its data to itself. It could make more money using the highly capable language models on the market to write articles for itself. Wikipedia is aware of the problems within AI models; they have a tendency to fabricate facts and sources, but they’re cheap and fast. If it goes down this road, it may be better to use AI to generate first drafts and have them fact-checked by people down the chain.
On the opposite end of the spectrum was Stack Overflow, the first major platform to ban ChatGPT output when the chatbot launched. The site’s operators took a stand against AI-generated content because inaccuracies were common and the content was extremely hard to verify. AI has gotten much better at dressing up inaccurate facts so they look convincing. Verifying every answer took far too much effort, so they decided it was better to ban it outright.
Sadly, that tiny island of resistance couldn’t hold out for long. Stack Overflow’s management eventually decided the inaccuracies were a risk worth taking, especially given the fortune to be made by letting companies scrape their data. With all these major data holders submitting to AI, it was only a matter of time before the entire industry shifted.
Google: Never One to Stay Behind
Google Search is the backbone of the modern web, handling the vast majority of the world’s searches. Google has dominated the internet for 25 years, and in many ways, it gets a say in the internet’s death.
Rarely does a mammoth as big as Google feel threatened by a rival information dispenser, but the rise of Bing AI and ChatGPT forced it to act. The ten blue links we all know and love are obsolete. It’s embarrassingly easy for companies to game their way into the top results, especially when they can use AI for search engine optimization. Now Google wants to put AI into search itself with the Search Generative Experience.
The change is monumental. You won’t get links to sites that have the information you’re looking for; you’ll get the information itself. You can still access the sources if you want, but the blue links get pushed way down to the bottom of the page. In some ways, it may get you the answer you’re looking for faster.
Google’s VP, Elizabeth Reid, believes:
“With this powerful new technology, we can unlock entirely new types of questions you never thought Search could answer and transform the way information is organized to help you sort through and make sense of what’s out there.”
So, is the Internet Dying?
The answer is both yes and no. The internet as we knew it, and the vision we initially set out with, may already be dead. Human-generated content still exists, but it’s a tiny percentage that shrinks every day. And a mountain of information is useless if people find no value in it. If AI-generated content keeps plagiarizing and misinforming with consistent inaccuracies, people will stop using it.
Companies will likely serve AI-generated content to their larger, free user bases and offer verified, human-checked services behind paid subscriptions. Companies generating art with artificial intelligence have already been taken to court for copyright infringement. Data-security regulations are catching up with technology all over the world. There are too many variables for a precise prediction. However, it’s safe to say the internet isn’t dying. It’s evolving.
Have a vision for your business? Let us help you get started! At EvolveDash, we’re passionate about helping businesses grow and evolve in the digital world. Our team is here to help every step of the way, from developing custom mobile apps to creating personalized websites.
With a proven track record of helping over 100 satisfied customers and 450 completed projects, we’re confident we can help you achieve your goals too. Let’s turn your business vision into success!
FAQs
- How does AI impact independent content creators?
AI-generated content floods search results, making it harder for independent creators to get visibility.
- What are governments doing about AI-generated misinformation?
Some countries are introducing regulations to control AI-generated content, but enforcement remains difficult.
- Can AI-generated content be fully fact-checked?
AI lacks the ability to verify facts independently, often pulling misinformation from unreliable sources.
- How does AI-generated content affect SEO?
Search engines are adjusting their algorithms, but AI-created content still manipulates rankings.
- Are there alternatives to mainstream AI-driven platforms?
Decentralized and human-curated platforms like personal blogs and niche forums offer alternatives.