
The platypus no longer exists: the internet has just gone over to the machines

By Dr. Tawhid CHTIOUI, Founding President of aivancity, the leading school for AI and data

A ten-year-old boy sits down in front of the screen. Curious, he types in a word he heard at school but can’t quite picture: platypus. He wants to see what this funny animal looks like—one his classmates have described as an impossible mix of a duck and a beaver. He expects to laugh, to be amazed, to discover the whimsy of nature.

But what appears on his tablet are dozens of contradictory images. Some are real, taken by wildlife photographers. Others are digital illusions, generated by artificial intelligence: a fluorescent platypus, a platypus with wings, a platypus dancing on two legs. For a child who doesn't yet know the animal, the line between real and fake blurs. His first encounter with the living world takes place in a space where every other click leads to fiction.

This scene isn’t science fiction. It’s our reality. Because a silent shift has taken place: more than half of global internet traffic is generated by bots, and more than half of the text content hosted online is now produced or translated by AI. What was meant to be humanity’s library is becoming a gigantic labyrinth of artificial echoes.

The Internet was created to connect people, preserve their memories, and broaden their horizons. Today, it is becoming a space where machines and algorithms interact with one another, weaving a web that bears less and less resemblance to a mirror of the human world. The realization is staggering: the Internet is no longer predominantly human.

So a pressing and profound question arises: what will become of a world where we no longer know whether the first image we present to a child is that of an animal… or that of a digital fantasy?

Imagine an internet where the majority of visitors are no longer flesh-and-blood users, and where most articles, images, and videos no longer originate in a human mind. This scenario, worthy of science fiction, is now reality. In the space of just a few years, the Web has quietly undergone a dual shift without precedent: more than half of global traffic is generated by computer bots, and an equally large share of online content is produced—or at least translated—by artificial intelligence. This reversal, which has gone almost unnoticed in everyday life, nevertheless marks a profound turning point in the history of the Internet—a pivotal moment when machines are taking control of the production and circulation of information.

For the first time since the creation of the web, traffic is dominated by non-human actors. According to the 2025 Bad Bot Report by cybersecurity firm Imperva, 51% of all global traffic in 2024 came from automated programs, compared with just 49% from humans. This symbolic milestone marks an unprecedented shift: the web is now visited more by machines than by people. And the breakdown of the figures speaks volumes: of that 51%, nearly 37% can be attributed to “bad bots,” used for spam, fraud, or cyberattacks, while only 14% comes from “beneficial” bots such as search engine crawlers. The digital frontier increasingly resembles a battlefield where swarms of invisible machines vie for every millisecond of bandwidth.

Alongside this automation of web traffic, another shift is taking place: online content production, too, has become overwhelmingly artificial. A study by Amazon Web Services published in 2024 estimates that approximately 57% of the text available on the internet is now generated by AI or automatically translated. In other words, more than half of all articles, posts, comments, and discussion threads are no longer the direct product of human authorship. Behind this figure lies an industrial mechanism: automated content farms, capable of producing thousands of texts in record time, are flooding the web with standardized, low-cost prose. On certain platforms such as Quora or Medium, more than a third of recent posts are already generated by AI. On social media, a study published in Scientific Reports estimates that approximately 20% of messages surrounding major global events now come from automated accounts. It has therefore become commonplace to engage in online debates with an algorithmic entity without even realizing it.

This flood has ushered in a new era in the information economy. The marginal cost of producing a text, image, or video is approaching zero: where journalists, photographers, or translators once had to be paid, AI can now generate hundreds of tailored pieces of content in a matter of seconds. Media outlets like BuzzFeed and CNET have already experimented with automated article generation, sometimes with disastrous results. In this model, abundance does not equate to richness: the more content is published, the less value each piece holds. This “commoditization” of information is turning the internet into a supermarket of interchangeable texts, at the expense of quality and uniqueness.

Beyond the economy, it is cultural diversity itself that is under threat. Since major language models rely primarily on English-language corpora, their outputs tend to standardize styles and erase local nuances. In many minority languages, most of the available content comes from machine translations from English, which homogenize cultural references. Behind the apparent proliferation of information lies an insidious homogenization: a Web with many voices that increasingly speaks with a single voice.

The world of science and knowledge is not spared. Predatory journals and databases are seeing an influx of thousands of AI-generated articles riddled with errors, fabricated references, or absurd phrasing. The risk is immense: that this fake knowledge will contaminate the academic corpus itself, which will then be used to train future models. It is a vicious circle: by flooding the internet with artificial content, AI systems risk poisoning their own learning source and triggering what researchers call “model collapse”—a slow cognitive decline in which each generation of machines learns a little less from reality and a little more from its own illusions.
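The degenerative loop behind “model collapse” can be illustrated with a deliberately simplified simulation—a toy sketch, not a claim about real language models. A Gaussian distribution is repeatedly refit to samples drawn from its own previous fit; because each fit uses a finite sample, the estimated diversity (the standard deviation) tends to shrink generation after generation, just as the paragraph describes:

```python
import numpy as np

# Toy illustration of "model collapse": each generation fits a Gaussian
# to samples drawn from the PREVIOUS generation's fitted Gaussian,
# i.e. it "trains" only on synthetic output. With the maximum-likelihood
# variance estimator, the expected variance shrinks by (n-1)/n per step,
# so diversity decays across generations. Numbers are illustrative only.

rng = np.random.default_rng(0)

def run_generations(mu=0.0, sigma=1.0, n_samples=50, generations=200):
    """Return the fitted standard deviation after each generation."""
    sigmas = [sigma]
    for _ in range(generations):
        data = rng.normal(mu, sigma, n_samples)  # sample the current "model"
        mu, sigma = data.mean(), data.std()      # refit on its own output
        sigmas.append(sigma)
    return sigmas

sigmas = run_generations()
print(f"initial std: {sigmas[0]:.3f}, after 200 generations: {sigmas[-1]:.3f}")
```

Run repeatedly with different seeds, the shrinkage persists: the feedback loop loses information about the original distribution, which is the intuition behind the “each generation learns a little less from reality” warning above.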

Added to this is an ecological paradox. Generating billions of artificial texts, images, and videos consumes energy, ties up server capacity, and puts a strain on massive data centers. Filling the internet with empty content comes at a high cost to the planet: an ocean of useless information, yet very real in its carbon footprint, strains infrastructure. Information overload is not just cognitive; it is also energy-intensive.

Finally, this transformation of the web is changing the way we relate to others. Interacting online now carries the risk of talking to a machine without realizing it. This uncertainty erodes trust: am I chatting with a friend, a stranger, or a well-trained bot? Some people even form emotional bonds with companion chatbots, blurring the line between genuine relationships and simulated interactions. “Connected loneliness” takes on a new meaning here: being surrounded by voices, but no longer knowing which ones are human.

This dual shift—traffic that is predominantly non-human and content that is overwhelmingly artificial—is not merely a technical evolution. It is a silent revolution. No dramatic announcement heralded it, no visible change signaled it. And yet, the landscape has been turned upside down. The internet, created to connect people, is becoming a crossroads where humans and machines interact in equal measure—a place where algorithms produce and consume information on a colossal scale.

In this hybrid web, the boundaries of reality are blurring. Information becomes a composite stream, inseparable from human contributions and synthetic content. For policymakers and citizens alike, the urgent need is to recognize this subtle yet profound shift: the Internet is no longer merely a global forum for human voices; it is also an autonomous ecosystem of artificial agents that publish, amplify, and sometimes manipulate the narrative of the world.

The internet is no longer just a mirror of our lives. It has become a stage where machines carry on half the conversations, and where their voices—sometimes helpful, sometimes toxic—now shape the global din.

Faced with this flood of artificial content and ubiquitous bots, major platforms were quick to respond. Google is constantly refining its algorithms to detect and de-index automated “content farms” that clutter search results. Meta deletes millions of fake accounts every day, while testing tools to label AI-generated content. YouTube has introduced a requirement to explicitly label manipulated or synthetic videos, under penalty of sanctions. The platforms, long accused of turning a blind eye, are now seeking to preserve a modicum of trust. But their response remains ambivalent: these same companies are investing heavily in generative AI, integrating the very tools that fuel the problem.

Governments, too, are beginning to enact legislation. In California, a law already requires that any bot interacting with humans for commercial or political purposes disclose its non-human nature. The European Union, through its AI Act, mandates that all artificially generated content—whether text, images, or deepfakes—must be clearly labeled. Governments are seeking to establish a “right to know”: internet users must be able to tell whether they are interacting with a human voice or a machine.

Private initiatives are attempting to fill the gaps in this still-nascent framework. Some companies are advocating for digital ethics charters and labels such as “human content verified,” intended to certify content that is genuinely human-generated. NGOs and citizen groups are exploring traceability methods that embed invisible signatures in AI-generated files. But these efforts remain scattered, with no global standard.

For the challenges are immense. Technically, distinguishing human-written text from AI-generated text is becoming ever harder as models improve. Legally, disparities between regions complicate enforcement: a deepfake banned in Europe can circulate freely elsewhere. Politically, the temptation to over-censor is real: in the name of combating fakes, certain regimes could silence genuine dissenting voices. Finally, economically, it is unclear whether major platforms have any interest in curbing a flow of content that keeps users engaged, even at the expense of quality.

Behind these attempts to push back lie far deeper questions that go beyond technical or legal regulation. What constitutes authentic content in the age of generative AI? If a text moves us, does it matter whether it was written by a human hand or by a machine? Should we judge it “less true” for its origin? And if everything can be fabricated, what becomes of the trust that underpins social and political bonds?

The question of liability is no less daunting. When an AI system spreads misinformation, who is to blame? The model’s developer? The user who entered the query? The company hosting the service? As the production chain grows more complex, the concept of liability becomes blurred, threatening our ability to hold anyone accountable.

Finally, it is the very notion of collective intelligence that is faltering. The internet was supposed to embody the wisdom of crowds, the accumulation of human contributions. What remains of this vision when the majority of online voices are artificial? Can we still speak of dialogue or public debate, or is it nothing more than a vast echo chamber where machines recycle their own output? This transformation risks deepening a new digital divide: between those who master these technologies and know how to leverage them, and those who are defenseless against them, drowning in a sea of conflicting signals.

We thought we were managing data flows, but now we’re facing an existential challenge: preserving the dignity of the human voice in a world where machines speak louder than we do.

It is not just the Internet that needs to be regulated, but the social contract it embodies. Between authenticity and illusion, between human memory and algorithmic noise, it is up to us to decide whether the Web will remain a space of shared truth or whether it will become the greatest shadow theater in history.

By 2040, searching for information might feel like stepping into an infinite library where nine out of ten books would have been written entirely by machines, compiling other machine-generated books, to the point of losing all sense of reality. On these overcrowded shelves, truly human writing would become rare, almost clandestine, like an original piece amidst a market of copies.

If we let this continue, the web will suffocate under this proliferation: too much fake content, too much noise, too much lost trust. Everyone will retreat into private, closed, filtered spaces where they’ll only interact with verified identities. In this future, the internet’s original promise as an open forum would be shattered.

But other paths exist. A hybrid internet is already taking shape, where humans and AI coexist openly—provided that transparency is enforced. Digital identity verification and “human content” labels could become seals of trust: knowing who is speaking, knowing where what we read comes from. A legal framework, if international and consistent, would help harmonize rules and avoid the blind spots of a fragmented world.

Technology itself can be harnessed as a counterbalance: developing built-in detection tools, training every citizen in digital critical thinking, and encouraging the creation of “authenticity bubbles” where human-generated content is highlighted and valued. In these spaces, words, images, and stories would hold value solely because they stem from lived experience. Authenticity could become the rare currency of the web, sought after like gold in the sand.

What remains is to rethink business models. As long as automated abundance is rewarded with advertising revenue, information pollution will prevail. Conversely, if platforms choose to reward originality, expertise, and traceability, then the internet can return to a virtuous cycle.

The future is not simply a choice between a stifled web, a hybrid web, a humanistic web, or a web of machines. These forces already coexist. Everything will depend on the decisions we make now: the overcrowded library or the salon of truth, the marketplace of illusions or the construction site of a new digital humanism. The Internet of 2040 will be nothing more than a reflection of the choices we make today.

We are witnessing a subtle yet monumental shift. For the first time in its history, the internet is no longer predominantly human: traffic is dominated by bots, and content by artificial intelligence. This network, created to connect minds, share knowledge, and preserve collective memory, is becoming a space where machines produce and consume their own echoes.

This dual observation is not merely a statistical footnote: it is a symbolic threshold. A shattered mirror. Humanity risks losing control of the tool it invented to express itself. The choice is clear: either submit to an automated web, saturated with noise and illusions, or invent a new digital humanism that restores the rarity and value of the human voice.

The choice is in our hands. For in twenty years, the question will be inescapable: will we still be reading about others—their joys, their anger, their stories—or only the echoes of machines feeding off one another, an artificial din in which our voices would be lost forever?


References

1. Imperva — 2025 Bad Bot Report: How AI Is Supercharging the Bot Threat. Key statistics on non-human traffic: 51% of global traffic in 2024 generated by bots; 37% of total traffic by “bad bots.”

2. Digital Trends / AWS — “57% of the Internet may already be AI sludge.” Coverage of an AWS study estimating that approximately 57% of online content is generated or translated by AI, primarily through machine translation into multiple languages.

3. Windows Central — “57% of online content is AI-generated or translated …” A summary of the AWS study, detailing the effects on search-result quality, machine-learning models, etc.

4. Thompson, Dhaliwal, Frisch, Domhan & Federico — “A Shocking Amount of the Web Is Machine Translated: Insights from Multi-Way Parallelism” (arXiv). Presents the results of multilingual parallel-corpus research and the implications for low-resource languages.

5. Ng, Lynnette Hui Xian & Carley, Kathleen M. (2025). “A global comparison of social media bot and human characteristics.” Scientific Reports, vol. 15, article no. 10973: “Chatter on social media about global events comes from 20% bots and 80% humans.”
