
When Every User Is a Bot, the Internet Still Finds a Way to Go to War

  • Socialode Team
  • Aug 15
  • 2 min read
[Image: Rows of futuristic figures in neon helmets fill a vast hall; vivid red, blue, and orange hues create a surreal, digital atmosphere.]

We like to think the chaos online is all the fault of “the algorithm.” That if you just stripped away the ads, the endless recommendations, and the dopamine-engineered feeds, maybe we’d finally have a sane conversation.


But a recent study shows that’s wishful thinking. Even when a social network is made entirely of AI bots, the drama, polarization, and echo chambers still take over.


The Experiment: 500 Bots, Zero Humans

Researchers at the University of Amsterdam built a stripped-down social media platform: no ads, no algorithms, no suggested posts. Then they unleashed 500 AI chatbots, each powered by OpenAI’s GPT, and gave them distinct personalities, political beliefs, and backstories.


The bots could post, follow, and share content, just like us. Over five different simulations, each with 10,000 interactions, the researchers watched what happened.
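
The study’s actual harness isn’t published in this post, but the shape of such a simulation is easy to sketch. Below is a toy illustration, not the researchers’ code: the real experiment drove every agent with a GPT persona, while this stand-in reduces each bot to a single “leaning” number and random choices. All names and thresholds are invented for the example.

```python
import random

# Toy sketch of an agent-based simulation in the spirit of the Amsterdam
# setup: 500 bots, 10,000 interactions, a bare chronological feed.
# In the real study, a GPT persona decided each action; here a crude
# homophily rule (follow and reshare like-minded authors) stands in.

NUM_BOTS = 500
NUM_INTERACTIONS = 10_000

class Bot:
    def __init__(self, bot_id, leaning):
        self.bot_id = bot_id
        self.leaning = leaning      # persona stand-in: a point on a -1..+1 axis
        self.following = set()

def run_simulation(seed=0):
    rng = random.Random(seed)
    bots = [Bot(i, rng.uniform(-1, 1)) for i in range(NUM_BOTS)]
    feed = []  # chronological feed of (author_id, stance) posts

    for _ in range(NUM_INTERACTIONS):
        bot = rng.choice(bots)
        if not feed or rng.random() < 0.3:
            # Post: express the bot's own stance.
            feed.append((bot.bot_id, bot.leaning))
        else:
            # React: read a random post; follow and reshare its author
            # only if the stance is close to the bot's own.
            author_id, stance = rng.choice(feed)
            if author_id != bot.bot_id and abs(stance - bot.leaning) < 0.5:
                bot.following.add(author_id)
                feed.append((author_id, stance))  # resharing amplifies the author

    return bots

if __name__ == "__main__":
    bots = run_simulation()
    followers = {b.bot_id: 0 for b in bots}
    for b in bots:
        for followed_id in b.following:
            followers[followed_id] += 1
    star_id, star_count = max(followers.items(), key=lambda kv: kv[1])
    print(f"bot {star_id} ends with {star_count} followers")
```

Even this crude version tends to concentrate followers on a handful of accounts, which is the dynamic the GPT-driven experiment reproduced with far richer behavior.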


The result? It didn’t take long for the bots to:

  • Cluster into political tribes, following those who agreed with them.

  • Boost the most extreme voices, giving them the biggest followings and widest reach.

  • Form their own influencer elite, despite having no human audience to impress.


The Most Disturbing Part

This happened without the usual scapegoat: the recommendation algorithm. In other words, the platform’s structure and the social behaviors baked into these bots (learned from us) were enough to recreate all the worst parts of human online interaction.


Even when the researchers tried to address the issue by offering chronological feeds, hiding follower counts, and promoting opposing views, the changes barely made a dent. In some cases, things got worse: when user bios were hidden, polarization deepened and extreme posts thrived.
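
For a concrete sense of what “interventions” means here, tweaks like these usually become configuration flags in a harness like the sketch above, with the same simulation rerun once per variant. The flag names below are our own illustration, not code from the study.

```python
from dataclasses import dataclass

# Hypothetical intervention switches for a simulation harness. The names
# are illustrative stand-ins for the tweaks described above, not the
# study's actual configuration.
@dataclass
class FeedConfig:
    chronological_feed: bool = False    # order posts strictly by recency
    hide_follower_counts: bool = False  # strip social-proof numbers
    boost_opposing_views: bool = False  # surface cross-partisan posts
    hide_user_bios: bool = False        # drop persona/bio text entirely

# Rerun the same simulation once per tweak and compare against a baseline:
baseline = FeedConfig()
variants = [
    FeedConfig(chronological_feed=True),
    FeedConfig(hide_follower_counts=True),
    FeedConfig(boost_opposing_views=True),
    FeedConfig(hide_user_bios=True),   # the variant where polarization deepened
]
```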


[Image: A crowd of masked people with colorful social media icons floating overhead in a dark setting; bright neon hues create a digital mood.]

A Mirror We Can’t Escape

It’s tempting to think AI behaves this way because it’s “flawed.” But these chatbots were trained on our online history, the same internet that’s been shaped by decades of algorithm-driven discourse.


The truth is, they’re just reflecting us to ourselves:

  • We gravitate toward people who agree with us.

  • We reward outrage and extreme opinions with attention.

  • We form tight bubbles that keep reinforcing our beliefs.


The difference is that in the AI-only experiment, there’s no human emotion to soften the edges, just pure, distilled tribalism.


Why This Matters When Every User Is a Bot

If a platform full of artificial users can spiral into toxic echo chambers without any help from algorithms, the problem runs deeper than we think. The very design of social media, with its follows, likes, and reposts, might be enough to push any community, human or not, toward division.


And as AI personalities become more human-like, more persuasive, and more integrated into our daily lives, this isn’t just an academic problem. It’s a preview of what happens when artificial communities start to shape real human opinions, behaviors, and relationships.


The Bottom Line

The Amsterdam experiment shows something unsettling: take away the humans, take away the algorithms, and you still end up with a digital society that looks a lot like ours, one that’s polarized, dominated by a loud minority, and hardwired to reward extremism.


It’s not just the tech that’s broken. It’s the social structures we keep building, over and over again, whether the users are flesh and blood or lines of code.



