BLOG | OFFICE OF THE CTO

The New Bot War: Everyone You Know Might Be Fake

Lori MacVittie
Published May 13, 2025

Bot farms used to be a joke. A few thousand spam accounts, broken English, crude engagement tactics. Easy to spot. Easy to dismiss.

Not anymore.

Today, bot farms operate at industrial scale, deploying thousands of real smartphones to run scripted accounts that behave like real users. They like, share, and comment just enough to trigger platform engagement algorithms.
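To see why the tactic works, consider a toy version of an engagement-driven ranking function. This is an illustrative sketch, not any platform’s actual algorithm; the weights and fields are invented. The point is structural: the score counts interactions, not the authenticity of the accounts behind them.

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    comments: int
    age_hours: float

def engagement_score(post: Post) -> float:
    # Raw interactions, weighted and decayed by age. Nothing here asks
    # whether the interactions came from real people.
    raw = post.likes + 2 * post.shares + 1.5 * post.comments
    return raw / (1 + post.age_hours)

# A few thousand scripted likes and shares in the first hour look exactly
# like genuine early traction, so the post gets boosted either way.
organic = Post(likes=120, shares=15, comments=30, age_hours=1.0)
farmed = Post(likes=3000, shares=400, comments=250, age_hours=1.0)
print(engagement_score(organic), engagement_score(farmed))
```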

It’s not hacking. It’s using the system exactly as designed, only faster, at scale, and without the authenticity the system was meant to assume.

Once a post gets traction, platforms like X and Meta boost it further. They amplify engagement, not accuracy. X’s 2023 transparency report makes it clear: what moves gets promoted. Even with ML-based detection systems in place, AI-driven bots blend seamlessly into organic traffic.

From there, real users take over. Visibility creates perceived legitimacy. If something looks popular, it feels trustworthy.

Fake engagement creates the illusion. Real people build the fire. And AI makes that fire harder than ever to trace.

The impact of AI

Where bot farms once needed armies of workers pushing repetitive posts, AI tools can now generate coherent, varied, and highly believable content. According to NewsGuard’s 2023 report, AI-generated propaganda is increasingly indistinguishable from authentic commentary, even down to regionally specific language and emotion.

This isn't junk content anymore. It’s plausible, contextual, and reactive. It looks like grassroots support, but it’s manufactured influence at industrial scale.

And the platforms still reward it. They were built to amplify what performs, not to assess what’s real. 

Moderation tools and human reviewers are not keeping up. Meta’s 2024 report on taking action against coordinated inauthentic behavior emphasizes just how difficult it has become to detect these campaigns in real time.

This isn’t a fringe issue. It hits politics, marketing, financial speculation, even brand trust. In 2021, the U.S. Securities and Exchange Commission warned of social-media-driven market pumps fueled by bots.

Meanwhile, systems that rely on visibility and engagement (trending lists, “suggested for you” panels) are now easily hijacked. The tools designed to surface what matters now surface whatever someone pays to make matter.

Today’s bots don’t break rules. They follow them. They mimic human behavior and generate conversation. They build credibility over time and operate across networks. Because they don’t violate technical policy, they often go undetected.

This exposes a deeper flaw: systems were designed to evaluate behavior, not motivation. We trusted patterns. If it looked normal, it was assumed to be safe.

But AI doesn’t behave abnormally. It behaves convincingly.
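To make that concrete, here is a minimal sketch of the kind of pattern-based check that assumption produces. The thresholds and fields are hypothetical, but the shape is familiar: request rate, account age, posting cadence, duplication.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_hour: float
    account_age_days: int
    duplicate_post_ratio: float   # share of posts that are near-identical
    requests_per_minute: float

def looks_like_a_bot(acct: Account) -> bool:
    # Classic pattern-based heuristics (hypothetical thresholds):
    # flag accounts that behave abnormally at the traffic level.
    return (
        acct.posts_per_hour > 20
        or acct.account_age_days < 2
        or acct.duplicate_post_ratio > 0.8
        or acct.requests_per_minute > 60
    )

# An AI-run account that posts varied, on-topic content a few times an hour
# from an aged account clears every check. The behavior is normal; only the
# motivation isn't, and motivation never appears in the rules.
ai_run = Account(posts_per_hour=3, account_age_days=400,
                 duplicate_post_ratio=0.05, requests_per_minute=2)
print(looks_like_a_bot(ai_run))  # False
```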

AI shifts signals up the stack

The signal has shifted up the stack. Away from headers and rates. Into payloads, content semantics, and system-level coordination. AI-generated influence looks clean to traditional defenses. The anomaly is no longer in the envelope. It’s in the message.
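What moving up the stack can look like in practice: rather than inspecting the envelope (rates, headers), a detector compares the messages themselves across accounts, looking for near-duplicate semantics and coordinated timing. This is a rough, illustrative sketch with placeholder similarity logic and thresholds, not a production detector.

```python
import string
from itertools import combinations

def tokens(text: str) -> set:
    # Crude lexical tokenization; a real system would use a semantic model.
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def similarity(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def coordinated_pairs(posts, sim_threshold=0.6, window_minutes=30):
    # posts: list of (account_id, minutes_since_midnight, text) tuples.
    # Flag pairs of distinct accounts posting similar content close in time.
    flagged = []
    for (a1, t1, x1), (a2, t2, x2) in combinations(posts, 2):
        if a1 != a2 and abs(t1 - t2) <= window_minutes and similarity(x1, x2) >= sim_threshold:
            flagged.append((a1, a2))
    return flagged

# Each post is clean on its own; the anomaly only shows up in the
# relationship between messages, not in any single account's traffic.
posts = [
    ("acct_1", 10, "Huge grassroots support for the new policy, finally some sense."),
    ("acct_2", 18, "Finally some sense. Huge grassroots support for the new policy!"),
    ("acct_3", 400, "Picked up a great coffee grinder this weekend."),
]
print(coordinated_pairs(posts))  # [('acct_1', 'acct_2')]
```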

Efforts to address the problem are underway. DARPA’s Semantic Forensics program is working to detect AI-generated content using intent and linguistic markers. X’s 2024 updates mention enhanced bot removal efforts. But these systems are still early-stage. The tools are not yet scalable or responsive enough to outpace AI-driven influence campaigns.

And now the threat is evolving again.

Beyond simple bots, AI-driven agents are being deployed. These agents do more than automate. They coordinate. They pivot. They analyze response data and adjust tactics in real time. A 2022 DFRLab study documented how state-backed campaigns used AI agents to orchestrate disinformation across platforms, adapting dynamically to detection.

Meanwhile, legitimate businesses are adopting agents for customer support, marketing, and workflow automation. Lyzr.ai says 70% of AI adoption efforts focus on action-based AI agents, not just conversational bots.

This blurs the lines. When agents speak for both companies and attackers, trust erodes. A fake support bot posing as a brand representative could phish users or spread misinformation, indistinguishable from the real thing unless you know what to look for.

This is no longer a bot problem. It’s an authenticity crisis.

AI evolution challenges assumptions

The new bot war, powered by advanced AI tools and coordinated agents, has redrawn the map. We are not defending against noise. We are defending against synthetic credibility that is crafted to look human, scaled to manipulate systems, and optimized to pass undetected.

The underlying assumptions we’ve built around security and scale are breaking down as attacks shift away from infrastructure and toward the exploitation of algorithms, semantics, and perceived legitimacy.

Solving it means rethinking the foundation. Tools built to enforce rules must evolve to interpret behavior, language, and coordinated patterns as part of a broader system of intent.

Until then, skepticism is our baseline, and restoring trust will require more than detection.

It will take active collaboration among platforms, enterprises, and researchers to rebuild integrity into the systems we rely on every day, and to keep these tactics from seeping into the enterprise, where synthetic influence could quietly corrupt decision-making, hijack automation, and erode user trust from the inside out.