X is suspending bots at scale, but many human users are getting swept up too

X’s latest anti-bot campaign is moving at a pace that signals urgency. According to WIRED, the platform’s head of product, Nikita Bier, said on April 9 that X was flagging and suspending bots at a rate of 208 accounts per minute and rising. The stated targets are automated, fake, inactive, and spam accounts whose behavior distorts engagement and degrades the platform. But the visible effect for many users has been broader than that mission statement suggests.

WIRED reports that the crackdown has also suspended or deleted human-run alternate accounts, including accounts used privately to bookmark, like, repost, or quietly follow niche adult content. Those so-called alt accounts often existed outside a user’s public identity and were used less for posting than for personal curation. In a system built to detect inauthentic behavior, those patterns appear to have made some human users look machine-like.

The problem illustrates a long-running tension in trust-and-safety work: the faster a platform acts at scale, the more likely it is to collide with edge cases that are not malicious. Private or low-activity accounts can resemble spam accounts on paper, especially if they mainly lurk, repost, or engage in narrow patterns around a specific kind of content. That does not mean the enforcement goal is illegitimate. It means the cost of blunt detection systems is being borne by users whose behavior is unusual but not necessarily prohibited.

Private behavior, not public posting, appears to be the common thread

One of the more revealing details in WIRED’s reporting is that some affected users say they rarely or never posted from these accounts. Instead, they used them to organize and consume adult material away from their main social identities. That matters because it suggests X’s enforcement may be reading passivity, anonymity, or repetitive engagement patterns as signs of manipulation. The platform’s policy prohibits inauthentic activity that undermines the integrity of X, but the line between spam-like behavior and private, highly specialized curation is not always obvious to users caught in the net.

WIRED also notes that this is not an isolated clean-up. In October, Bier’s team said X removed 1.7 million bots in an effort to reduce reply spam, with plans to turn attention to direct-message spam next. The current wave therefore fits into a broader product strategy rather than a one-off moderation blitz. X is trying to reassert control over account quality and reduce the visible signs of platform manipulation. The complication is that the company has not publicly detailed how many genuine bots have been removed in the newest push, nor how many human users may have been wrongly affected.

That lack of transparency leaves users to piece together the logic from anecdote and loss. For people who had spent years building private archives, the damage felt immediate and personal. The reaction described by WIRED was dramatic, but it was also understandable: entire histories of curation disappeared in a weekend, with little explanation and no clear sense of whether recovery was possible.

The purge says as much about platform design as it does about porn

It would be easy to treat this only as a story about adult content, but the underlying issue is bigger. Many social platforms encourage users to segment identity across accounts for privacy, professionalism, fandom, politics, or sexuality. A system that aggressively punishes low-visibility or narrowly patterned behavior can therefore collide with legitimate user practices that the platform itself helped normalize over time.

X’s latest purge also shows how moderation tools aimed at one problem can destabilize a very different part of platform culture. A campaign designed to cut fake engagement ended up altering how some people manage anonymity, desire, and personal archives online. Even if the company sees those accounts as collateral damage in a larger clean-up, the effect is a reminder that account enforcement is not neutral infrastructure. It shapes what kinds of private behavior remain possible on a public network.

There is a real case for removing automated spam and fake amplification, and X has been under pressure for years to do more of it. But when enforcement expands quickly and explanation lags behind, users are left with a different lesson: the platform can erase a meaningful slice of their digital life without clearly distinguishing a bot from a person who simply wanted a separate room. In that sense, the latest purge is not just a moderation story. It is a story about how fragile online identity becomes when the systems defining authenticity are both opaque and automated.

This article is based on reporting by WIRED and was originally published on wired.com.