The Arms Race Between AI Fraud and AI Defense

Artificial intelligence has made it trivially easy to generate convincing phishing emails, fake job listings, and deepfake recruitment videos. Now cybersecurity companies are deploying AI of their own to fight back — and the results are mixed. NordVPN recently launched a scam detection feature that uses machine learning to analyze suspicious messages and links in real time. The question is whether it can reliably detect the output of the very AI tools being used against users.

The timing is significant. As generative AI has matured, the sophistication of online scams has increased dramatically. Fraudulent job offers now arrive with polished recruiter messages, realistic company profiles, and personalized references to the target's work history. Simple keyword-based filters are no longer enough.

What NordVPN's Scam Checker Actually Does

The feature works by analyzing both the metadata and content of URLs, emails, and messages. When a user flags something as suspicious, the checker runs it against a database of known threat patterns while simultaneously applying language model analysis to identify deceptive intent, mismatched details, and manipulation tactics.
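
NordVPN has not published implementation details, but the description maps onto a familiar two-stage pipeline: a fast lookup against known threat patterns, followed by content-level analysis. Here is a minimal sketch of that shape in Python; every domain, phrase list, and heuristic is hypothetical, and the content check is a crude stand-in for the language-model analysis the company describes.

```python
import re
from urllib.parse import urlparse

# Hypothetical two-stage scam check: a fast lookup against known threat
# patterns, then a content-level score. Names and heuristics are
# illustrative, not NordVPN's actual implementation.

KNOWN_BAD_DOMAINS = {"secure-login-update.example", "hr-offer-portal.example"}

URGENCY_PHRASES = ("act now", "limited time", "verify immediately",
                   "payment required", "confirm your identity")

def stage_one_pattern_check(url: str) -> bool:
    """Return True if the URL's domain matches a known threat pattern."""
    return urlparse(url).netloc.lower() in KNOWN_BAD_DOMAINS

def stage_two_content_check(message: str) -> float:
    """Crude stand-in for language-model analysis: score deceptive
    intent from manipulation cues and mismatched link details."""
    text = message.lower()
    hits = sum(phrase in text for phrase in URGENCY_PHRASES)
    # Mismatched details: the visible link text shows one URL while the
    # underlying href points somewhere else.
    m = re.search(r'href="([^"]+)"[^>]*>\s*(https?://\S+)', message)
    mismatch = bool(m and not m.group(2).startswith(m.group(1)))
    return min(1.0, 0.2 * hits + (0.4 if mismatch else 0.0))

def check(url: str, message: str) -> str:
    if stage_one_pattern_check(url):
        return "flagged: known threat pattern"
    return f"suspicion score: {stage_two_content_check(message):.2f}"
```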

Unlike earlier rule-based systems, NordVPN's approach uses a form of adversarial training — it was trained on examples of AI-generated scams, so it has already seen the patterns these tools produce. This is a theoretical advantage, but it creates its own arms-race dynamic: as scam generators improve, detection tools must be retrained to keep pace.
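
In practice, training of this kind implies a feedback loop: newly collected AI-generated scams are labeled and folded back into the training set. A toy sketch of that loop, with placeholder data and a simple text classifier standing in for whatever model NordVPN actually runs:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sketch of the retraining loop adversarial training implies. The
# detector is refit whenever a new batch of AI-generated scam examples
# is collected. Data and labels are placeholders (1 = scam, 0 = legit).

texts = [
    "Congratulations, you won a prize, click here to claim",
    "Attached is the agenda for Thursday's sync",
    "We reviewed your profile and have an exclusive remote role",
    "Invoice 4471 is ready in the billing portal as usual",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def retrain_on_generated_scams(generated_scams: list[str]) -> None:
    """Fold newly collected AI-generated scams back into the training
    data so the detector tracks the latest generator output."""
    texts.extend(generated_scams)
    labels.extend([1] * len(generated_scams))
    model.fit(texts, labels)  # full refit; real systems retrain offline
```

The loop is also the weakness the paragraph names: the detector only knows the generator output it has already been shown.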

Testing Against AI-Generated Recruitment Scams

Real-world testing against advanced recruitment scams — the kind generated by large language models and targeted at professionals — revealed a nuanced picture. For straightforward phishing attempts, the tool performed well, correctly flagging suspicious links and implausible sender details. The challenge came with more sophisticated examples.

AI-generated recruitment scams increasingly impersonate real companies, reference genuine employees, and use plausible job descriptions. In these cases, the scam checker's accuracy dropped, particularly when the fraudulent contact was routed through legitimate platforms like LinkedIn or email services with clean sender reputations.

This is a known limitation: AI detection tools struggle when scammers use trusted infrastructure. A fake recruiter using a real corporate email domain, referencing an actual job posting, and providing a meeting link to a legitimate video conferencing service can slip through automated filters, no matter how sophisticated those filters are.
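
A small example makes the blind spot concrete. In the hypothetical reputation-weighted filter below, a fake recruiter who sends from a real corporate domain and links to a legitimate conferencing service produces no signal at all; the domains are invented for illustration.

```python
# Hypothetical reputation-weighted filter. Every observable signal
# (sender domain, link target) is legitimate, so the score is clean
# even though the person behind the message is a scammer.

TRUSTED_DOMAINS = {"linkedin.com", "zoom.us", "realcorp.example"}

def reputation_score(sender_domain: str, link_domain: str) -> float:
    """Lower is safer; 0.0 means every signal checks out."""
    score = 0.0
    if sender_domain not in TRUSTED_DOMAINS:
        score += 0.5
    if link_domain not in TRUSTED_DOMAINS:
        score += 0.5
    return score

# A fake recruiter on a real corporate domain with a real Zoom link:
print(reputation_score("realcorp.example", "zoom.us"))  # 0.0, passes cleanly
```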

The Limitation No Detector Can Solve

The fundamental challenge for any scam detection system is that the same AI capabilities enabling fraud also make detection harder. A language model that generates convincing human text also generates text that scores well on standard authenticity metrics. Detection tools need to rely on behavioral signals — timing patterns, unusual request sequences, cross-referencing with known fraud networks — rather than content alone.

NordVPN's tool showed promise on behavioral analysis, correctly identifying several scams that passed content scrutiny but exhibited suspicious link structures or asked for sensitive information unusually early in a conversation. This suggests the most defensible strategy for AI scam detection is to look at patterns across a conversation rather than to analyze any single message in isolation.
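
One way to make that conversation-level idea concrete is to weight a request for sensitive information by how early it appears in the exchange. The sketch below uses invented signal names and thresholds; it illustrates the approach, not NordVPN's actual scoring.

```python
from dataclasses import dataclass

# Sketch of conversation-level risk scoring. The only signal modeled
# here is how early sensitive information is requested; weights and
# term lists are invented for illustration.

SENSITIVE_TERMS = ("ssn", "passport", "bank account", "routing number")

@dataclass
class Message:
    text: str
    turn: int  # position in the conversation, 1-based

def conversation_risk(messages: list[Message]) -> float:
    risk = 0.0
    for msg in messages:
        text = msg.text.lower()
        if any(term in text for term in SENSITIVE_TERMS):
            # An early request counts for more than the same request
            # deep into an established exchange.
            risk += 1.0 / msg.turn
    return min(risk, 1.0)

chat = [
    Message("Thanks for applying! Quick intro call tomorrow?", 1),
    Message("Before we proceed, send your bank account for payroll setup.", 2),
]
print(conversation_risk(chat))  # 0.5: sensitive request at turn 2
```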

Broader Implications for Cybersecurity

What this test illustrates is that the cybersecurity industry is entering a phase where AI-versus-AI conflict will become a permanent feature of the threat landscape. The companies best positioned to defend against AI-generated fraud are those with the largest training datasets of real-world scam examples — a data moat that established security firms have over newer entrants.

Users, meanwhile, should not treat any single tool as definitive protection. The best approach combines automated detection with personal verification habits: independently confirming recruiter identities, being wary of any process that moves unusually fast, and treating requests for financial information or personal documents early in a relationship as red flags regardless of what a checker says.

The broader story here is one of technological democratization cutting both ways. AI has made sophisticated fraud accessible to low-skill attackers and made detection tools more capable. The defense, for now, is not pulling ahead, but it is keeping pace.

This article is based on reporting by ZDNET.