Cybercrime’s AI phase is accelerating
Generative AI is no longer just changing productivity software and consumer tools. It is also reshaping online fraud and cybercrime. In the April 24 edition of its newsletter The Download, MIT Technology Review highlighted a core trend: AI-driven scams are expanding, and organizations are struggling to keep pace with the scale and speed of attacks.
The publication traces the shift back to the release of ChatGPT in late 2022, when large language models made it easy to generate convincing, human-like text. Cybercriminals quickly recognized the value. According to the report, they began using LLMs to compose malicious emails and have since expanded into turbocharged phishing, hyperrealistic deepfakes, and automated vulnerability scanning.
The direction of travel is what matters most. AI is lowering the cost of attack creation while increasing volume and plausibility. That combination changes the security equation for nearly every organization with a public digital footprint.
Why the problem is getting worse
MIT Technology Review’s formulation is blunt: AI is making attacks faster, cheaper, and easier to carry out. The article further says many organizations are struggling to cope with the sheer volume of cyberattacks and that the problem is likely to worsen as more criminals adopt these tools and the tools themselves improve.
That is a structural warning, not a one-off anecdote. Traditional cybersecurity defenses often depend on some combination of friction, detectability, and attacker cost. Generative AI weakens all three. It allows bad actors to produce polished text, mimic voices or images more credibly, and automate research or scanning tasks that once required more time or skill.
The result is not just better phishing. It is industrialized targeting.
From malicious emails to synthetic persuasion
The first visible wave of criminal AI use was text generation. Phishing attempts were once betrayed by bad grammar, awkward phrasing, or inconsistent style; that barrier has eroded. Large language models make it easy to generate emails that sound coherent, context-aware, and tailored to a target.
But the report makes clear that the field has moved past email composition. Hyperrealistic deepfakes extend fraud into voice, video, and identity simulation. Automated vulnerability scans add a technical layer, helping attackers probe systems at speed. These are not isolated tactics. Used together, they can support wider campaigns that combine social engineering with opportunistic system exploitation.
That convergence is what makes the current moment distinct. AI is not just a new tool in the attacker toolkit; it is increasingly the connective layer that helps fraud operations run at scale.
Why organizations are under pressure
The challenge for defenders is not only technical sophistication. It is volume. A modestly capable attacker can now generate far more tailored messages, variants, and test cases than before. That creates noise, raises the chance of a successful hit, and forces defenders to spend more time triaging.
MIT Technology Review’s warning that organizations are struggling with the sheer number of attacks captures a shift that many security teams have already felt. Even when any single scam is not especially advanced, the cumulative effect of many AI-assisted attempts can overwhelm staff and systems.
This is especially true when deception extends across channels. If email, audio, and video can all be cheaply synthesized or adapted, verification becomes more labor-intensive. Trust workflows that once relied on recognizing a tone, a writing style, or a familiar face become less dependable.
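To make that shift concrete, here is a minimal, purely illustrative sketch, not drawn from the article, of where verification is heading: replacing "I recognized the voice" with a machine-verifiable check. The shared secret, function names, and dollar amounts below are all hypothetical; in practice the same role is played by signed email, hardware security keys, or formal callback procedures.

    import hashlib
    import hmac

    # Hypothetical shared secret, distributed out of band (e.g., in person).
    # It stands in for the trust once placed in a familiar voice or face.
    SHARED_SECRET = b"replace-with-a-real-out-of-band-secret"

    def sign_request(message: str) -> str:
        # Attach an HMAC tag proving the sender holds the shared secret.
        tag = hmac.new(SHARED_SECRET, message.encode(), hashlib.sha256).hexdigest()
        return f"{message}|{tag}"

    def verify_request(signed: str) -> bool:
        # Recompute the tag and compare in constant time. A cloned voice or
        # imitated writing style cannot forge a valid tag without the secret.
        message, _, tag = signed.rpartition("|")
        expected = hmac.new(SHARED_SECRET, message.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)

    # A payment request is acted on only if it verifies.
    signed = sign_request("Wire $25,000 to account 12345")
    assert verify_request(signed)                                   # legitimate
    assert not verify_request("Wire $25,000 to account 99999|bad")  # forged

The point is not this particular mechanism but the property it has and a familiar face no longer does: a deepfake cannot pass the check without the secret.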
The bigger significance of the warning
The publication labels “supercharged scams” as one of the 10 things that matter in AI right now. That editorial framing matters because it puts criminal misuse alongside mainstream model development and commercial deployment as a defining feature of the field’s current phase.
In other words, AI risk is not a side conversation to the AI boom. It is part of the boom.
The article does not offer a specific policy fix or defensive blueprint. But it does support a strong conclusion: the security implications of generative AI are no longer hypothetical, and the attack surface is widening as capability diffuses.
What this means for the next stage of AI adoption
As AI systems become cheaper and more embedded in ordinary software, the criminal learning curve is likely to flatten further. Tools that begin as general-purpose productivity systems can still be repurposed, adapted, or imitated for malicious use. Each improvement in realism, speed, and accessibility affects both legitimate and illegitimate actors.
That does not mean every new AI feature increases cybercrime in a straight line. But the reporting clearly indicates that the barriers to launching persuasive scams have already fallen. The concern now is less whether criminals will use AI and more how quickly defenses can adapt to routine AI-assisted deception.
A security story, not just an AI story
The temptation in AI coverage is to focus on frontier models, competitive launches, and product rollouts. MIT Technology Review’s emphasis on scams is a reminder that the most immediate social effects of AI may arrive through misuse, not innovation branding.
That makes this a governance and operational issue as much as a technical one. Organizations that think about AI only as a tool for internal productivity may miss the more urgent reality: adversaries are adopting the same class of tools to attack more efficiently.
The article’s core warning is therefore straightforward and credible. AI has already changed cybercrime economics. The scams are more scalable, the outputs more convincing, and the burden on defenders heavier. That is likely to remain true even as the underlying models continue to evolve.
This article is based on reporting by MIT Technology Review. The original was published on technologyreview.com.