Defensive AI is advancing, but access is uneven

The rise of powerful artificial intelligence tools is changing cybersecurity in two directions at once. Attackers are using models to find and exploit vulnerabilities faster than before, while a smaller group of major firms and institutions is gaining access to advanced defensive systems that can identify weaknesses at scale. The result, according to reporting by Rest of World, is a widening global cybersecurity gap in which resource-rich organizations may become more resilient even as everyone around them becomes more exposed.

The article centers on Anthropic’s Mythos Preview, which the company said had discovered thousands of vulnerabilities in major operating systems and web browsers. Initial access to the tool was given to roughly 40 technology firms and institutions. But that access did not extend to most governments and central banks, leaving many public-sector and lower-resourced organizations dependent on a handful of large AI companies to help secure critical systems.

That asymmetry matters because the threat environment is accelerating. Rest of World cites CrowdStrike data showing that AI-enabled attacks rose 89% in 2025 versus the prior year. The piece argues that AI systems can now weaponize software vulnerabilities within hours of discovery, compressing the already narrow window between a flaw being identified and being exploited.

Why the gap could become systemic

Cybersecurity has long been unevenly distributed. Wealthy firms can hire deep technical teams, purchase expensive tools and maintain mature incident response capabilities. Smaller companies, local institutions and developing states often cannot. What changes in the AI era is speed. If machine-driven attack tools can scan, adapt and generate exploit paths much faster than human teams, then the organizations already operating with thin staffing and legacy systems face a steeper disadvantage.

The source text highlights another pressure point: labor. A large global shortage of cybersecurity professionals means that even where leaders understand the threat, there may not be enough experienced people available to meet it. AI can in theory help fill that gap, but only if strong defensive tools are widely available, affordable and deployable in the environments that need them most.

That is not the world described in the report. Instead, the most capable defenses appear to be concentrated among top-tier firms and select partners. If widely used commercial software gets patched quickly while more customized or sovereignty-driven systems lag, then the gap is not only between rich and poor organizations. It is also between software ecosystems with direct ties to major U.S. technology companies and those without them.

Attack automation is lowering the skill floor

The cultural and political significance of this shift goes beyond enterprise IT. AI tools can help criminals produce phishing emails, deepfake videos, voice clones and malware with much less effort than before. They can also help identify vulnerable targets and generate exploitation workflows. In effect, AI can reduce the amount of expertise required to do damage.

That dynamic expands the range of actors who can participate in cybercrime or disruption campaigns. The Rest of World report includes an example of a North Korean hacker group that used AI tools from OpenAI and Cursor in an operation that allegedly stole up to $12 million in cryptocurrency over a period of months. Whether such tools are used directly for coding, reconnaissance or social engineering, the pattern is the same: more capability is becoming available to more attackers at lower cost.

For defenders, that creates a lopsided equation. A hospital, local bank or regional utility may need to secure every critical system, vendor pathway and employee workflow. An attacker, by contrast, needs only one effective opening. AI widens that mismatch if it can test more openings faster than under-resourced teams can close them.

No one stays insulated for long

One of the report’s strongest points is that cyber risk does not remain neatly local. Smaller institutions and less-protected nations are part of the same financial, communications and software networks that connect the global economy. A weak link in one jurisdiction or industry can become a pathway into others through vendors, payment systems, partner networks or infrastructure dependencies.

That means the concentration of defensive AI among a limited set of organizations may produce private gains without delivering public safety at scale. Even the best-defended multinationals remain exposed to suppliers, customers and state systems that may be slower to detect and patch flaws. In that sense, unequal access to defensive AI is not just a fairness problem. It is a collective security problem.

The article quotes observers who argue that “cybersecurity is never an isolated problem,” and the logic holds. If one part of the system remains far behind, the whole system becomes harder to trust.

The policy challenge ahead

The source material does not offer a detailed regulatory blueprint, but it points toward a central policy dilemma. The companies developing frontier defensive models may have legitimate reasons to restrict access, including concerns that the same tools could be misused for offensive work. Yet severe restrictions can leave the broader world exposed at exactly the moment when attack automation is becoming cheaper and faster.

That tension will likely shape the next phase of AI governance in cybersecurity. Governments may push for public-interest access arrangements, secure evaluation frameworks or partnerships that broaden defensive coverage without simply releasing high-risk tools into the open. Meanwhile, organizations with limited resources may need to focus on practical resilience: reducing attack surface, patching faster, segmenting systems and preparing for incidents that are increasingly likely to involve AI on the other side.

The deeper cultural shift is already visible. Cybersecurity is no longer just about defending networks from human adversaries using software. It is increasingly about defending institutions from software that helps build better adversaries. If access to the best defensive AI remains narrow, the gap between those who can keep up and those who cannot may define the next era of digital inequality.

This article is based on reporting by Rest of World.

Originally published on restofworld.org