The Debate Has Shifted From Possibility to Present Tense
Google says the cybersecurity conversation around artificial intelligence has moved into a new phase. According to reporting from The Guardian, the company’s threat intelligence group concluded that AI-powered hacking has gone from a nascent issue to an industrial-scale threat in just three months.
The warning is significant because it reframes a debate that has often focused on what advanced models might enable in the future. Google's analysts argue that the future tense is already outdated: in their view, threat actors are now using commercial AI tools to improve the speed, scale and sophistication of real campaigns.
That distinction matters for defenders. If AI were only a looming capability, organizations could treat it as a planning problem. If it is already embedded in active operations, it becomes an immediate operational problem, one that affects vulnerability management, detection, incident response and the pace of defensive patching.
Commercial Models Are Part of the Threat Picture
The Guardian reported that Google’s assessment found criminal groups and state-linked actors from China, North Korea and Russia appear to be using commercial models, including Gemini, Claude and tools from OpenAI, to refine and scale attacks. The report does not say those companies are intentionally enabling malicious use. The more important point is that broadly available, high-capability systems are now part of the offensive toolkit.
Google threat analyst John Hultquist said there is a misconception that an AI vulnerability race is imminent when in reality it has already begun. He said threat actors are using AI to improve persistence against targets, test operations, build better malware and make other incremental gains.
Those incremental gains can matter as much as headline-grabbing breakthroughs. Attack campaigns often succeed because they become cheaper, faster and easier to repeat, not because every operation is radically novel. If AI reduces friction across reconnaissance, malware refinement, phishing variation or exploit testing, then the cumulative effect can be substantial even without fully autonomous cyber offense.