Security is no longer an add-on argument

One of the more consequential shifts in enterprise technology is also one of the least glamorous: cybersecurity is being pushed from the edge of product strategy toward the center. A session at MIT Technology Review’s EmTech AI event frames that shift in stark terms, arguing that security systems already under strain are being challenged further as AI increases complexity and expands the attack surface.

The source material is a sponsored session description rather than a reported news article, but the premise it outlines is still revealing. The claim is that legacy cybersecurity approaches are becoming harder to sustain in an AI-rich stack, and that security must be designed with AI at its core instead of layered on after deployment.

Why the framing matters

That argument reflects a wider change in how organizations think about risk. In earlier eras of software adoption, security was often treated as a compliance function or a late-stage control process. AI complicates that model because it introduces new classes of data handling, automation, inference, and system behavior that can create vulnerabilities upstream of conventional defenses.

In practical terms, AI can widen exposure in several ways. It can accelerate application development, introduce opaque model behavior, change where sensitive data flows, and increase dependency on connected services. None of those outcomes automatically produces a breach, but each makes governance and assurance more demanding.

Security debt meets AI complexity

The session description says cybersecurity was already under strain before AI entered the stack. That point deserves emphasis. Many enterprises are dealing with years of accumulated security debt: fragmented tooling, inconsistent identity controls, cloud sprawl, incomplete asset visibility, and uneven data governance. AI does not replace those problems. It compounds them.

That helps explain the source text’s warning about the limits of legacy approaches. A defensive architecture built for static applications and predictable workflows may not be enough when systems are increasingly adaptive, model-driven, and spread across hybrid environments.

A view from the security vendor side

The featured speaker, Tarique Mustafa of GC Cybersecurity, is presented in the source material as a longtime builder of AI-powered cybersecurity and data compliance systems with deep experience in areas such as data classification, data leak prevention, and data security posture management. The event framing emphasizes autonomous collaboration, large-scale inference, and the idea of rethinking data protection through AI-native methods.

Because the source is sponsored, those claims should be read as positioned arguments rather than independently verified outcomes. Still, they capture a real strategic direction in the market: security vendors increasingly believe that defense systems must become more automated, more context-aware, and more deeply integrated with the data environments they protect.

From perimeter thinking to embedded resilience

The broader implication is that cybersecurity is being reconceived less as a perimeter and more as infrastructure. If AI systems are deeply embedded in workflows, decision support, and enterprise data flows, then security has to be embedded with comparable depth. That includes where information is classified, how permissions are enforced, how anomalies are surfaced, and how exfiltration is detected before damage spreads.

This is one reason AI-era security debates often converge on architecture rather than only on products. The question is not just which tool to buy. It is how to build systems so that intelligence, automation, and protection reinforce one another instead of opening gaps.

What this says about the next phase of enterprise AI

The most useful takeaway from the EmTech framing is not a specific product pitch. It is the recognition that AI adoption and security design can no longer be sequenced as separate steps. Organizations that deploy first and secure later may find that “later” becomes far more expensive and far less effective.

As AI capabilities spread through enterprise software, the winners are unlikely to be the companies that simply add more models. They are more likely to be the ones that can prove their systems remain governable, inspectable, and resilient under AI-driven change.

That is why cybersecurity is emerging as one of the most important innovation stories inside the AI economy. The real test is no longer whether companies can build intelligent systems. It is whether they can build them without making themselves harder to defend.

This article is based on reporting by MIT Technology Review.