AI agents are becoming a governance problem, not just a productivity tool

As companies prepare to deploy AI agents across business workflows, security and governance are becoming central obstacles to scaling the technology safely.

An MIT Technology Review Insights article produced in association with the Deloitte Microsoft Technology Practice argues that agentic AI can open a new enterprise attack surface. The concern is that insecure agents may be manipulated into accessing sensitive systems, proprietary data, or tools beyond their intended role.

The article is sponsored content rather than MIT Technology Review editorial reporting, but it includes survey figures and a clear enterprise risk thesis. According to the Deloitte AI Institute 2026 State of AI report cited in the piece, nearly three in four companies (74%) plan to deploy agentic AI within two years, while only 21% report having a mature model for governing autonomous agents.

Non-human identities are multiplying

One of the article’s key points is that modern enterprises already manage a growing number of non-human identities, such as service accounts, machine credentials, automated workflows, and software actors. Agentic AI could accelerate that trend because agents may need permissions, data access, tool access, and the ability to act on behalf of users or business functions.

That creates a different risk profile from ordinary chatbot use. A conversational system that answers questions is one thing; an agent that can retrieve files, call internal tools, write to systems, or initiate actions is another. Governance has to define what the agent is allowed to do, whose authority it is using, and how its behavior is monitored.
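The three governance requirements above — scoped permissions, delegated authority, and monitored behavior — can be sketched as an allow-list check with an audit trail. This is a minimal illustration, not an implementation from the article; all names here (`AgentPolicy`, the tool names, the agent identifiers) are hypothetical:

```python
# Hypothetical sketch of agent-scoped permissions with an audit trail.
# All identifiers below are illustrative assumptions, not from the source article.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """What an agent may do, on whose authority, and a record of each decision."""
    agent_id: str
    acting_for: str                  # user or business function the agent represents
    allowed_tools: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def authorize(self, tool: str) -> bool:
        # Deny by default: a tool call is permitted only if explicitly listed.
        allowed = tool in self.allowed_tools
        # Every decision is logged, including denials, for later oversight.
        self.audit_log.append({
            "agent": self.agent_id,
            "on_behalf_of": self.acting_for,
            "tool": tool,
            "allowed": allowed,
        })
        return allowed

policy = AgentPolicy(
    agent_id="invoice-agent",
    acting_for="accounts-payable",
    allowed_tools={"read_invoice", "draft_email"},
)
assert policy.authorize("read_invoice")        # in scope: permitted and logged
assert not policy.authorize("delete_records")  # out of scope: denied but still logged
```

The design point is the deny-by-default check plus the unconditional log entry: even rejected actions leave a trace, which is what makes the agent's behavior auditable.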

The source article says executives' top concern is data privacy and security, cited by 73% of respondents, followed by legal, intellectual property, and regulatory compliance at 50%, and governance capabilities and oversight at 46%.