Insights | 2025-11-18

Agentic Governance: Who Authorized That?

Autonomy without accountability creates unmanaged risk. Why agentic systems require identity-first governance.

The New Insider Threat

In 2024, much of the security conversation around AI focused on prompt injection: whether a user could coerce a chatbot into producing inappropriate or misleading output.

By late 2025, the risk profile has shifted. Organizations are increasingly deploying AI agents that do not merely generate text, but take actions.

In certain environments, agents are now used to write and deploy code, execute database queries, modify cloud configurations, and initiate operational workflows. These systems can be powerful force multipliers. They can also introduce a new and underappreciated attack surface.

An agent that is authorized to execute actions can, in practice, have an impact comparable to that of a human user with similar privileges. When such agents are deployed with overly broad permissions, weak identity boundaries, or limited oversight, they represent a form of insider risk, whether intentional or accidental.

The Risk of Excessive Agency

The OWASP Top 10 for Large Language Model Applications (2025) identifies Excessive Agency (LLM06) as a critical vulnerability category. It describes systems where agents are granted capabilities or autonomy beyond what is operationally necessary.

Common contributing factors include:

  1. Excessive functionality: Access to tools or integrations that exceed the agent’s defined role.
  2. Excessive permission: Read or write access to data and systems outside the agent’s intended scope.
  3. Unbounded autonomy: The ability to execute high-impact actions without secondary authorization or validation.
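The three factors above lend themselves to mechanical checks. As a minimal, hypothetical sketch (role names, tool names, and the policy structure are all illustrative, not a specific product's schema), an agent's granted capabilities can be audited against its declared role before deployment:

```python
# Hypothetical pre-deployment audit: compare what an agent has been
# granted against what its declared role actually requires.
ROLE_POLICY = {
    "support-agent": {
        "allowed_tools": {"read_order_status", "search_kb"},
        "max_autonomy": "read_only",  # no unreviewed writes
    },
}

def audit_agent(role: str, granted_tools: set, autonomy: str) -> list:
    """Return a list of findings; an empty list means the grant fits the role."""
    policy = ROLE_POLICY[role]
    findings = []
    extra = granted_tools - policy["allowed_tools"]
    if extra:  # factor 1/2: functionality or permission beyond the role
        findings.append(f"excessive grant: {sorted(extra)}")
    if autonomy != policy["max_autonomy"]:  # factor 3: unbounded autonomy
        findings.append(f"autonomy {autonomy!r} exceeds {policy['max_autonomy']!r}")
    return findings

findings = audit_agent("support-agent",
                       {"read_order_status", "write_orders"},
                       "autonomous_write")
```

A check like this turns "operationally necessary" from a judgment call into a reviewable policy artifact.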

Consider a customer support agent granted write access to a production database to “check order status”. In such a configuration, a single compromised prompt, misclassification, or logic error could result in data modification or deletion, far beyond the original intent of the system.

This is not a hypothetical concern. It’s a predictable outcome of insufficient governance.

Identity-First Agent Architecture

At Evodant, we treat agents not as generic software components, but as identity-bearing actors within a system.

In high-assurance environments, this implies applying established Identity and Access Management (IAM) principles to agents, just as we do for human users and service accounts.

1. Agents Require Explicit Identity

An agent should not implicitly inherit the permissions of its developer or host application. It should be assigned a distinct service identity, authenticated through the organization’s identity provider, and scoped according to its defined role.
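One way to make that concrete is to mint each agent instance its own short-lived, narrowly scoped credential rather than letting it reuse the host application's. The sketch below is an assumption-laden illustration (field names like sub, scopes, and exp mirror common token conventions, but no specific identity provider is implied):

```python
# Illustrative: each agent instance gets a distinct, expiring service
# identity with explicit scopes, instead of inheriting host permissions.
import time
import uuid

def issue_agent_credential(agent_role: str, scopes: list, ttl_s: int = 900) -> dict:
    """Mint a unique, short-lived identity for one agent instance."""
    return {
        "sub": f"agent:{agent_role}:{uuid.uuid4()}",  # unique per instance
        "scopes": scopes,                             # explicit, role-scoped
        "exp": time.time() + ttl_s,                   # short-lived by default
    }

cred = issue_agent_credential("support-agent", ["orders:read"])
```

Because every credential names a single agent instance, downstream logs can attribute each action to exactly one identity.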

2. Least Privilege Applies to Agents

Agents should be restricted to the minimum data and tools required to perform their function, in keeping with the principle of least privilege. Logical separation between agents, in both data access and tool availability, reduces the blast radius of any compromise or error.
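A common enforcement point for this is a tool gateway that sits between agents and their integrations: anything not on an agent's allow-list is denied by default. The following is a minimal sketch under that pattern; the class, tool names, and registry are hypothetical:

```python
# Minimal per-agent tool gateway: deny by default, allow only tools
# explicitly granted to that agent's identity. Names are illustrative.
class ToolGateway:
    def __init__(self, allow_list: dict):
        self.allow_list = allow_list  # agent identity -> permitted tool names

    def invoke(self, agent_id: str, tool: str, *args):
        permitted = self.allow_list.get(agent_id, set())  # unknown agent -> nothing
        if tool not in permitted:
            raise PermissionError(f"{agent_id} may not call {tool}")
        return TOOLS[tool](*args)

# Toy tool registry standing in for real integrations.
TOOLS = {"read_order_status": lambda order_id: {"order": order_id, "status": "shipped"}}

gw = ToolGateway({"agent:support": {"read_order_status"}})
```

Separate gateways (or separate allow-lists) per agent keep one compromised agent from reaching another's tools.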

3. Human-in-the-Loop for High-Consequence Actions

For actions with material impact, such as deleting data, transferring funds, or modifying production configurations, fully autonomous execution introduces significant risk.

In these cases, a safer pattern is staged execution:

  • The agent prepares or drafts the action.
  • The proposal is presented to an authorized human operator (human-in-the-loop).
  • Execution proceeds only after explicit approval, supported by cryptographic provenance and audit logging.
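The three steps above can be sketched as a small state machine: draft, approve, execute, with a hash binding the approval to the exact proposal and an audit record emitted on execution. This is a simplified illustration (a production system would use signed approvals, not a bare hash; all names are assumptions):

```python
# Sketch of staged execution: the agent drafts an action, a human
# approves it, and only then does it run, leaving an audit trail.
import hashlib
import json
import time

class StagedAction:
    def __init__(self, agent_id: str, action: dict):
        self.agent_id = agent_id
        self.action = action
        self.approved_by = None
        # Hash the proposal so the approved action cannot be silently altered.
        self.digest = hashlib.sha256(
            json.dumps(action, sort_keys=True).encode()).hexdigest()

    def approve(self, operator: str):
        self.approved_by = operator  # human-in-the-loop sign-off

    def execute(self, run):
        if self.approved_by is None:
            raise PermissionError("high-impact action requires human approval")
        result = run(self.action)
        audit = {"agent": self.agent_id, "approved_by": self.approved_by,
                 "digest": self.digest, "ts": time.time()}
        return result, audit
```

The digest gives a lightweight form of provenance: the audit record proves which proposal was approved, not merely that something was.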

This approach preserves efficiency while maintaining accountability.

Governance at Machine Speed

AI systems operate at machine speed. Governance mechanisms must be designed to function at that same pace, without relying on informal trust or manual review as the default safeguard.

As agentic systems become more capable, organizations must extend their security models beyond networks and endpoints to include intent, authorization, and provenance.

If an agent acts and you cannot clearly answer who authorized that action, under what identity, and within which constraints, governance has already failed.
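One way to make those three questions answerable by construction is to emit, for every agent action, a structured record binding the action to an identity, an authorizer, and the constraints in force at execution time. A hypothetical record shape (field names are illustrative, not any particular product's schema):

```python
# Hypothetical audit record: every agent action carries who authorized
# it, under what identity, and within which constraints.
import json
import time

def audit_record(agent_identity: str, authorized_by: str,
                 constraints: dict, action: str) -> str:
    record = {
        "who_authorized": authorized_by,   # human operator or policy id
        "identity": agent_identity,        # the agent's own service identity
        "constraints": constraints,        # scopes in force at execution time
        "action": action,
        "ts": time.time(),
    }
    return json.dumps(record, sort_keys=True)  # one line per action

line = audit_record("agent:support:42", "operator@example.com",
                    {"scopes": ["orders:read"]}, "read_order_status")
```

If such a record cannot be produced for an action, that absence is itself the governance finding.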

Agentic capability without identity-first controls is not autonomy; it is unmanaged risk.