Stopping the quiet drift toward excessive agency with re-permissioning


Treating AI agents like "harmless helpers" is a disaster in the making. If you don't audit their access now, your automation will eventually become a liability.

In their infancy, LLMs were not difficult to contain. You gave a prompt, they responded, and if something was wrong it was usually “just text”: a summary that missed the best bits, a tone-deaf line or a wordy sentence.

But then LLMs were co-opted as the core reasoning layer inside AI agents, and the game changed overnight. Agents connect to databases and business applications, interact with external systems and execute multi-step tasks.

So, the question isn’t only, “How capable is the model?” The more important question I believe is, “How are AI agents being treated and permissioned inside your environment?”

The failures that sting aren’t limited to moments when an agent spouts inaccuracies or conjures hallucinations; they also occur when the agent takes actions it shouldn’t, simply because it has the capability, the permissions and the autonomy to do so.

The shift from answering to execution

I’m seeing interoperability accelerate agent adoption. Standards like the Model Context Protocol (MCP) are making it easier for models to connect with tools and data sources, while agent-to-agent approaches allow agents to exchange context, goals and actions across workflows.

More connections mean more reach, and more reach means more room for things to go wrong.

With AI spending forecast to hit $2.5 trillion in 2026, and with 40% of enterprise apps expected to embed task-specific AI agents by the end of 2026, the real question is no longer adoption; it’s visibility and control. Numbers like these make it clear that AI integration is scaling quickly, but a security gap is scaling with it.

AI security checks are catching up quickly, rising from 37% in 2025 to 64% in 2026, but that still leaves over a third of organizations without a formal assessment. Little wonder that right-sized permissioning so often lags behind.

As I have observed, when agents operate across multiple tools and systems, organizations are no longer managing just “AI output quality.” They’re managing action pathways, often in environments where it’s difficult to pinpoint where a request went wrong, where an input was manipulated, or which step triggered the final action. Permissioning, in this context, becomes the difference between useful automation and unauthorized behavior at scale.

Excessive agency is directly proportional to over-permissioning

Organizations are worried about the level of autonomy AI introduces into their operational framework. Nearly three-quarters of organizations say agents often receive more access than necessary. It’s this excessive agency that needs to be reined in.

In practice, unchecked autonomy within a workflow means the agent can access systems it doesn’t need, execute actions outside its predetermined role and interact with external systems beyond predefined parameters. The biggest risk is no longer a ‘wrong answer’ but an ‘unauthorized action’: unintended data exposure, unauthorized commands or integrity-impacting changes that are difficult to unwind.

Over-permissioning is a sneaky beast. I’ve seen it slowly creep into agentic AI workflows, usually driven by three common factors:

  • The people in charge, in their ‘wisdom,’ enable a broad range of tools/APIs to make the agent even more useful.
  • Integration friction prompts teams to grant elevated access just to make things work smoothly, leaving extra permissions that exceed the safe-use threshold.
  • Agents are allowed to decide with fewer human checkpoints, especially for actions that have a tangible impact. This can stem from blind trust in AI and a focus on being an execution-first business.

3 systemic risks in agentic AI workflows

Less than half of businesses have adopted formal risk management frameworks for AI, and I believe that’s where the real challenge with agentic AI begins. The challenge isn’t what an agent can do, but that its actions become harder to observe and govern once it operates across connected systems.

First, many models are effectively black boxes. Opaque internal workings make it harder to verify outputs, explain decisions or confidently audit what happened after the fact.

Second, capability invites overreliance. In conversations I’ve had with CISOs, a consistent theme emerges: as agents appear to “handle it,” humans step back and critical reviews thin out. The result is mistakes and biases that persist longer because fewer people are watching closely, which is especially dangerous in high-stakes environments.

Third, attackers don’t need to compromise the model itself if they can compromise what the agent reads or the services feeding it. Connected workflows create supply-chain-style attack paths, where upstream manipulation becomes the lever.

The road toward re-permissioning: Controlling agency

Re-permissioning is not about limiting the autonomy of AI agents; it’s about controlling that autonomy appropriately. AI agents execute, and we need them to execute well, but we must run continuous permission audits to identify agents slowly climbing the ‘agency’ ladder.
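To make that audit concrete, here is a minimal Python sketch of one way to spot agency drift: compare what an agent is granted against what it actually uses, and against the grants approved at onboarding. The record shape, field names and the example agent are hypothetical, not the API of any specific IAM product; in practice the inputs would come from your identity platform and agent audit logs.

```python
from dataclasses import dataclass, field

# Hypothetical record; real inputs would come from IAM and agent audit logs.
@dataclass
class AgentPermissionRecord:
    agent_id: str
    granted_tools: set[str]                                 # tools/APIs the agent may call today
    used_tools: set[str] = field(default_factory=set)       # tools actually observed in logs
    baseline_tools: set[str] = field(default_factory=set)   # grants approved at onboarding

def audit_agency_drift(record: AgentPermissionRecord) -> dict[str, set[str]]:
    """Flag permissions that suggest an agent is climbing the 'agency' ladder."""
    return {
        # Granted but never exercised: candidates for revocation under least privilege.
        "unused_grants": record.granted_tools - record.used_tools,
        # Grants added since the approved baseline: require re-approval.
        "drifted_grants": record.granted_tools - record.baseline_tools,
        # Usage outside any current grant: a hard policy violation to investigate.
        "unauthorized_usage": record.used_tools - record.granted_tools,
    }

if __name__ == "__main__":
    record = AgentPermissionRecord(
        agent_id="invoice-agent",
        granted_tools={"read_invoices", "send_email", "update_erp"},
        used_tools={"read_invoices", "send_email", "delete_invoices"},
        baseline_tools={"read_invoices", "send_email"},
    )
    for finding, tools in audit_agency_drift(record).items():
        if tools:
            print(f"{finding}: {sorted(tools)}")
```

Run on a schedule, a check like this turns “permission creep” from an anecdote into a reviewable report.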

Organizations must have complete visibility so they can evaluate agentic AI interactions, flag irregular behaviors, verify that permissions conform to policy and run real-world exercises like prompt-injection tests to guard against vulnerabilities. Also, adopt a human-in-the-loop workflow in which human oversight is mandatory when sensitive data, financial decisions, access changes or major operational updates are involved.
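As an illustration of the human-in-the-loop idea, the sketch below routes agent-proposed actions through an approval callback whenever they fall into a sensitive category. The action names, the SENSITIVE_ACTIONS set and the stand-in callables are assumptions for illustration only; a real deployment would wire the approval step into a ticketing or chat-ops flow.

```python
from typing import Callable

# Hypothetical sensitivity rules; tune these to your own policy.
SENSITIVE_ACTIONS = {"transfer_funds", "change_access", "export_customer_data", "deploy_change"}

def execute_with_oversight(
    action: str,
    payload: dict,
    run: Callable[[str, dict], str],
    request_approval: Callable[[str, dict], bool],
) -> str:
    """Run an agent-proposed action, routing sensitive ones through a human approver."""
    if action in SENSITIVE_ACTIONS:
        # Mandatory human checkpoint: the agent may propose, but a person decides.
        if not request_approval(action, payload):
            return f"BLOCKED: human approver rejected '{action}'"
    return run(action, payload)

# Usage with stand-in callables simulating an approver who says no.
result = execute_with_oversight(
    action="transfer_funds",
    payload={"amount": 25_000, "to": "vendor-42"},
    run=lambda a, p: f"executed {a} with {p}",
    request_approval=lambda a, p: False,
)
print(result)  # BLOCKED: human approver rejected 'transfer_funds'
```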

It’s also necessary to avoid giving agents tools ‘just in case they need them.’ Instead, implement least-privilege context sharing, limiting the agent’s view and tool access to only what the task truly requires.
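A least-privilege flavor of this can be as simple as a per-task allowlist: the agent only ever sees the tools mapped to the task at hand, and unknown tasks get nothing by default. The task and tool names below are hypothetical, not tied to any particular agent framework.

```python
# Hypothetical task-to-tool mapping; names are illustrative only.
TASK_TOOL_ALLOWLIST = {
    "summarize_ticket": {"read_ticket"},
    "draft_reply": {"read_ticket", "read_kb"},
    "close_ticket": {"read_ticket", "update_ticket_status"},
}

def tools_for_task(task: str, available_tools: dict[str, object]) -> dict[str, object]:
    """Expose only the tools a task truly requires, instead of the whole toolbox."""
    allowed = TASK_TOOL_ALLOWLIST.get(task, set())  # unknown tasks get no tools by default
    return {name: tool for name, tool in available_tools.items() if name in allowed}

# Example: the full toolbox exists, but the agent only receives the scoped subset.
all_tools = {
    "read_ticket": object(), "read_kb": object(),
    "update_ticket_status": object(), "delete_ticket": object(),
}
print(sorted(tools_for_task("draft_reply", all_tools)))  # ['read_kb', 'read_ticket']
```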

Finally, let me emphasize that you shouldn’t forget the agentic AI supply chain: integrations, libraries, APIs and third parties. These need to be vetted, patched and secured with tight network controls to build a trusted ecosystem and reduce the risk of upstream manipulation.
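One narrow example of such a network control is an egress allowlist that lets the agent reach only vetted upstream hosts. The hostnames below are placeholders, not real services, and the check is a minimal sketch of the idea rather than a full network policy.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of vetted upstream services the agent may call.
APPROVED_ENDPOINTS = {"api.internal-erp.example.com", "mcp.tools.example.com"}

def is_egress_allowed(url: str) -> bool:
    """Tight network control: only let the agent reach vetted, pre-approved hosts."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_ENDPOINTS

for url in ("https://mcp.tools.example.com/run", "https://paste-site.example.net/exfil"):
    print(url, "->", "allowed" if is_egress_allowed(url) else "blocked")
```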

If AI agents are treated like harmless helpers, they’ll be permissioned like harmless helpers, and excessive agency becomes normalized.

We must pump the brakes on the assumption that unchecked autonomy is inevitable. Rein in broad functionality and permissions, and focus on instilling oversight where it matters. Agents can enhance operations, but only if they’re governed as actors within guardrails and not trusted by default.

This article is published as part of the Foundry Expert Contributor Network.