6 min to readThought Leadership

The agentic era is here. It’s time to act on AI governance.

Erik Stiphout
Erik StiphoutLead Product Architect - Microsoft Security
agentic-ai-governance-adobe-1494586346-blog-hero

Identity governance has always been about people. Who has access. Who approved it. Who should not have it anymore. If you have spent any time in this industry you have met that person: their account still on the company-wide distribution list two years after leaving the company, somehow still able to book meeting rooms, a ghost in the system that nobody got around to removing. Mildly embarrassing. Occasionally a compliance finding. Rarely a serious incident. 

The AI agent version of that problem is not mild and it is not embarrassing. Agents that hold mailboxes, read and write to SharePoint, and operate across your LoB systems, calendars, and internal APIs do so continuously, autonomously, and without ever appearing on a joiners-movers-leavers report. The perimeter did not move. The population inside it just changed in ways most governance frameworks were not (yet) built to see. 

The gap was already open before the agents arrived

AI adoption is not waiting for governance frameworks to catch up. Employees are already using AI tools across every business function, many of them unsanctioned, and the ownership question at the top of most organizations remains unresolved. According to a 2025 survey, CIOs control AI security decisions in 29% of organizations, while CISOs rank fourth at just 14.5%. Nearly 40% of organizations have no AI-specific governance in place at all, not because of negligence, but because AI dissolves the traditional boundaries that security was designed to defend. Networks, applications, data. AI operates across all three simultaneously and does not pause to ask who is responsible.

Organizations already managing hybrid estates, legacy dependencies, SaaS sprawl, and growing identity workloads face an immediate reality. Ungoverned AI lands in these complex environments, acting as a direct accelerant for every existing weakness.

From surface problem to active risk

Until recently the primary governance concern was data exposure. AI tools like Microsoft 365 Copilot surface content based on existing user permissions, which means every over-permissioned site, every orphaned file share, every piece of sensitive content that was never properly labelled became immediately reachable to anyone who knew how to ask. The data risk was always there. AI made it exploitable at scale. Agents are a different class of problem entirely, and the distinction matters.

Agents can be misdirected. An attacker who understands an agent's scope does not need to compromise a human identity or authenticate directly.

Agents do not just read. They act. They write. They send. They modify. They do not wait for your governance committee to schedule a meeting. An agent provisioned with access to a mailbox can draft and send email. An agent connected to SharePoint can alter documents. An agent integrated with a HRM application can update or delete records, continuously, autonomously, and without ever appearing on a joiners-movers-leavers report. Every permission gap that the first generation of AI tools made visible is now a capability that agents can exercise on behalf of whoever, or whatever, is directing them.

And that last point is where the risk sharpens considerably. Agents can be misdirected. An attacker who understands an agent's scope does not need to compromise a human identity or authenticate directly. They need to place malicious instructions somewhere the agent will retrieve them as part of a legitimate task: a document, an email, a web page the agent consults. The agent cannot distinguish a trusted instruction from a malicious one embedded in trusted content. It executes. That is what it was built to do.

The blast radius has expanded far beyond what an attacker can merely see, encompassing everything they can weaponize your own systems to execute. Gartner estimates that 40% of enterprise applications will feature task-specific AI agents by 2026. Only 6% of organizations currently have an advanced AI security strategy in place. We must therefore address this gap as an urgent, present-day reality.

The common thread among companies that manage the risk effectively is that they treat the AI deployment conversation and the data governance conversation as a single workstream.

What a secured AI deployment actually looks like

Most organizations are not managing this well yet, which makes the ones that are worth paying attention to. The common thread among companies that manage the risk effectively is that they treat the AI deployment conversation and the data governance conversation as a single workstream.

Before expanding AI access, they run a data security posture assessment to understand what their systems can actually reach and identify the highest-risk exposures first. They use SharePoint Advanced Management to identify ownerless sites, bulk-remediate overshared links, and apply Restricted Content Discovery to the most sensitive content before any agent touches it. They deploy discovery tooling to gain a complete picture of which AI applications are in use across the organization, and they give employees governed alternatives before enforcing restrictions. For custom AI deployments, they apply protection at the model layer and configure prompt inspection and content safety controls before any application reaches production data.

The real differentiator lies less in the specific tools purchased and more in how early leadership makes governance a strict precondition of deployment. The productivity case for enterprise AI is real. So is the risk case for deploying it without the right foundation. These are not competing positions. They are sequential ones.

The most common objection to AI governance investment is that it creates friction and slows adoption. The evidence runs the other way.  The organizations that establish clear ownership, clean up their data estates, and put proportionate controls in place before agentic capabilities become the default will spend less time and money on remediation. They will also have a more defensible position when regulators, auditors, or boards start asking questions that, increasingly, they will.

Three things to do now, not after the next incident

The window to get ahead of this is not unlimited and it is narrowing as agent deployment accelerates. Three priorities stand out for organizations that want to move from reactive to structured.

  1. Get visibility on what is already running. You cannot govern what you cannot see, and 32% of organizations’ data security incidents already involve generative AI tools. The imminent arrival of Agent 365 reflects a broader shift toward more structured control of AI agents. Discovery tooling that catalogues AI applications in use, assesses them against security and compliance risk factors, and maps them to your data environment is the foundation for every governance decision that follows.
  2. Address permission sprawl before agents inherit it. The oversharing problem that passive AI tools exposed becomes an operational risk the moment agents are in the picture. Identifying overshared content, applying classification at scale, and restricting AI access to the highest-risk environments before autonomous workflows reach them is targeted remediation, not a large transformation program. The organizations that do this before they need to will spend significantly less time explaining themselves afterwards.
  3. Define ownership before the technology forces the question. Security leaders consistently identify poor integration, lack of unified visibility, and fragmented tooling as their primary governance challenges. The underlying issue in most cases is not tooling. It is that nobody agreed in advance on who owns the decision. Establishing clear ownership of AI governance, a defined approval process for new AI capabilities, and explicit procurement criteria for AI-enabled products turns security from a reactive blocker into a proactive function. That distinction matters both operationally and culturally.

Join the conversation in Prague

If this is the challenge sitting on your desk right now, you are not alone and the answers are closer than they may feel. At the upcoming Cybersecurity Forum in Prague (and online), we are working through exactly what AI governance looks like in practice for organizations managing real-world constraints: hybrid estates, lean teams, legacy dependencies, and a fast-moving technology landscape. The session is designed for IT security decision-makers and procurement leaders who need a clear, actionable framework rather than another set of product slides. 

If your organization is navigating AI adoption and needs to get the governance foundation right before agentic capabilities become the default, this is the session to be in the room for. Register now and bring your hardest question. We have probably already seen it. 

A blue and purple abstract background.

Join us at Cybersecurity Forum 2026

AI is accelerating innovation but also amplifying risk. Get more expert perspectives on how to adapt at 6th annual Cybersecurity Forum – in Prague and online.

Join us at Cybersecurity Forum 2026

AI is accelerating innovation but also amplifying risk. Get more expert perspectives on how to adapt at 6th annual Cybersecurity Forum – in Prague and online.

Author

Erik Stiphout

Erik Stiphout
Lead Product Architect - Microsoft Security