Security & Governance for Agentic AI
Historically, data pipelines have involved humans at every step, with people responsible for interpreting outputs and determining the actions that follow. The introduction of agentic AI begins to change this dynamic. AI agents can interpret data, make decisions, and take actions within systems without explicit approval at every step.
Traditional data governance has focused on who can access data and how that data is structured and used. Agentic AI extends this challenge further, introducing the need to govern not only access to data but also the behaviour of systems that can act on it. However, governance frameworks for systems capable of autonomous action are still emerging, making careful design and oversight essential.
What “Agentic AI” Actually Means in Practice
In the rapidly changing AI and data landscape, it is helpful to clarify what we mean by terms such as AI, copilots, and agents. The term ‘artificial intelligence’ itself has evolved over time. For many years, it was commonly used to describe advanced machine learning models, particularly neural networks capable of identifying patterns in complex data. More recently, the term has broadened to include neural networks specialised for language, known as large language models (LLMs), and the systems built on top of them, such as copilots and agents.
Copilots and agents are typically underpinned by the same LLM technology, but their roles differ. A copilot recommends actions and assists a user. An agent can also act autonomously: it can trigger workflows, update records, initiate transactions, and call APIs across systems, including making calls to other agents. Despite this additional ability to act, an agent still has an identity within a system and is bound by rules and permissions in the same way a human user is. In practical terms, AI systems are no longer just interpreting data within enterprise environments; they are beginning to act within them.
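To make the identity point concrete, here is a minimal sketch assuming a hypothetical agent runtime; AgentIdentity, invoke_tool, and the scope names are illustrative, not a real API. The pattern is that every tool call is authorised against the agent's explicitly granted scopes, exactly as a human user's request would be.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    # The agent authenticates as a first-class identity, not a shared account.
    name: str
    scopes: set[str] = field(default_factory=set)  # permissions granted explicitly

def invoke_tool(agent: AgentIdentity, tool: str, required_scope: str) -> None:
    # Every tool call is checked against the agent's granted scopes,
    # just as a human user's request would be.
    if required_scope not in agent.scopes:
        raise PermissionError(f"{agent.name} lacks '{required_scope}' for {tool}")
    print(f"{agent.name} invoked {tool}")

# This hypothetical agent may read invoices but cannot initiate payments.
billing_agent = AgentIdentity("billing-agent", scopes={"invoices.read"})
invoke_tool(billing_agent, "list_invoices", required_scope="invoices.read")
try:
    invoke_tool(billing_agent, "pay_invoice", required_scope="payments.write")
except PermissionError as err:
    print(err)  # the attempted payment is refused
```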
Why Traditional Data Governance Is No Longer Sufficient
Much of my personal experience working with data has involved developing semantic models on the Fabric platform. Well-designed models are unambiguous, and proper governance ensures that system identities, whether human or AI, only have the minimum access to data that they require. AI prompts can trigger a semantic model query, and sound model design, combined with proper governance, helps ensure that the AI generates appropriate queries while respecting access controls and constraints.
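As an illustration of that pattern, here is a minimal, platform-agnostic sketch; run_generated_query, execute_as, and the blocked keywords are hypothetical, and this is not the Fabric API. The idea is that the agent's generated query executes under a governed identity, so the data platform's own access controls, not the agent, decide what comes back.

```python
# Hypothetical guard in front of a data platform; all names are illustrative.
BLOCKED_KEYWORDS = ("drop", "delete", "update", "insert")  # read-only surface

def run_generated_query(sql: str, user: str, execute_as) -> list:
    # Naive keyword check for the sketch; a real gateway would parse the query.
    lowered = sql.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        raise PermissionError("generated query attempted a write operation")
    # Execution is delegated under the requesting user's identity, so
    # row-level security and object permissions still apply downstream.
    return execute_as(user, sql)
```

The design choice worth noting is that least privilege is enforced at the data layer rather than by trusting the model to generate safe queries.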
Traditional governance commonly rests on four main pillars:
- Data quality — is the data accurate and reliable?
- Access control — who can access the data?
- Lineage — how did the data reach its destination?
- Compliance — does the data meet legal and regulatory requirements?
These pillars are sufficient when humans remain responsible for deciding how to act on the data. However, once AI can act autonomously, four further questions arise (a minimal policy sketch follows the list):
- Actions — what actions can the agent take?
- Approvals — which actions require human sign-off, and which may proceed without it?
- Logging — how are actions recorded and monitored?
- Ownership — who is accountable for the outcomes?
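One way to see how these four questions become enforceable is a small policy sketch. The refund-handling agent, field names, and threshold below are all hypothetical:

```python
# Each field answers one of the four questions above; values are illustrative.
AGENT_POLICY = {
    "agent": "refund-agent",
    "allowed_actions": {"issue_refund"},    # Actions: what it may do
    "approval_threshold": 500.00,           # Approvals: when a human signs off
    "audit_log": "refund-agent-actions",    # Logging: where actions are recorded
    "owner": "finance-ops@example.com",     # Ownership: who is accountable
}

def authorise(policy: dict, action: str, amount: float) -> str:
    # Deny anything outside the allowed set; escalate large actions to a human.
    if action not in policy["allowed_actions"]:
        return "deny"
    if amount > policy["approval_threshold"]:
        return "escalate"
    return "allow"

print(authorise(AGENT_POLICY, "issue_refund", 120.00))   # allow
print(authorise(AGENT_POLICY, "issue_refund", 2500.00))  # escalate
print(authorise(AGENT_POLICY, "close_account", 0.00))    # deny
```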
These considerations are not technically new; we already govern people’s actions in these ways. The additional risk with AI lies in speed and reach: an agent can act in seconds and across multiple systems. A small error in one action can trigger further errors downstream, and with each iteration the error compounds. Humans tend to catch their own mistakes; an agent without proper governance cannot recognise when something has gone wrong and may keep repeating the error at scale.
What Leaders Should Be Asking Right Now
Business leaders exploring agentic AI in their organisation should first assess the state of their data estate and its readiness to accommodate agents. This is primarily a question of the traditional data governance pillars: if the organisation’s data is not clean, well structured, and secure, an effective AI agent cannot be built on top of it.
If the data estate and governance are mature, the next consideration is the role you would want an agent to have. The principle of least privilege applies: agents should not have unrestricted access by default. Organisations must determine what actions an agent can take and what data it requires to make those decisions accurately.
Agent autonomy exists on a spectrum (an approval-gate sketch follows the list):
- Advisory — the agent takes no actions and operates as a copilot.
- Human-in-the-loop — a human must approve actions before they are executed.
- Human-on-the-loop — the agent acts autonomously within defined conditions, while a human monitors and can intervene.
- Autonomous execution — the agent determines which actions to take within its permissions.
- Multi-agent autonomy — multiple agents can coordinate and delegate tasks to each other.
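These levels map naturally onto an approval gate in an agent runtime. A minimal sketch, assuming a hypothetical runtime; the enum and function names are illustrative:

```python
from enum import Enum, auto

class Autonomy(Enum):
    ADVISORY = auto()           # recommends only, never executes
    HUMAN_IN_THE_LOOP = auto()  # every action needs prior approval
    HUMAN_ON_THE_LOOP = auto()  # acts within agreed conditions; human monitors
    AUTONOMOUS = auto()         # chooses actions within its permissions
    MULTI_AGENT = auto()        # may also delegate to other governed agents

def needs_human_approval(level: Autonomy, within_conditions: bool) -> bool:
    # Advisory and human-in-the-loop agents always wait for a person;
    # a human-on-the-loop agent only escalates when an action falls outside
    # its pre-agreed conditions; higher levels act within their permissions.
    if level in (Autonomy.ADVISORY, Autonomy.HUMAN_IN_THE_LOOP):
        return True
    if level is Autonomy.HUMAN_ON_THE_LOOP:
        return not within_conditions
    return False
```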
As autonomy increases, organisations gain greater capability but also assume greater risk. Regardless of the level chosen, agents remain constrained by the permissions granted to them and should operate under appropriate oversight.
Monitoring, reporting, and fail-safes help mitigate these risks. Logging provides visibility of the agent’s actions, clearly defined ownership enables a rapid response to errors, and appropriate safeguards allow actions to be revoked if necessary. With these controls in place, organisations can confidently begin exploring initial proofs of concept.
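In practice, those controls can be as simple as structured, append-only logging plus a revocation flag checked before every action. A minimal sketch with hypothetical names:

```python
import json
import time

REVOKED = {"refund-agent": False}  # flipping to True halts the agent's actions

def log_action(agent: str, action: str, outcome: str) -> None:
    # Structured, append-only record of who acted, what happened, and when,
    # so the accountable owner can trace and respond to errors quickly.
    print(json.dumps({"ts": time.time(), "agent": agent,
                      "action": action, "outcome": outcome}))

def guarded_execute(agent: str, action_name: str, action):
    # The revocation flag is the fail-safe: one switch stops the agent,
    # and unknown agents are blocked by default (fail closed).
    if REVOKED.get(agent, True):
        log_action(agent, action_name, "blocked: agent revoked")
        return None
    result = action()
    log_action(agent, action_name, "success")
    return result

guarded_execute("refund-agent", "issue_refund", lambda: "refund issued")
```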
Final Takeaways
AI agents are increasingly common in software services, and organisations that can successfully implement bespoke agents gain significant operational advantages. However, implementing these systems without appropriate guardrails can introduce risks that are difficult to correct once they propagate across systems. These risks are best mitigated with a clear governance strategy from the outset.
As AI systems move from analysing data to acting on it, organisations must ensure governance evolves accordingly without slowing implementation. When designed intentionally, governance provides the structures that allow organisations to innovate quickly while maintaining control.