Skip to main content
Daniel J Glover
Back to Blog

AI Agents: Your New Insider Threat

9 min read

AI agents are rapidly becoming an insider threat in enterprises across the globe. According to Gartner's latest estimates, 40% of all enterprise applications will integrate with task-specific AI agents by the end of 2026 - up from less than 5% in 2025. This surge represents one of the fastest technology adoption curves in enterprise history.

But with great autonomy comes great risk.

"The CISO and security teams find themselves under a lot of pressure to deploy new technology as quickly as possible, and that creates this concept of the AI agent itself becoming the new insider threat," said Wendi Whitmore, Chief Security Intelligence Officer at Palo Alto Networks, in a recent interview with The Register.

This is not speculation. It is a fundamental shift in how security teams must think about trust, identity, and access control.


Why AI Agents Become Insider Threats

Traditional insider threats involve humans - disgruntled employees, compromised credentials, or social engineering victims. AI agents introduce a new category: autonomous systems with privileged access that can be manipulated without any human awareness.

The core problem is architectural. Large Language Models (LLMs) suffer from a fundamental flaw that has no fix in sight: they cannot separate data from instructions. Any data an agent processes - the content of a web page, an email, a log entry - can effectively become instructions through prompt injection.

When an attacker successfully exploits this vulnerability, they transform what you thought was a trusted entity into a malicious one. If that agent has access to your internal data - your OneDrive, Google Drive, Salesforce, or financial systems - it effectively becomes an insider threat working against you from within.

The Superuser Problem

One of the most dangerous patterns emerging is what Whitmore calls the "superuser problem." This occurs when autonomous agents are granted broad permissions, creating a superuser that can chain together access to sensitive applications and resources without security teams' knowledge or approval.

Consider how AI agents are typically deployed:

Deployment PatternRisk LevelWhy It Happens
Broad API access 'to be helpful'CriticalDevelopers prioritise functionality over least privilege
Inherited user permissionsHighAgents run with the permissions of whoever deployed them
Cross-system integrationHighAgents need to connect multiple services to be useful
Persistent credentialsMedium-HighUnlike humans, agents never log out
Minimal monitoringHighTraditional SIEM tools do not understand agent behaviour

As I explored in Non-Human Identity: Your Security Blind Spot, non-human identities already outnumber human users 50:1 to 100:1 in modern cloud environments. AI agents are accelerating this imbalance - and most organisations lack the tools and processes to manage them.


Real Attack Scenarios

The theoretical risks are becoming practical realities. Palo Alto Networks' Unit 42 incident response team has already observed attackers changing their tactics in response to AI adoption.

The LLM-First Attack

"Historically, when an attacker gets initial access into an environment, they want to move laterally to a domain controller," Whitmore explained. "They want to dump Active Directory credentials, they want to elevate privileges. We don't see that as much now. What we're seeing is them get access into an environment immediately, go straight to the internal LLM, and start querying the model for questions and answers, and then having it do all of the work on their behalf."

This represents a fundamental shift in attack methodology. Instead of spending hours navigating your network, attackers can simply ask your AI what it knows - and it will helpfully provide answers about your infrastructure, your data, and your vulnerabilities.

The CEO Doppelganger

Perhaps more concerning is the emerging risk of AI agent impersonation at executive level. Organisations are deploying task-specific agents to help C-suite leaders manage their workload - approving transactions, reviewing contracts, responding to routine requests.

"We think about the people who are running the business, and they're oftentimes pulled in a million directions throughout the course of the day," said Whitmore. "So there's this concept of: We can make the CEO's job more efficient by creating these agents. But ultimately, as we give more power and authority and autonomy to these agents, we're going to then start getting into some real problems."

Imagine an M&A scenario where an attacker manipulates an executive AI agent through prompt injection. By using a "single, well-crafted prompt injection or by exploiting a 'tool misuse' vulnerability," according to Palo Alto Networks' 2026 predictions, adversaries could gain "an autonomous insider at their command, one that can silently execute trades, delete backups, or pivot to exfiltrate the entire customer database."

Supply Chain Attacks via Agents

AI agents frequently rely on external tools, plugins, and APIs. Each integration point is a potential attack vector. As I covered in Slopsquatting: AI Supply Chain Attacks, attackers are already exploiting the AI supply chain through hallucinated package names. The same principle applies to agentic systems - if an agent can be tricked into calling a malicious tool or API, the consequences scale with the agent's permissions.


The OWASP Response

The security community has recognised the urgency. In December 2025, OWASP released the Top 10 for Agentic Applications 2026 - a peer-reviewed framework developed with more than 100 industry experts to identify the most critical security risks facing autonomous AI systems.

The framework addresses risks including:

  1. Prompt Injection and Manipulation - Attackers inject malicious instructions through data the agent processes
  2. Tool Misuse and Privilege Escalation - Agents are tricked into using their tools for unintended purposes
  3. Memory Poisoning - Agent memory or context is corrupted to influence future decisions
  4. Cascading Failures - Errors in one agent propagate through interconnected systems
  5. Supply Chain Attacks - Compromised components in the agent's dependencies

For security leaders, this framework provides a starting point for assessing and securing agentic deployments. But frameworks alone are insufficient - practical controls must follow.


Practical Security Controls

Protecting against AI agent threats requires adapting existing security principles to a new context. The core challenge is applying zero trust architecture to entities that were designed to be trusted.

Least Privilege for Agents

"It becomes equally as important for us to make sure that we are only deploying the least amount of privileges needed to get a job done, just like we would do for humans," Whitmore emphasised.

This sounds obvious but runs counter to how most AI agents are deployed. Developers want agents to be helpful, which means giving them access to everything they might need. Security teams must push back and enforce the same access controls they apply to human users - ideally tighter, given the attack surface.

Hard Guardrails with DLP

Data Loss Prevention becomes critical when AI agents can access sensitive information. Configure DLP to:

  • Limit what information agents can access based on task requirements
  • Monitor what content agents attempt to exfiltrate
  • Block transmission of sensitive data categories (PII, financial data, intellectual property)

Browser Isolation for Web-Facing Agents

For browser-based agents that interact with external content, Menlo Security recommends deploying browser isolation. If an agent navigates to a malicious site that attempts prompt injection through web content, isolation prevents the malicious code from executing in your environment.

Monitoring and Detection

Traditional security monitoring tools were not designed for AI agent behaviour. You need visibility into:

  • What queries agents are making to internal systems
  • What tools agents are invoking and with what parameters
  • Anomalous patterns in agent activity (unusual access times, unexpected data requests)
  • Prompt injection attempts in agent inputs

Microsoft's recent guidance on securing AI agents emphasises real-time protection during tool invocation - security checks that determine whether each action should be allowed or blocked before execution.


Building Your AI Agent Security Strategy

The organisations that will manage this transition successfully are those that treat AI agent security as a strategic priority - not an afterthought. Here is where to start:

Immediate actions (this quarter):

  1. Inventory your agents - You cannot secure what you cannot see. Identify every AI agent deployed in your environment, including shadow deployments by enthusiastic employees
  2. Map permissions - Document what each agent can access. You will likely be alarmed
  3. Implement logging - Ensure you have visibility into agent queries and tool invocations
  4. Review the OWASP framework - Use the Top 10 for Agentic Applications as a checklist against your deployments

Medium-term actions (this year):

  1. Develop AI governance policies - As I outlined in AI Governance: Controls That Work, governance done well enables rather than restricts
  2. Extend IAM to non-human identities - Your identity programme must cover agents with the same rigour as humans
  3. Implement runtime protection - Deploy controls that can intercept and validate agent actions in real-time
  4. Train your SOC - Security analysts need to understand agentic attack patterns

Strategic investments:

  1. Zero trust for agents - Apply verify-explicitly principles to every agent interaction
  2. AI-aware security tooling - Evaluate emerging solutions designed for agentic security
  3. Red team exercises - Include AI agent compromise scenarios in your security testing

The Window Is Closing

"It's probably going to get a lot worse before it gets better," Whitmore said of prompt injection vulnerabilities. "I just don't think we have these systems locked down enough."

The pressure to deploy AI agents is immense. Every vendor promises productivity gains. Every competitor seems to be adopting. The business case for agents is compelling.

But deployment without security is simply moving risk from operational inefficiency to security exposure. The 40% of enterprise applications integrating agents by year-end represents a massive expansion of the attack surface - one that most organisations are not prepared to defend.

AI agents can be deployed safely. But safety requires treating them as what they are: powerful autonomous systems with the potential to become your most dangerous insider threats.

The organisations that recognise this now - and build security into their agentic strategies from the start - will be the ones that capture the productivity benefits without the catastrophic risks.

The window for getting this right is closing. Start today.

Share this post

DG

Daniel J Glover

IT Leader with experience spanning IT management, compliance, development, automation, AI, and project management. I write about technology, leadership, and building better systems.

Let's Work Together

Need expert IT consulting? Let's discuss how I can help your organisation.

Get in Touch