Least Privilege Research Report 2026

The World Is Overpermissioned.
That’s a Problem for AI Agents.

Your employees hold access to hundreds of capabilities in your applications. They use 4% of them. 
The other 96% sits dormant—underscoring the risk of handing human permissions to an agent.

45% of enterprises

apply the same privileged access controls to AI agents as they do to human identities, according to recent research from CyberArk, compounding the risk of unused permissions transferring to agents.

Scope of the Research

THE SAMPLE

This research examines permission usage in a 90-day window across

2.4 million*
end users across production applications
3.6 billion*
application permissions
THE DATA

We analyzed real-world production data to quantify

1.
How many permissions users have
2.
The relative risk of those permissions
3.
How many of those permissions each user actually exercises
THE GOAL

Size the risk surface for AI agents operating within human-permissioned systems

Definitions

User

The actor taking an action. Users can include people, services, or AI agents.

Examples:

Sarah, a sales rep logging into your CRM

An AI agent summarizing customer support tickets

A billing service accessing payment records

The GitHub integration syncing repository data

Resource

The object being acted upon. Resources include data, documents, records, or any entity in your system that requires access control.

Examples:

An invoice in your billing system

A customer's email address (PII)

A Slack channel with confidential product discussions

A database containing financial records

A specific Salesforce opportunity

Action

What the actor does to the resource. Actions define the specific operation being performed.

Examples:

view — Read a customer support ticket

edit — Modify an opportunity's close date

delete — Remove a draft invoice

export — Download a CSV of user data

share — Grant another user access to a document

Permission

The capability linking a user, action, and resource. Permissions answer the question: "Can this user perform this action on this resource?"

Examples:

Sarah can edit opportunities in the Enterprise segment

The billing agent can view payment records but cannot delete them

Support engineers can view customer PII but cannot export it

An AI agent can read documentation but cannot modify production configs

Marketing managers can create campaigns but only in their regional workspace

Permission Usage vs Resource Access

How we measure

Permission Usage

A permission is an action/resource type combination. If you can read documents, that counts as one permission, whether you can access 1 document or 1,000 documents. The metric tracks whether you exercise the capability at all, not how many individual resources you touch.

For example:

If you have read access to 500 opportunities in your CRM but only view 1 opportunity in 90 days, you've used 100% of that permission.

How we measure

Resource Access

Resource utilization measures how many individual resources get accessed out of the total available. This metric exposes the gap between the access you grant and the resources people actually need.

For example:

If your CRM contains 500 opportunities and users access 10 of them in 90 days, then 2% of the resources have been accessed (10 accessed opportunities / 500 total opportunities).
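The two metrics above can be sketched in a few lines of Python. The function names and sample data below are illustrative assumptions, not part of the research pipeline:

```python
# Minimal sketch of the two metrics defined above, using made-up data.
# Function names and sample permissions are illustrative, not Oso's pipeline.

def permission_usage(granted, exercised):
    """Fraction of granted (action, resource-type) permissions used at all."""
    return len(granted & exercised) / len(granted)

def resource_access(total_resources, accessed_ids):
    """Fraction of individual resources touched out of all available."""
    return len(accessed_ids) / total_resources

# A user granted read + edit on opportunities who only ever reads:
granted = {("read", "opportunity"), ("edit", "opportunity")}
exercised = {("read", "opportunity")}
print(permission_usage(granted, exercised))  # 0.5 (1 of 2 permissions used)

# Viewing 1 of 500 opportunities still counts the read permission as
# fully used, but resource access is only 0.2%:
print(resource_access(500, {"opp-001"}))  # 0.002
```

Note how the same activity scores very differently on the two metrics: exercising a capability once maxes out permission usage, while resource access stays near zero.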

At 1Password, we’re seeing the same pattern OSO highlights as teams start putting AI agents into real production workflows. Access models built for humans don’t map cleanly to agents. When agents are handed broad, static permissions, the unused ones don’t just sit there, they quietly expand the attack surface. What teams need instead are identity systems that keep agent actions tightly scoped and explicitly tied back to human intent, so they can move fast without creating risk they didn’t mean to take.

Nancy Wang
CTO at 1Password
01

Corporate workers utilize less than 4% of the access that they have

Organizations intentionally grant workers broad access to avoid blocking productivity. Employees might hold access to hundreds of capabilities: edit all opportunities, delete customer records, export financial data. They use a fraction of them.

What this means


A sales operations manager holds permissions to:

View all opportunities (500 in the system)

Edit all opportunities

Delete all opportunities

Export opportunity data

Share opportunities with external users

Modify territory assignments

Create custom reports

Delete reports

Over 90 days, she views 15 opportunities and edits 3. She uses 2 of her 8 permission types (25% of her permissions). The other 6 sit unused but ready—for her. An AI agent with identical access operates differently.

02

Sensitive data is broadly available

We measure this using the resource-access metric defined in the methodology and definitions above.

What this means

Access to sensitive data spreads far beyond the people who need it. Organizations grant broad permissions to avoid blocking legitimate work. Most of that access never gets used.

Your customer success platform contains 50,000 customer records with PII

Your CS team has 200 people. Each person can access all 50,000 records. Over 90 days:

150 team members log in (75% of the team)

Those 150 people access 4,000 unique customer records combined

46,000 records (92%) remain untouched

All 200 team members still hold access to all 50,000 records

The access exists for flexibility. An agent with the same access doesn't need flexibility—it needs precision.
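The arithmetic behind this example, using the numbers given above:

```python
# Numbers taken from the customer success example above.
total_records = 50_000
accessed_records = 4_000
team_size = 200
active_users = 150

untouched = total_records - accessed_records
print(untouched)                           # 46000 records never accessed
print(f"{untouched / total_records:.0%}")  # 92% of records untouched
print(f"{active_users / team_size:.0%}")   # 75% of the team logged in
```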

03

Destructive Permissions Are Everywhere


What this means

More than one-third of users hold permissions that could cause irreversible damage. Most will never use them. An agent might.

Your engineering team uses a project management tool

40 engineers have accounts. Each engineer can:

Delete tasks

Delete projects

Modify project timelines

Export all project data

Over 90 days, 2 engineers delete tasks (cleanup work). 1 engineer exports data (board snapshot for a presentation). The remaining 37 engineers never touch these capabilities. But if you deploy an agent with typical engineer permissions—"help me organize my tasks"—that agent inherits delete access to every project in the system.
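One way to avoid that inheritance is to intersect the human grant with an explicit per-task allowlist, so the agent receives only what the task requires. The sketch below is hypothetical; the permission strings and helper are illustrative, not an Oso or vendor API:

```python
# Hypothetical sketch: scope an agent's effective permissions to a task
# allowlist instead of letting it inherit a human's full grant.
# All names here are illustrative assumptions.

ENGINEER_PERMISSIONS = {
    "task:read", "task:update", "task:delete",
    "project:delete", "project:update", "project:export",
}

TASK_ALLOWLISTS = {
    # "help me organize my tasks" needs read/update on tasks, nothing more
    "organize_tasks": {"task:read", "task:update"},
}

def effective_permissions(human_grant, task):
    """The agent gets only the intersection of the grant and the task scope."""
    return human_grant & TASK_ALLOWLISTS.get(task, set())

agent_perms = effective_permissions(ENGINEER_PERMISSIONS, "organize_tasks")
print(sorted(agent_perms))              # ['task:read', 'task:update']
print("project:delete" in agent_perms)  # False: destructive access stripped
```

An unknown task maps to an empty scope, so the default for the agent is no access rather than everything the engineer holds.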


Humans vs Agents

Why Overpermissioning Matters Now And How to Safely Manage Agent Permissions

Overpermissioning is a human-era artifact. We’ve accepted overpermissioning in SaaS applications because humans are fundamentally limited by time and can generally apply judgment: a malicious human actor is likely to be intercepted before causing massive damage.

Human-Driven vs Agent-Driven Systems

| Dimension | Human-Driven | Agent-Driven | Real-World Incident |
| --- | --- | --- | --- |
| Speed of interaction | Slow, task-based; limited exploration of the system | Machine-speed; exhaustive exploration of available actions | Shadow Escape: zero-click data exfiltration exploit |
| Likelihood of enumerating permissions | Low; most users don’t even know what permissions they have | High; agents routinely check and test all available capabilities | ServiceNow: agent privilege escalation |
| Response to unexpected system behavior | Intuition plus context; users self-correct | No intuition; agents repeat or escalate failed attempts | Replit: agent deletes production database |
| Error containment | Human errors are localized | Agent errors can propagate quickly across resources | Google Antigravity: agent accidentally deletes user storage |
THE CHALLENGE
Permissions Designed for Humans Break Under Agent Automation
AI tools with excessive permissions have deleted production databases and wiped laptop drives. Organizations intentionally grant broad access to support human workflows, but those same permission models create critical risks when transferred to AI agents that operate at machine speed without human judgment constraints.

Security Prevents Companies from Deploying Agents

Oso's customer base consists of organizations with some of the most advanced authorization systems in the market. Even in these environments, where access controls are intentionally designed to grant broad permissions to support human workflows, the findings reveal significant gaps when those same permissions transfer to AI agents.
The current approach
Grant agents the same broad permissions employees hold, or restrict them so tightly they become useless. Or keep them in beta indefinitely.
The alternative
Implement simulations, visibility, alerts, and controls. Oso for Agents automates these capabilities, allowing organizations to grant agents precisely the permissions they need for specific tasks.

The authorization challenge for agents isn't new; it's the same problem security has dealt with for decades, just massively accelerated. We need partners who understand this deeply. At Brex, we're pushing agents into production fast, and working with Oso gave us the foundation to do that without creating new security risks.

Mark Hillick
CISO of Brex

Meet with the Oso Founder

If you want to run powerful agents safely, you need the right guardrails in place. To learn more about agentic security and how Oso can help, book a meeting with the Oso team.