AI Agents Gone Rogue

We’re tracking the latest agentic failures, exploits, and emergent attack patterns so you can understand where risks exist and how to mitigate them.

Uncontrolled Agents

Agents take unsafe or unintended actions on their own. Non-determinism, misunderstood instructions, or over-broad tool access leads to data leaks, deletions, or system-wide changes, often executed in seconds.
Amazon Service Was Taken Down by AI Coding Bot
Issue

An AWS engineer allowed Amazon’s AI coding tool (Kiro, and previously Amazon Q Developer) to autonomously resolve a production issue without required peer approval, operating under broader-than-expected permissions.

Impact

The December incident caused a 13-hour interruption to an AWS cost-exploration system, affecting a single service in parts of mainland China; it was the second AI-assisted production disruption in recent months.

Resolution

AWS attributed the outage to a user access control failure rather than AI autonomy, and implemented additional safeguards including mandatory peer review, tighter permission controls, and additional training.

Moltbook Database Breach
Issue

The viral Moltbot AI assistant and its associated Moltbook agent social network raised security concerns due to agents being granted broad access to files, credentials, and external services while interacting with untrusted content and other agents, creating new paths for data exposure and manipulation.

Impact

Agents with persistent memory, system access, and external communication capabilities increased the risk of private data leakage, delayed prompt-based attacks, and unintended coordination between agents, potentially amplifying security failures at scale.

Resolution

The incident primarily resulted in warnings from researchers and security teams, with recommended mitigations including limiting permissions, reducing autonomous access to sensitive systems, and strengthening safeguards around agent memory and external communication.

Persistent Memory Delayed Attacks
Issue

Palo Alto Networks warned that Moltbot highlights a new class of risk where autonomous agents combine broad system access, exposure to untrusted content, and external communication, creating conditions where agents can unintentionally leak data or execute harmful actions without direct exploitation.

Impact

Agents operating with persistent memory and high privileges increase the risk of delayed attacks, data exfiltration, and large-scale security failures, especially as agents interact with other agents and external systems beyond traditional security visibility.

Resolution

The report emphasizes governance and architectural controls rather than a single fix, recommending tighter permission boundaries, stronger monitoring, and security models designed specifically for autonomous agents instead of traditional application defenses.

Accidental Data Deletion During Setup
Source: Jan 2026
Issue

The viral AI assistant Moltbot (formerly Clawdbot) gained rapid adoption despite being granted broad access to users’ accounts, files, and services, raising concerns that highly autonomous agents with persistent memory and system access could behave unpredictably or expose sensitive data without sufficient safeguards.

Impact

Users allowing the agent to manage personal or business workflows risked privacy exposure, unintended actions, and data leakage, as the assistant could automate tasks across connected systems with limited oversight or security controls.

Resolution

The incident primarily resulted in increased awareness rather than a single fix; researchers and developers emphasized limiting permissions, maintaining human oversight, and avoiding granting full account access to autonomous agents without stronger guardrails.

Mass Credential Exposure via Shodan
Issue

Security researchers found that Moltbot (formerly Clawdbot), an agentic personal assistant with broad system and account access, could expose sensitive data due to misconfigurations, insecure defaults, and supply-chain risks in its skills ecosystem, including publicly exposed instances and unmoderated downloadable skills.

Impact

Exposed or compromised instances could allow attackers to access private messages, credentials, API keys, and connected services, effectively turning the agent into a backdoor capable of ongoing data exfiltration or command execution.

Resolution

Some configuration and authentication issues were addressed after disclosure, but researchers emphasized stronger access controls, least-privilege permissions, and secure deployment practices to reduce exposure when running agentic systems.
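The secure-deployment advice above can be expressed as a pre-flight check that refuses to start an exposed instance. A minimal sketch using hypothetical config field names (`bind_host`, `auth_token`, `allowed_skills`), not Moltbot's real configuration:

```python
def deployment_is_safe(config: dict) -> list[str]:
    """Return a list of deployment problems; an empty list means the
    configuration passes. Field names are illustrative."""
    problems = []
    host = config.get("bind_host", "0.0.0.0")
    # A network-reachable instance with no authentication is exactly the
    # misconfiguration that left instances discoverable via Shodan.
    if host not in ("127.0.0.1", "localhost") and not config.get("auth_token"):
        problems.append("instance exposed beyond localhost without an auth token")
    # Unmoderated downloadable skills are a supply-chain risk; require an
    # explicit allowlist rather than running anything the user installs.
    if config.get("allowed_skills") is None:
        problems.append("no skill allowlist: any downloadable skill can run")
    return problems
```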

User's D-Drive Erased by Google Antigravity's Turbo Model
Issue

While using Google Antigravity in “Turbo” mode (automatic command execution), the agent wiped the entire contents of the user’s D-drive while attempting to clear the project cache.

Impact

The user lost the full contents of the D-drive; other users have reported similar issues.

Resolution

The user advised others to exercise caution when running Antigravity in Turbo mode, as it enables the agent to execute commands without user input or approval.
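The caution above can be made mechanical rather than behavioral: route every shell command through a human-approval gate before execution. A minimal sketch, assuming the agent framework exposes such a hook; the pattern list is illustrative, not exhaustive:

```python
import re

# Commands that should never run without explicit human approval.
# Illustrative patterns only; a real deny-list needs far more coverage.
DESTRUCTIVE = [
    r"\brm\s+(-[a-z]*r[a-z]*f|-[a-z]*f[a-z]*r)\b",  # rm -rf / rm -fr variants
    r"\bformat\b",                                   # Windows drive format
    r"\bdel\s+/s\b",                                 # recursive Windows delete
    r"\brmdir\s+/s\b",
]

def needs_human_approval(command: str) -> bool:
    """True if the shell command matches a destructive pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE)
```

An agent runner would call this before executing, and pause for confirmation when it returns True, which is exactly the step Turbo mode skips.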

Security Flaw in Asana Exposes User Projects Across Domains
Issue

A bug in Asana’s MCP server allowed users from one account to access “projects, teams, tasks, and other Asana objects” from other domains.

Impact

Cross-tenant data exposure risk for all MCP users, though no exploit was confirmed; customers were notified and access was suspended.

Resolution

The MCP server was taken offline, the code issue was fixed, affected customers were notified, and logs/metadata were made available for review.

Replit's AI Assistant Ignored Instructions Causing Major Data Loss
Source: Jul 2025
Issue

Replit’s AI coding assistant ignored instructions not to change any code on 11 occasions, fabricated test data, and deleted a live production database.

Impact

Trust was damaged, user code was put at risk, and the CEO issued a public apology.

Resolution

Replit launched product enhancements, including automatic backups and one-click restore.

Tricked Agents

Attackers manipulate agents through poisoned inputs, crafted content, or malicious web pages. A single prompt turns into data exfiltration, privilege misuse, or chained tool abuse.
Claude Cowork Exfiltrates Files
Issue

Researchers found that Claude Cowork, Anthropic’s general-purpose AI agent, can be tricked via indirect prompt injection into uploading user files to an attacker’s Anthropic account by abusing a known isolation flaw and the agent’s file/network access.

Impact

An attacker can exfiltrate sensitive user files (including documents with financial details or PII) without explicit user approval once Cowork has been granted folder access, exposing organizations to data theft and confidentiality breaches.

Resolution

The vulnerability was publicly demonstrated; mitigations focus on restricting file access and strengthening prompt sanitization. No formal fix has been confirmed, prompting warnings that users should avoid granting access to sensitive files and that security teams should harden agent permissions.
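Restricting file access can be enforced mechanically rather than by convention. A minimal sketch of a sandbox containment check, assuming the agent resolves every requested path before touching it:

```python
from pathlib import Path

def inside_sandbox(requested: str, sandbox_root: str) -> bool:
    """True only if `requested` resolves to a path under `sandbox_root`.

    resolve() collapses `..` segments before the containment check, so a
    request like `sandbox/../secrets` is rejected rather than escaping.
    """
    root = Path(sandbox_root).resolve()
    target = Path(requested).resolve()
    return target == root or root in target.parents
```

An agent's file tool would call this on every read or upload; anything outside the granted folder, including exfiltration staging paths, is refused.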

Copilot's No-Code AI Agents Liable to Leak Company Data
Issue

Microsoft Copilot Studio no-code AI agents were shown to be vulnerable to prompt injection, allowing attackers to override instructions and extract sensitive corporate data or trigger unintended actions.

Impact

This exposed organizations to customer data leakage, unauthorized workflow changes, and financial risk, especially since no-code agents can be widely deployed without strong security oversight.

Resolution

Researchers recommended input filtering, stricter access controls, least-privilege permissions, and sandboxing to reduce agent abuse and limit data exposure.
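The input-filtering recommendation can be illustrated with a keyword screen over untrusted content before it reaches the agent. This is deliberately crude: pattern matching is bypassable and only useful as one layer alongside least-privilege permissions. The phrase list is illustrative:

```python
import re

# Phrases that commonly mark injection attempts in retrieved content.
# A bypassable first layer -- it must be combined with least-privilege
# tool access and human review, never used alone.
SUSPECT = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"exfiltrate",
]

def flag_untrusted_text(text: str) -> list[str]:
    """Return the suspect patterns found in untrusted input."""
    return [p for p in SUSPECT if re.search(p, text, re.IGNORECASE)]
```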

ServiceNow Vulnerability: Low-Privileged Agent Misled into Data Breach
Issue

Attackers exploited ServiceNow Now Assist agent-to-agent collaboration and default configuration to trick a low-privileged agent into delegating malicious commands to a high-privilege agent, resulting in data exfiltration.

Impact

Sensitive corporate data was leaked or modified, and unauthorized actions were executed behind the scenes.

Resolution

ServiceNow updated documentation and recommended mitigations: disable autonomous override mode for privileged agents, apply supervised execution mode, and segment agent responsibilities.
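The supervised-execution mitigation amounts to a simple delegation rule: escalating to a more privileged agent requires a human in the loop. A sketch with hypothetical agent names and privilege tiers (real Now Assist roles differ):

```python
# Hypothetical privilege tiers for illustration only.
PRIVILEGE = {"helpdesk_bot": 1, "records_bot": 2, "admin_bot": 3}

def may_delegate(from_agent: str, to_agent: str, supervised: bool) -> bool:
    """Allow delegation sideways or downward freely; allow delegation to
    a MORE privileged agent only when a human supervisor approved it."""
    if PRIVILEGE[to_agent] <= PRIVILEGE[from_agent]:
        return True
    return supervised
```

This is the check the exploited default configuration lacked: the low-privileged agent could hand work to the high-privilege agent with no supervision gate.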

Antigravity Breach: Web Page Tricks Agent into Stealing User Data
Issue

Google Antigravity was vulnerable to data exfiltration via prompt injection: a “poisoned” web page tricked Antigravity’s agent into harvesting credentials and code from a user’s local workspace, then exfiltrating them to a public logging site.

Impact

Sensitive credentials and internal code exposed; default protections (e.g. .gitignore, file-access restrictions) bypassed.

Resolution

The vulnerability has been publicly disclosed by researchers. PromptArmor and others highlight the need for sandboxing, network-egress filtering, and stricter default configurations.
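Network-egress filtering, one of the mitigations highlighted above, amounts to an allowlist check before any outbound request the agent makes. A minimal sketch with example hostnames:

```python
from urllib.parse import urlsplit

# Hosts the agent is allowed to reach; everything else, including an
# attacker's logging endpoint, is blocked. Example entries only.
EGRESS_ALLOWLIST = {"api.github.com", "pypi.org"}

def egress_allowed(url: str) -> bool:
    """True only for https URLs whose exact hostname is allowlisted.

    Exact matching avoids tricks like pypi.org.evil.example slipping
    past a substring check.
    """
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.hostname in EGRESS_ALLOWLIST
```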

Shadow Escape: A Zero-Click Exploit Threatens Major AI Platforms
Issue

A “zero-click” exploit called Shadow Escape targeted major AI-agent platforms via their MCP connections. Malicious actors abused agent integrations to access organizational systems.

Impact

Agents inside trusted environments were silently hijacked, bypassing controls. Because it exploited default MCP configs and permissions, the potential blast radius covered massive volumes of data.

Resolution

Initial remediation advice included auditing AI agent integrations, enforcing least privilege, and treating uploaded documents as potential attack vectors.

Notion AI's Web Search Tool: A Risk for Private Data Exfiltration
Issue

Researchers demonstrated how the web-search tool in Notion’s AI agents could be abused to exfiltrate private data via a malicious prompt.

Impact

Confidential user data from internal Notion workspaces could be exposed to attackers.

Resolution

Notion acknowledged the vulnerability and announced a review of tool permissions and integrations.

Supabase Vulnerability: Prompt Injection Exposes Private Data
Issue

The Supabase MCP server exposed data through prompt injection: the agent used the service_role key and interpreted user content as commands, allowing attackers to trigger arbitrary SQL queries and expose private tables.

Impact

Complete SQL database exposure: all tables became readable, putting sensitive tokens, user data, and internal tables at risk.

Resolution

Researchers publicly disclosed the issue and called for least-privilege tokens instead of service_role, read-only MCP configuration, and gated tool access through proxy/gateway policy enforcement.
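On the application side, the read-only recommendation can be approximated with a statement gate in front of the database, though the robust fix remains a least-privilege database role rather than service_role. A deliberately crude sketch:

```python
# Keywords that begin read-only statements. A parser-backed check or an
# engine-level read-only role is the robust version; this is a
# defense-in-depth sketch only.
READ_ONLY_PREFIXES = ("select", "with", "explain")

def is_read_only(sql: str) -> bool:
    """Accept a single statement that starts with a read-only keyword.

    Stacked statements ("SELECT 1; DROP TABLE users") are rejected by
    refusing any semicolon that is not trailing.
    """
    stmt = sql.strip().rstrip(";").strip()
    if ";" in stmt:
        return False
    return stmt.lower().startswith(READ_ONLY_PREFIXES)
```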

GitHub MCP Server Vulnerability: Attackers Exploit AI to Steal Private Code
Issue

A prompt-injection flaw in GitHub’s MCP server allowed attackers to use AI agents to access private repositories and exfiltrate code.

Impact

Private code, issues, and sensitive project data could be exposed via public pull requests.

Resolution

Organizations were advised to limit agent permissions, disable the integration, and apply stricter review of tokens.
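Limiting agent permissions starts with verifying token scope at configuration time: refuse any token broader than the task requires. A sketch using two example GitHub OAuth scope names; the required set here is an assumption for illustration:

```python
# Minimal scopes the agent's task needs; anything broader is refused.
# Example values -- tune to the actual task.
REQUIRED_SCOPES = {"public_repo", "read:org"}

def token_is_least_privilege(granted_scopes: set[str]) -> bool:
    """True if the token grants no more than the task requires."""
    return granted_scopes <= REQUIRED_SCOPES
```

A broad `repo`-scoped token is what turns a prompt-injected agent into a private-code exfiltration channel; this check rejects it before the agent ever runs.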

Weaponized Agents

Unlike agents that are tricked into unsafe actions, weaponized agents are created to be dangerous. Backdoors, poisoned workflows, or hostile training make them behave as purpose-built attack bots, executing targeted intrusions or exfiltration on command.
5-Minute Email Forwarding Attack
Issue

During the rapid rebrand from Clawdbot to Moltbot, attackers exploited confusion around account changes and project identity, hijacking social accounts and launching fake crypto tokens while impersonating the project and spreading malicious copies.

Impact

The incident led to financial losses from scam tokens, reputational damage to the project, and exposure of users to malicious software and insecure agent deployments during a period of rapid adoption and unclear trust signals.

Resolution

The developer publicly denied involvement, warned users of scams, and completed the rebrand while encouraging users to verify official sources and avoid unofficial tools or tokens associated with the project.

Chinese Hackers Automate 90% of Global Cyber Espionage with Advanced Tools
Issue

A Chinese state-sponsored group abused Anthropic’s Claude Code and MCP tools to automate ~80–90% of a multi-stage agentic cyber espionage operation across ~30 global organizations.

Impact

Successful intrusions and data exfiltration at a subset of tech, finance, chemical, and government targets; first widely reported large-scale agentic AI-orchestrated cyberattack.

Resolution

Anthropic detected the activity, banned attacker accounts, notified affected organizations, shared IOCs with partners, and tightened safeguards around Claude Code and MCP use.

AI Agents at Risk: Just 2% Poisoning Can Trigger Malicious Behavior
Source: Oct 2025
Issue

The Malice in Agentland study found that attackers could poison the data-collection or fine-tuning pipeline of AI agents: with as little as 2% of traces poisoned, they could embed backdoors that trigger unsafe or malicious behavior when a specific prompt or condition appears.

Impact

Once triggered, agents leaked confidential data or performed unsafe actions with a high success rate (~80%). Traditional guardrails and two standard defensive layers failed to detect or block the malicious behavior.

Resolution

The study raised alarm across the community, prompting calls for rigorous vetting of data pipelines, supply-chain auditing, and end-to-end security review for agentic AI development.

Submit an incident

Help us keep this registry complete and up to date

If you’re aware of a publicly documented agent-related breach we haven’t captured, share it below. We’ll review and add it to the registry.

Prefer to contribute anonymously?

Tell us how an AI agent went rogue—anonymously, so it can’t come after you later. We will review and add it to the registry.


Next Steps

If you want to run powerful agents safely, you need the right guardrails in place. To learn more about agentic security and how Oso can help, book a meeting with the Oso team.