Over-privileged AI tied to 4.5 times higher incident rates, study finds

March 21, 20262 min read2 sources
Share:
Over-privileged AI tied to 4.5 times higher incident rates, study finds

Organizations that give AI systems more access than they need are reporting far more security incidents, according to a Teleport study covered by Infosecurity Magazine. The survey found that companies running “over-privileged” AI had a 76% incident rate and saw incidents at 4.5 times the rate of organizations with tighter controls.

The report focuses on enterprise AI assistants, copilots and agents connected to internal tools and infrastructure. The risk is not simply model error. It is what happens when an AI system has broad access to cloud environments, source code, secrets, internal databases or admin functions. In those setups, prompt injection, tool abuse, stolen credentials or unsafe automation can turn a bad output into a real security event.

The findings add to a growing body of guidance warning companies not to treat AI like a low-risk productivity tool once it can take actions inside corporate systems. Security teams have been pushing for least-privilege access, short-lived credentials, approval gates for sensitive actions and detailed logging. Those controls matter even more for agentic AI, which can operate at machine speed and across multiple connected services.

Teleport’s data should be read with some caution. The figures come from a vendor-backed survey, not a public breach dataset, and the summary report does not fully answer key questions such as how “over-privileged” was defined, what qualified as an “incident,” or how large the respondent pool was. That means the study shows a strong correlation, but not proof that broad AI permissions directly caused every incident.

Even so, the message is clear: the old identity and access management problem is now showing up in AI deployments. A chatbot with read-only access is one thing. An AI agent with the keys to production is another. For organizations rolling out internal AI tools, the safer default is narrow permissions, segmented environments and human review before high-impact actions are allowed. For staff accessing AI tools remotely, basic protections such as a trusted VPN can help reduce exposure, but they do not solve overbroad permissions inside the enterprise.

Share:

// SOURCES

// RELATED

Most 'AI SOCs' are just faster triage, and that's not enough

Many AI security tools only speed up alert analysis, failing to reduce analyst workload. Experts argue real gains require AI that automates response a

2 min readApr 17

ZionSiphon malware designed to sabotage water treatment systems

A new proof-of-concept malware, ZionSiphon, demonstrates how attackers can sabotage water treatment plants by manipulating industrial control systems.

2 min readApr 17

ThreatsDay bulletin: A deep dive into the Defender 0-day, SonicWall attacks, and a 17-year-old Excel flaw

This week’s threat bulletin is a heavy one. We analyze the critical Microsoft Defender 0-day, a massive SonicWall brute-force campaign, and a 17-year-

6 min readApr 17

Microsoft Defender's 'RedSun' zero-day: A researcher's protest and a threat to Windows systems

A researcher's protest exposed a critical zero-day in Microsoft Defender, allowing attackers full system control. Here's the technical breakdown and h

7 min readApr 17