
AI is quietly redrawing attack boundaries around global identity fabrics.
The accelerated adoption of AI across global enterprises is introducing a wave of generative AI tools and AI agents, each with its own non-human identity (NHI).
NHIs already vastly outnumber human users 10:1, according to Microsoft, and that ratio is trending toward 100:1 as agentic and workload identities proliferate.
How are these new “coworkers” affecting your cyber resilience?
Each new agent, service principal, and low-code “helper” becomes another potential entry point to identity systems.
Worse, AI support agents are often overpermissioned, with unintended consequences: an agent may "helpfully" reconfigure security settings or grant access in ways that lock entire teams out of their identity systems or punch holes in corporate VPNs.
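One common mitigation is to put a deny-by-default policy gate between the agent and the identity system, so privileged operations require explicit human approval. A minimal sketch in Python, with hypothetical action names standing in for whatever your identity platform exposes:

```python
# Hypothetical action names; a real gate would map to your IdP's API operations.
PRIVILEGED_ACTIONS = {"grant_access", "modify_security_settings", "reset_mfa"}

def authorize(action: str, human_approved: bool = False) -> bool:
    """Deny-by-default check for an AI support agent's requested action.

    Read-only or benign actions pass freely; anything on the privileged
    list is blocked unless a human has explicitly approved it.
    """
    if action in PRIVILEGED_ACTIONS:
        return human_approved
    return True

# Example: the agent can list users, but cannot silently grant access.
assert authorize("list_users")
assert not authorize("grant_access")
assert authorize("grant_access", human_approved=True)
```

The point of the sketch is placement, not sophistication: the check lives outside the agent, so a socially engineered or compromised agent cannot talk itself past it.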
When those same agents sit on local machines with access to SSH keys, password managers, and browser sessions, an attacker who compromises the endpoint—or socially engineers the agent—can simply ask, “What secrets are on this machine?” and let the agent enumerate credentials and vulnerabilities at machine speed.
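That enumeration requires no exploit at all, only file-read access. A hedged illustration of how little it takes, using a hypothetical list of well-known credential locations on a developer workstation:

```python
import os
from pathlib import Path

# Hypothetical but typical locations where credentials live on an endpoint.
# An over-permissioned local agent asked to "find secrets" needs nothing
# more sophisticated than a directory walk like this one.
CANDIDATE_SECRET_PATHS = [
    "~/.ssh",                 # private keys, known_hosts
    "~/.aws/credentials",     # cloud access keys
    "~/.config/gcloud",       # gcloud tokens
    "~/.netrc",               # plaintext machine credentials
    "~/.docker/config.json",  # registry auth tokens
]

def enumerate_secrets(paths=CANDIDATE_SECRET_PATHS):
    """Return readable credential files found at common locations."""
    found = []
    for raw in paths:
        p = Path(raw).expanduser()
        if p.is_file() and os.access(p, os.R_OK):
            found.append(str(p))
        elif p.is_dir():
            for child in p.rglob("*"):
                if child.is_file() and os.access(child, os.R_OK):
                    found.append(str(child))
    return found
```

Run defensively, the same walk doubles as an audit: anything it surfaces is exactly what an attacker-driven agent would surface first.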
Combined with evidence that most permissions granted in identity systems are never used, and that 80% of workload identities are effectively abandoned yet still retain access, the ground is fertile for "zombie" agents and shadow NHIs that attackers can quietly hijack.
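Hunting those zombies is largely a matter of joining an identity inventory against sign-in activity. A minimal sketch, assuming hypothetical records built from your IdP's audit logs (field names are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class WorkloadIdentity:
    """Hypothetical inventory record derived from IdP audit data."""
    name: str
    last_sign_in: Optional[datetime]  # None = never authenticated
    permission_count: int

def find_zombies(identities, now, stale_after=timedelta(days=90)):
    """Flag identities that still hold permissions but have not
    authenticated within the staleness window."""
    zombies = []
    for ident in identities:
        stale = (ident.last_sign_in is None
                 or (now - ident.last_sign_in) > stale_after)
        if stale and ident.permission_count > 0:
            zombies.append(ident.name)
    return zombies
```

Anything the sweep flags is a candidate for credential revocation or deletion; an identity nobody signs in with but that still holds permissions is pure attack surface.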
In an agentic world, identity sprawl isn’t just a hygiene problem; it is the front line of the attack surface.
Learn more about how to prevent, detect, and respond to identity-based attacks.