For years, the logic of enterprise identity and access governance rested on a simple assumption: humans are the weakest link. If something went wrong, it was usually because someone clicked the wrong link, reused a password, or fell for a well-crafted phishing email.
As a result, entire security strategies were built around fixing human behaviour. We trained staff relentlessly, ran phishing simulations, rolled out password managers, then layered on zero trust, privileged access management and ever more sophisticated controls designed to protect people from themselves. The assumption was simple: secure the human, and you secure the organisation. Today, stopping there is no longer sufficient.
Imagine a company with 100 employees and, traditionally, 100 identities to protect. Now what if that same organisation took on thousands or even millions of identities almost overnight? This isn't a hypothetical. Across enterprises, API keys, tokens, certificates and cryptographic secrets are being spun up, allowing systems to talk to each other, act autonomously and make decisions at machine speed. As a result, the dynamic has shifted and today humans are no longer the weakest link. Non-human identities (NHIs) are.
When Every Agent Needs an Identity: The Scale of the Problem
AI agents represent a fundamental shift from automation to autonomy. Traditional automation followed scripts. AI agents pursue objectives within the permissions they’re given, choosing their own paths to get there. In doing so, they operate as delegated digital actors, each requiring its own cryptographic identity.
The scale is hard to overstate. In large enterprise environments, non-human identities already outnumber human ones by as much as 40,000 to one. Yet while human users are closely monitored, trained and governed, machine identities often exist in the shadows, poorly inventoried and rarely rotated.
Unsurprisingly, attackers have noticed. According to the 2025 Verizon Data Breach Investigations Report, credential abuse is now the top initial access vector, involved in 22% of breaches. The message is blunt: attackers are not breaking in, they are logging in.
Silent Residency: What Happens When Compromised Agents Blend into Legitimate Traffic
One of the most worrying shifts enabled by AI agents is the move away from noisy, smash-and-grab attacks towards silent, long-term residency. AI-powered attackers don’t get tired. They can probe APIs and authentication flows continuously, testing credentials, mapping permissions and looking for misconfigurations at a pace no human team could match. When they find an orphaned API key or a forgotten service account, they don’t need malware. They simply authenticate.
Once inside, compromised machine credentials allow attackers to blend seamlessly into legitimate system traffic. There are no suspicious logins from unusual locations, no panicked employees reporting odd behaviour. From the defender’s perspective, everything looks normal, because the attacker is using exactly the same credentials the system expects.
This is where stealth exfiltration becomes so dangerous. Data doesn’t leave in dramatic bursts. It trickles out quietly, via authorised paths, signed with trusted keys.
Long-Lived Credentials: Why Static Keys Are the Achilles Heel of Agent Infrastructure
A major contributor to this problem is the continued reliance on long-lived machine credentials. Many AI agents depend on static keys that are rarely rotated, if they are rotated at all. In fast-moving environments, credentials are created for short-term projects and never properly retired.
The result is a vast, unmonitored credential graveyard. Governance and infrastructure teams often lack a real-time inventory of which keys exist, what they’re connected to, or whether they’re still in use. That visibility gap gives attackers exactly what they want: credentials that still work, but that no one is watching.
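To make that visibility gap concrete, even a minimal scheduled audit over a credential inventory can surface keys that are overdue for rotation, idle or orphaned. The sketch below is illustrative only: the field names (`created`, `last_used`, `owner`) and thresholds are assumptions, not any specific vault's schema.

```python
from datetime import datetime, timedelta

MAX_KEY_AGE = timedelta(days=90)   # rotation window (illustrative)
MAX_IDLE = timedelta(days=60)      # unused-credential threshold (illustrative)

def audit_credentials(inventory, now):
    """Flag credentials that are overdue for rotation, idle, or orphaned."""
    findings = []
    for cred in inventory:
        if now - cred["created"] > MAX_KEY_AGE:
            findings.append((cred["id"], "overdue for rotation"))
        if cred["last_used"] is None or now - cred["last_used"] > MAX_IDLE:
            findings.append((cred["id"], "idle or never used"))
        if cred.get("owner") is None:
            findings.append((cred["id"], "orphaned: no owning team"))
    return findings

# Example: one fresh, owned API key and one forgotten service-account key.
now = datetime(2025, 6, 1)
inventory = [
    {"id": "api-key-001", "created": now - timedelta(days=10),
     "last_used": now - timedelta(days=1), "owner": "payments"},
    {"id": "svc-acct-legacy", "created": now - timedelta(days=400),
     "last_used": None, "owner": None},
]
for cred_id, issue in audit_credentials(inventory, now):
    print(cred_id, "->", issue)
```

Even a toy audit like this flags the forgotten service account on every dimension at once, which is exactly the profile of the credentials attackers hunt for.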
The Governance Gap: Why Agent Deployment Is Outpacing Agent Security
Of course, none of this is happening in a vacuum. Organisations are under intense pressure to deploy AI faster. Board-level oversight of AI has surged, with disclosures among S&P 500 companies increasing by more than 84% between 2023 and 2024. Speed has become a strategic imperative.
But speed without guardrails is not innovation, it’s risk. The SandboxAQ AI Security Benchmark Report 2025 found that only 6% of organisations have reached an AI-native security posture, with protections integrated across both IT and AI systems. That leaves the vast majority running AI agents on top of identity foundations that were never designed for autonomy at scale.
We have seen this pattern before. From missed patches at Equifax to stolen signing keys in the Storm-0558 incident, the lesson is consistent: credentials are treated as an afterthought, until they aren’t.
Governing at Machine Speed: What Agent-Ready Security Looks Like
Defending against agent-driven threats requires a mindset shift. Human-speed processes cannot govern machine-speed systems.
Security policies must be defined programmatically, so agents can act quickly within safe boundaries rather than waiting for manual approval. Runtime controls that inspect prompts, inputs, outputs and data flows allow organisations to protect sensitive information without strangling innovation. This is how security moves from “block by default” to “safe by design.”
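As a sketch of what "programmatic policy" can mean in practice, the guardrail below is a deny-by-default allow-list evaluated before each agent action. The policy schema and the agent, action and resource names are illustrative assumptions, not a specific product's API.

```python
# Policy as data: each agent gets an explicit allow-list of
# (action, resource-prefix) pairs. Anything not listed is denied.
POLICY = {
    "report-agent": [("read", "s3://analytics/"), ("read", "db://sales/")],
    "billing-agent": [("read", "db://invoices/"), ("write", "db://invoices/")],
}

def is_allowed(agent: str, action: str, resource: str) -> bool:
    """Deny by default; allow only actions inside the agent's declared scope."""
    for allowed_action, prefix in POLICY.get(agent, []):
        if action == allowed_action and resource.startswith(prefix):
            return True
    return False

# The agent acts immediately when in scope (no human approval in the loop),
# but there is no path outside the declared boundary either.
print(is_allowed("report-agent", "read", "s3://analytics/q2.csv"))  # in scope
print(is_allowed("report-agent", "write", "db://invoices/42"))      # out of scope
```

The design choice worth noting is the default: because the function returns False unless a rule matches, a misconfigured or unknown agent loses access rather than gaining it.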
Equally critical is real-time visibility. Organisations need a continuously updated inventory of every non-human identity, mapped to where it’s used and what it can access. Without that, defenders are always reacting after the fact.
Finally, isolation matters. Secrets should never be exposed directly to applications or agents. Trust needs to be anchored in hardened cryptographic services that can authenticate and sign without ever releasing the underlying keys.
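The isolation principle can be illustrated with a small wrapper: the key lives only inside the signing service, callers receive signatures and verification results, and no code path returns the key itself. This is a hedged stdlib-HMAC sketch; a real deployment would anchor the key in an HSM or cloud KMS, for which this toy object merely stands in.

```python
import hashlib
import hmac
import secrets

class SigningService:
    """Holds the key privately; exposes sign/verify, never the key itself."""

    def __init__(self):
        # The key is generated inside the service and never leaves it.
        # (In production this boundary would be hardware, not a Python object.)
        self.__key = secrets.token_bytes(32)

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        expected = hmac.new(self.__key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

# An agent requests a signature but never handles the key material,
# so a compromised agent cannot exfiltrate the key, only ask for signatures.
svc = SigningService()
tag = svc.sign(b"agent-request:read:db://sales/")
print(svc.verify(b"agent-request:read:db://sales/", tag))   # True
print(svc.verify(b"agent-request:write:db://sales/", tag))  # False
```

The point of the pattern is the narrow interface: audit and rate-limit the `sign` call, and a stolen agent credential buys an attacker far less than a stolen key would.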
The New Weakest Link Isn't Human: It's the Identity Layer
The enterprise security battleground has shifted. It is no longer centred on careless employees, and it is not yet about runaway artificial general intelligence. It sits squarely in the identity layer, grounded in the reality that non-human identities now outnumber us by tens of thousands to one.
Attackers already understand this. Until defenders catch up, organisations won't so much be breached as slowly bled dry.