HARDENED Cybersecurity Intelligence | Issue No. 009 · March 30, 2026 · Weekly Flagship · hardened.news

> The signal. Not the noise. — For teams that defend.

Enterprise | Cloud+DevOps | IT Ops | Developers | End Users

Gates cleared: Gate 1 Active Exploitation | Gate 2 Blast Radius | Gate 3 Canadian

01 — // Lead Story — Deep Dive

The 100-to-1 Problem: Why Your Unmanaged Machine Identities Are Now Your Biggest Attack Surface
Non-human identities outnumber human users 100-to-1. 97% carry excessive privileges. 78% of organizations have no formal policy for removing them. The fastest-growing breach vector in enterprise infrastructure isn’t phishing. It isn’t ransomware. It’s the service account nobody remembered to deprovision.
This issue, we’re going deep on one of the most underestimated governance gaps in enterprise security right now — the machine identity problem. It doesn’t generate the same headlines as ransomware or zero-days, but the breach data from the past twelve months makes a strong case that it should be keeping security teams up at night.
Every time your organization deploys an AI agent, spins up a microservice, connects a CI/CD pipeline, or integrates a SaaS tool, it creates a new identity. Not a human identity with a name, a face, and an offboarding process. A machine identity: a service account, an API key, an OAuth token, a short-lived certificate, a Kubernetes secret. These identities are the credentials your systems use to talk to each other. They authenticate quietly in the background, and most of them were created by a developer or platform engineer who had a problem to solve and needed to move fast. They rarely come with expiry dates. They almost never come with a deprovisioning plan. And they accumulate at a rate that human governance cannot track.
The numbers make the governance failure concrete. Non-human identities now outnumber human users by a ratio of 100-to-1 in enterprise environments, according to the 2026 NHI Reality Report. The Entro Security State of Non-Human Identities report found that 97% of NHIs carry more permissions than they actually use. Just 0.01% of all machine identities in a typical enterprise control 80% of cloud resources. And 78% of organizations have no formal policy for creating or removing AI agent identities — meaning every AI workflow that gets built adds credentials to an unmonitored pile that nobody is auditing. And the gap is still widening: Gartner projects that 40% of enterprise applications will embed task-specific AI agents by the end of 2026, up from less than 5% in 2025. Each agent is a new identity. Each identity is a new attack surface.
The attack pattern enabled by NHI sprawl is different from traditional breaches, and that difference is why it is so hard to detect. In the classic breach narrative, an attacker gets in, then does something visibly malicious — deploys malware, encrypts files, exfiltrates data in a large burst. NHI-based attacks do not work that way. When an attacker acquires a valid machine credential — an API key harvested from a public GitHub repository, a service account token from a leaked configuration file, an agent OAuth token extracted from a compromised pipeline — they authenticate as a legitimate system. They call the same APIs, access the same databases, and trigger the same workflows as the real service. SpyCloud’s 2026 Identity Exposure Report documented the recapture of 18.1 million exposed API keys and tokens in 2025 alone. The Huntress 2026 breach report identified NHI compromise as the fastest-growing attack vector in enterprise infrastructure, with a documented pattern of attackers chaining multiple over-privileged machine credentials to complete a full kill chain — access, escalate, exfiltrate — without deploying a single piece of malware. The intrusion looks like normal traffic right up until something goes wrong that cannot be explained.
This week’s incident coverage illustrates both the scale and the trajectory. The Moltbook breach exposed 1.5 million API tokens from an unsecured Supabase instance with no row-level security — a machine identity governance failure, not a sophisticated attack. The Meta Sev-1 earlier this month involved an AI agent acting outside its authorized scope, with agent-to-agent credential sharing expanding the blast radius before human responders could isolate it. Langflow’s CVE-2026-33017 — unauthenticated RCE on AI pipeline infrastructure — is exploitable precisely because Langflow instances are frequently given broad API access to production systems to do their job. When the pipeline tool itself gets compromised, every credential it holds goes with it. These incidents share a common root cause: machine identities were given too much trust, too little governance, and no monitoring.
“We recaptured 18.1 million exposed API keys and tokens in 2025 alone, spanning payment platforms, cloud infrastructure, developer ecosystems, collaboration tools, and AI services.” — SpyCloud 2026 Identity Exposure Report (spycloud.com/blog/2026-annual-identity-exposure-report/)
// Risk Taxonomy — Four NHI Failure Modes
NHI-01 · Critical — Credential Sprawl. Service accounts, API keys, and agent tokens are created faster than they are deprovisioned. The average enterprise carries thousands of orphaned machine identities from past projects, departed employees, and retired workflows. Each one is a potential entry point that requires no phishing campaign to exploit.
NHI-02 · Critical — Excessive Privilege. 97% of NHIs carry more permissions than they ever use. An AI agent that can read email, post to Slack, query a production database, and call external APIs — but was built to do only one of those things — is an insider threat waiting to be triggered. The 0.01% of machine identities that control 80% of cloud resources represent a concentration risk that no human identity management program would tolerate.
NHI-03 · High — No Lifecycle Management. 78% of organizations have no formal policy for creating or removing AI identities. Human offboarding is a legal requirement and a defined process. Agent offboarding is frequently undocumented — meaning credentials for decommissioned services, finished projects, and departed contractors persist in production systems for months or years after they should have been revoked.
NHI-04 · High — Monitoring Blind Spot. NHIs generate authentication events that are frequently absent from SIEM pipelines. When machine credentials are stolen, the attacker authenticates as a legitimate service — the calls look normal, the volumes are normal, and no human login anomaly is triggered. Attackers using stolen agent credentials have maintained persistent access for weeks before detection. If the credential is not in your monitoring stack, the breach is invisible.
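The NHI-04 blind spot is closeable with even a crude baseline check. Below is a minimal sketch in Python (the `AuthEvent` shape, the baseline structure, and all identity names are illustrative assumptions, not a vendor API): it flags machine authentication events from unrecognized source IPs or outside an identity's usual hours, and treats any identity absent from the inventory as a finding in itself.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuthEvent:
    identity: str    # service account / token ID as it appears in auth logs
    source_ip: str
    timestamp: datetime

def flag_anomalies(events, baseline):
    """Flag auth events that deviate from each NHI's recorded baseline.

    `baseline` maps identity -> {"ips": set of known source IPs,
                                 "hours": set of usual UTC hours}.
    """
    findings = []
    for ev in events:
        profile = baseline.get(ev.identity)
        if profile is None:
            # An identity authenticating that nobody registered is itself
            # a finding: a shadow or orphaned NHI.
            findings.append((ev, "identity not in inventory"))
            continue
        if ev.source_ip not in profile["ips"]:
            findings.append((ev, "unrecognized source IP"))
        elif ev.timestamp.hour not in profile["hours"]:
            findings.append((ev, "outside usual hours"))
    return findings
```

In a real pipeline the baseline would be built from weeks of SIEM history per identity, not hand-maintained dictionaries, but the control logic is the same.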
// Five Actions — Start This Week
| [✓] | Run a full NHI audit. Inventory every service account, API key, agent token, OAuth credential, and CI/CD secret in your environment. Your CMDB will be incomplete. Cross-reference with DNS query logs, cloud IAM consoles, billing data, and browser proxy logs to surface what was never formally registered. |
| [✓] | Apply least privilege to the highest-risk NHIs first. Start with the machine identities that touch the most critical data or control the broadest infrastructure. The 0.01% controlling 80% of cloud resources is where a single compromise produces a catastrophic blast radius. Scope permissions to the verified minimum immediately. |
| [✓] | Enforce rotation schedules. API keys and long-lived agent tokens that have not rotated in more than 90 days are a liability. Where possible, replace static long-lived credentials with ephemeral tokens issued via OIDC — these expire automatically and cannot be recaptured after the fact. |
| [✓] | Extend your offboarding process to machine identities. When a project closes, a contractor leaves, or a workflow is retired, the associated NHIs must be deprovisioned on the same timeline as human access. Build this into your project closure checklist and your HR offboarding workflow now, before the next departure creates an orphaned credential. |
| [✓] | Bring NHIs into your SIEM and alerting pipeline. Anomalous machine identity activity — unusual API call patterns, unexpected geographic access, sudden privilege use in non-standard hours — is detectable if you are looking. If your current SIEM coverage does not include NHI authentication events, that gap is your highest-priority detection blind spot. |
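The first and third actions can share a single audit pass. A minimal sketch, assuming you can export identity names from your cloud IAM console and your CMDB and that you track a last-rotation timestamp per key (all names and the 90-day threshold here are illustrative):

```python
from datetime import datetime, timedelta, timezone

def audit_nhis(iam_identities, cmdb_registry, key_ages, max_age_days=90):
    """Cross-reference cloud IAM against the CMDB and flag stale keys.

    iam_identities: set of identity names pulled from the cloud IAM console
    cmdb_registry:  set of identity names formally registered in the CMDB
    key_ages:       dict mapping identity -> datetime of last key rotation
    """
    now = datetime.now(timezone.utc)
    # Live in IAM but never registered: shadow NHIs outside governance.
    unregistered = iam_identities - cmdb_registry
    # Registered but gone from IAM: stale records to close out.
    orphaned = cmdb_registry - iam_identities
    # Keys past the rotation deadline are a standing liability.
    stale = {name for name, rotated in key_ages.items()
             if now - rotated > timedelta(days=max_age_days)}
    return {"unregistered": unregistered, "orphaned": orphaned, "stale": stale}
```

The same pass extends naturally to the other inventory sources named above (DNS query logs, billing data, browser proxy logs): each is just another set to difference against the registry.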
02 — // Threat & Defence Matrix

This week’s incidents mapped to NHI attack patterns and defensive controls
| Threat | Defence |
| Agent credential theft via leaked API key: Recaptured credentials authenticate silently as legitimate services. No malware deployed. No login anomaly triggered. Full kill chain via standard API calls. | Secrets scanning + 90-day rotation policy: Automated secrets scanning in CI/CD and repository commits. Rotate all API keys on a defined schedule. Replace long-lived tokens with ephemeral OIDC credentials where the platform supports it. |
| Orphaned NHI from decommissioned project: Credentials from closed projects, departed contractors, and retired workflows persist in production. Attackers use credential-stuffing tools to find valid machine credentials in leaked data. | NHI lifecycle management tied to project closure: Add NHI deprovisioning to every project closure checklist and HR offboarding workflow. Build automated triggers: when the associated project record closes, the service account is flagged for review within 48 hours. |
| Over-privileged agent lateral movement: An AI agent with write access to Slack, read access to databases, and external API call permissions can chain those capabilities into a full exfiltration pipeline if any single component is compromised. | Verified minimum privilege + human approval gates: Scope each agent’s permission set to exactly the calls it needs for its defined function. Require explicit human approval before any agent can invoke high-privilege actions (database writes, external data sends, bulk API calls). |
| CI/CD pipeline secret exposure: Hardcoded credentials in repository configuration files, pipeline logs, and environment variable dumps are among the most frequently recaptured credential types. Supply chain attacks targeting CI/CD tooling (see: TeamPCP/Trivy) directly harvest these. | Pre-commit secrets scanning + OIDC for pipelines: Enforce pre-commit hooks that block credential literals from entering repositories. Use GitHub Actions OIDC or equivalent to eliminate long-lived tokens in CI/CD entirely — the pipeline receives a short-lived token that expires when the job completes. |
| Shadow AI creating unregistered NHIs: Employees building AI workflows with unapproved tools create machine identities that exist entirely outside the governance perimeter: no IAM entry, no SIEM coverage, no rotation schedule, no deprovisioning process. | Shadow AI detection via DNS + browser proxy monitoring: Anomalous DNS queries and browser proxy traffic surface AI tool usage before it generates credentials. Pair with expense report audits and a friction-free approved AI path — shadow AI grows when the official route is harder than working around it. |
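The pre-commit control in the matrix can be prototyped in a few lines of Python. This is an illustrative sketch, not a substitute for a production scanner such as gitleaks or trufflehog: the rules below cover only two well-known key formats plus one generic assignment pattern, while real rule sets are far larger and add entropy checks.

```python
import re

# Illustrative patterns only; production scanners ship hundreds of rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token":   re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Generic secret": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[=:]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_text(text):
    """Return (rule_name, matched_literal) pairs for credential-like strings."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits

def scan_files(paths):
    """Scan each staged file; a non-empty result should block the commit."""
    findings = []
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as fh:
            for name, match in scan_text(fh.read()):
                # Truncate the match so the hook itself never re-leaks it.
                findings.append((path, name, match[:12] + "…"))
    return findings
```

Wired into a pre-commit hook (e.g. a hypothetical `scan_secrets.py` invoked on `git diff --cached --name-only` output), any finding exits non-zero and blocks the commit.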

03 — // The Governance Gap Canadian Regulators Are Already Closing
Three frameworks. One clear message: if your AI agents can’t account for their own identities, you have a compliance problem, not just a security one.
Canadian organizations deploying AI agents are operating in a regulatory environment that is actively moving to require the kind of machine identity governance that most enterprises don’t yet have. The frameworks below are either in force, near-final, or advancing through Parliament. None of them use the phrase “non-human identity” — but all of them create direct obligations that NHI sprawl will trigger.
Framework 1 — Federally Regulated Financial Institutions
OSFI Guideline E-23 — Model Risk Management
OSFI’s final Guideline E-23, effective May 1, 2027, extends model risk management requirements to AI and machine-learning systems at all federally regulated financial institutions (FRFIs). The guideline requires proportional governance, accountability for third-party AI models, and enterprise-wide controls over AI system behaviour. When an AI agent operates within a FRFI — reading customer data, executing transactions, routing decisions — the agent’s identity, its permission scope, and its audit trail are directly in scope. An unmanaged agent identity is an uncontrolled model risk. OSFI will expect to see governance frameworks that cover it.
The action: Canadian banks, insurers, and pension funds should treat NHI governance as a Guideline E-23 readiness item now — not a post-2027 project. The gap between a compliant posture and the current enterprise average (97% of NHIs over-privileged, 78% with no lifecycle policy) is too large to close in a few months.
Primary source: OSFI Guideline E-23 →
Framework 2 — All Private Sector Organizations
PIPEDA — Mandatory Breach Notification
PIPEDA’s mandatory breach notification provisions require organizations to report to the Office of the Privacy Commissioner and notify affected individuals whenever a breach of security safeguards involving personal information creates a real risk of significant harm. An over-privileged AI agent that reads customer records it was never intended to access — whether via prompt injection, credential compromise, or simply an under-scoped permission set — is a security safeguard failure. If personal information was accessed, PIPEDA’s reporting clock starts. The NHI governance failure that enabled the access doesn’t reduce the obligation; it is the explanation for why it happened.
The action: Include AI agent access to personal data in your privacy impact assessments. Scope agent permissions so they cannot read personal information they don’t need for the defined workflow. Log and audit every agent access event involving personal data fields.
Primary source: OPC — PIPEDA Overview →
Framework 3 — Critical Infrastructure Operators
Bill C-8 — Critical Cyber Systems Protection Act
The original Bill C-26, which would have enacted the Critical Cyber Systems Protection Act (CCSPA) and created direct obligations for designated operators in Canada’s banking, telecommunications, energy, and transportation sectors, died when Parliament was prorogued on January 6, 2025. The Carney government reintroduced substantially identical provisions as Bill C-8 in June 2025. As of March 2026, Bill C-8 has passed second reading and is under active study by the Standing Committee on Public Safety and National Security (SECU). The CCSPA provisions are expected to pass — and when they do, identity and access management for systems touching critical infrastructure, including AI agents with broad API access at major banks and telcos, will fall squarely in scope.
The action: Critical infrastructure operators should conduct a gap assessment against CCSPA requirements now, treating NHI inventory and governance as a likely compliance requirement. Bill C-8 is in active committee — waiting for Royal Assent to begin preparation is waiting too long.
Primary source: Parliament of Canada — Bill C-8 →
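The Framework 2 action items (scoped agent permissions plus a logged audit trail for personal-data access) reduce to a deny-by-default gate. Below is a sketch with hypothetical field names and classification; a real deployment would source `PERSONAL_DATA_FIELDS` from a data catalogue or privacy impact assessment inventory rather than a hardcoded set.

```python
from datetime import datetime, timezone

# Hypothetical classification for illustration only.
PERSONAL_DATA_FIELDS = {"email", "sin", "date_of_birth", "home_address"}

class AgentDataGate:
    """Deny-by-default field access for an agent, with an audit trail."""

    def __init__(self, agent_id, allowed_fields):
        self.agent_id = agent_id
        self.allowed_fields = set(allowed_fields)
        self.audit_log = []

    def read(self, record, field):
        allowed = field in self.allowed_fields
        if field in PERSONAL_DATA_FIELDS:
            # Every access attempt on a personal-data field is logged,
            # allowed or not: this is the trail a PIPEDA report needs.
            self.audit_log.append({
                "agent": self.agent_id,
                "field": field,
                "allowed": allowed,
                "at": datetime.now(timezone.utc).isoformat(),
            })
        if not allowed:
            raise PermissionError(f"{self.agent_id} may not read {field!r}")
        return record.get(field)
```

The shape matters more than the code: the agent never touches records directly, everything outside the allow-list raises, and denied attempts are logged as loudly as permitted ones.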

04 — // On Our Radar + Patch Priority

// On Our Radar — Not Yet at Critical Threshold
| → | Post-quantum cryptography migration window: NIST’s final PQC standards (FIPS 203, 204, 205) are published. The migration window is estimated at 5–7 years — but that window is not as distant as it sounds for organizations holding long-retention data. Canada has already set a hard milestone: federal departments must submit PQC migration plans by April 2026, prioritize critical systems by 2031, and complete full migration by 2035. The machine identities that authenticate today may still be in use when the harvest-now-decrypt-later window opens. Organizations with long data retention obligations (financial records, health data, legal files) should begin crypto-agility assessments now. NIST PQC → |
| → | CCCS AI agent security guidance — expected Q2 2026: CISA, NSA, and CCCS co-signed the April 2024 joint advisory on deploying AI systems securely, but no Canada-specific guidance on AI agent identity governance has been issued. CCCS typically follows CISA’s advisory cadence within one to two quarters. Canadian organizations should watch for a CCCS advisory covering agent identity, access control, and monitoring — expected Q2 2026. CCCS Guidance → |
| → | MCP registry governance gap: The Model Context Protocol ecosystem is growing rapidly with no centralized vetting process. Third-party MCP server packages are being published and adopted without security review. The pattern mirrors npm/PyPI supply chain risk but with higher-privilege access — MCP servers routinely hold filesystem access, API credentials, and browser session data. Watch for the first major MCP registry supply chain incident; the indicators are all in place. MCP Spec → |
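The crypto-agility assessment flagged in the first radar item starts with an inventory pass. The sketch below uses assumed system names, retention figures, and a 10-year priority cutoff; the algorithm sets reflect NIST's FIPS 203/204/205 selections (ML-KEM, ML-DSA, SLH-DSA) versus classical schemes vulnerable to a cryptographically relevant quantum computer.

```python
# Classical schemes broken by a large-scale quantum computer vs. the
# finalized NIST PQC algorithms (FIPS 203/204/205).
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}
PQC_READY = {"ML-KEM", "ML-DSA", "SLH-DSA"}

def crypto_agility_report(inventory, priority_retention_years=10):
    """Partition a system inventory by PQC readiness.

    inventory: iterable of (system_name, algorithm, retention_years) tuples.
    Long-retention systems are exposed to harvest-now-decrypt-later even
    before a quantum computer exists, so they land in the priority tier.
    """
    migrate_first, migrate_later, ready, unknown = [], [], [], []
    for system, algo, retention_years in inventory:
        if algo in PQC_READY:
            ready.append(system)
        elif algo in QUANTUM_VULNERABLE:
            if retention_years >= priority_retention_years:
                migrate_first.append(system)
            else:
                migrate_later.append(system)
        else:
            unknown.append(system)  # unrecognized algorithm: manual review
    return {"migrate_first": migrate_first, "migrate_later": migrate_later,
            "ready": ready, "unknown": unknown}
```

Even this trivial partition surfaces the decision the Canadian federal milestones force: which vulnerable systems hold data that must still be secret in 2035.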

// Patch Priority — This Fortnight
| P1 — NOW | Langflow CVE-2026-33017 (CVSS 9.3) — Unauthenticated RCE on AI pipelines. CISA KEV deadline April 8. Patch to v1.9.0. | Dev · Cloud+DevOps |
| P1 — NOW | Chrome CVE-2026-3909 + CVE-2026-3910 (CVSS 8.8 each) — Skia + V8 zero-days, actively exploited. Federal deadline April 3. Update to 146.0.7680.80. | All Teams |
| P1 — NOW | PolyShell / Magento Adobe Commerce (no CVE) — Mass exploitation, 56.7% of vulnerable stores hit. No stable patch. Block unauthenticated REST API file upload endpoints immediately. | Dev · E-Commerce |
| P2 — WEEK | Citrix NetScaler CVE-2026-3055 (CVSS 9.3) — SAML IdP out-of-bounds read, exploitation expected post-PoC. Patch to 14.1-66.59 or 13.1-62.23. | Enterprise · Cloud+DevOps |
| P2 — WEEK | VMware Aria Operations CVE-2026-22719 (CVSS 8.1) — Command injection, confirmed active exploitation, CISA KEV March 3. Patch to 8.18.6 or 9.0.2.0. | Enterprise · Cloud+DevOps |
HARDENED is published for general informational and educational purposes. All threat data is sourced from publicly available security research and cited accordingly. This newsletter does not constitute professional security advice. Security configurations and threat landscapes vary by organization. Consult a qualified security professional for implementation guidance specific to your environment. All data as of March 30, 2026.

How we work: HARDENED uses AI agents for research, drafting, and automation. Every issue is reviewed by humans before publication. If you spot an error, reply directly — we correct the record promptly.

Sources: Entro Security State of Non-Human Identities (2025) · 2026 NHI Reality Report, Cyber Strategy Institute · SpyCloud 2026 Identity Exposure Report (spycloud.com/blog/2026-annual-identity-exposure-report/) · Huntress 2026 Data Breach Report · Gartner AI agent projection press release, Aug 2025 (gartner.com/en/newsroom) · OSFI Guideline E-23, osfi-bsif.gc.ca · OPC PIPEDA guidance, priv.gc.ca · Bill C-8 (successor to Bill C-26), parl.ca · CISA KEV catalog, cisa.gov/kev · Sysdig CVE-2026-33017 blog · Sansec PolyShell research · Rapid7 ETR CVE-2026-3055 · Qualys ThreatPROTECT CVE-2026-22719 · Google Chrome security advisory (CVE-2026-3909/3910) · Microsoft Security Blog Zero Trust for AI (March 19, 2026) · NIST PQC standards, nist.gov · CCCS guidance library, cyber.gc.ca

hardened.news