HARDENED
Cybersecurity Intelligence
Daily Briefing  ·  Thursday, April 23, 2026  ·  hardened.news
>  The signal. Not the noise.    For teams that defend.
Lead Story
New Release · AI Agent Governance · Open Source · MIT Licence · Enterprise · Cloud+DevOps · Dev
Microsoft Open-Sources the Agent Governance Playbook. Sub-Millisecond Policy Enforcement Across Every Major Framework — No More Security-vs-Speed Trade-Off.
Eighty-eight per cent of organizations reported confirmed or suspected AI agent security incidents in the last year. Only 14.4 per cent of AI agents reach production with full security and IT approval. Microsoft’s Agent Governance Toolkit — released April 2 under the MIT licence — is the first open-source answer to that gap built around something most governance tools fail to deliver: enforcement that does not slow the agent down.

The governance problem with AI agents is not that organizations lack policies. Most have them. The problem is that those policies live in documents, not runtimes — enforced by humans reviewing deployment plans, not by systems intercepting agent actions before they execute. That gap is why 88 per cent of organizations reported confirmed or suspected AI agent security incidents in the last twelve months (Gravitee State of AI Agent Security 2026), and why only 14.4 per cent of agents go live with full security and IT sign-off. The toolkit Microsoft published on April 2 under the MIT licence at microsoft/agent-governance-toolkit is a seven-package system designed to move policy enforcement from document to runtime — and to do it without the latency penalty that has made governance a deployment bottleneck. Measured p99 enforcement latency: under 0.1 milliseconds.

The architecture spans seven packages.

Agent OS enforces limits at each action boundary rather than routing through a central policy service: every action fires against YAML, OPA Rego, or Cedar policies in a stateless evaluation loop, with measured p99 latency under 0.1 milliseconds, making runtime enforcement practical in workloads where a millisecond matters.

Agent Mesh solves the trust measurement problem that cryptographic identity alone cannot: each agent carries a decentralised identifier with Ed25519 keys, but its 0–1,000 trust score is earned through observed behaviour over time via the Inter-Agent Trust Protocol, across five behavioural tiers. When an agent deviates from its established pattern, the score decays automatically; no human has to notice the change for trust restrictions to kick in.

Agent Runtime provides execution isolation: agents run in defined rings with bounded resource access, saga orchestration handles rollback of multi-step operations on failure, and a kill-switch mechanism supports emergency termination without waiting for the current task to complete.

Agent SRE brings site reliability engineering discipline to agent operations: SLOs, error budgets, and circuit breakers define acceptable performance boundaries, with progressive delivery pipelines and chaos engineering modules for controlled validation before full deployment.

Agent Compliance automates regulatory mapping against the EU AI Act, HIPAA, and SOC 2, collects evidence against all ten OWASP Agentic Top 10 risks, and produces audit artefacts as a byproduct of normal operation.

Agent Marketplace governs the plugin supply chain: Ed25519 code signing and trust-tiered capability gating mean an agent below a trust threshold cannot load high-privilege plugins regardless of orchestrator instruction.
Agent Lightning closes the reinforcement learning governance gap most frameworks leave open — policy-enforced runners govern the training pipeline, with reward shaping subject to the same policy controls as production agent actions.
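To make the action-boundary model concrete, here is a minimal sketch of a stateless, first-match policy evaluation with trust-score gating in the style Agent OS describes. The rule schema, action names, and thresholds are invented for this illustration; they are not the toolkit's actual policy format, which accepts YAML, OPA Rego, or Cedar.

```python
from dataclasses import dataclass

# Hypothetical rule shape: which action, what trust floor, what effect.
@dataclass(frozen=True)
class Rule:
    action: str      # e.g. "fs.write", "net.http"
    min_trust: int   # 0-1000 trust score required (Agent Mesh-style)
    effect: str      # "allow" or "deny"

POLICY = [
    Rule(action="net.http", min_trust=400, effect="allow"),
    Rule(action="fs.write", min_trust=700, effect="allow"),
]

def evaluate(action: str, trust: int) -> str:
    """Stateless evaluation: first matching rule wins, default deny."""
    for rule in POLICY:
        if rule.action == action:
            if rule.effect == "allow" and trust >= rule.min_trust:
                return "allow"
            return "deny"
    return "deny"  # no rule matched: fail closed

print(evaluate("net.http", 550))    # allow: trust clears the 400 floor
print(evaluate("fs.write", 550))    # deny: below the 700 floor
print(evaluate("proc.spawn", 999))  # deny: unlisted action, default deny
```

The loop is stateless by design: no per-request lookup against a central service, which is what makes sub-millisecond p99 enforcement plausible.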

The toolkit is framework-agnostic by design, with integration points across a range of orchestration systems including LangChain callback handlers, CrewAI task decorators, Google ADK’s plugin system, Microsoft Agent Framework middleware, the OpenAI Agents SDK, AutoGen, LlamaIndex, Dify, PydanticAI, Microsoft Foundry Agent Service, Haystack, and LangGraph. Packages are available in Python, TypeScript, Rust, Go, and .NET. For Canadian enterprises navigating the OSFI Guideline E-23 readiness window — which requires proportional model risk management for AI components used by federally regulated financial institutions by May 1, 2027 — Agent Compliance’s automated evidence collection provides a concrete starting point for the audit trail E-23 will require. The CCCS has not yet published AI agent security guidance despite its expected Q2 2026 window; this toolkit is the most complete publicly available reference implementation until that guidance arrives.
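The framework-agnostic shape of those integration points can be sketched generically: governance sits as a wrapper around tool invocation, which is roughly what a callback handler, decorator, or middleware hook amounts to in each framework. The `enforce` stub and action names below are invented stand-ins, not any framework's real API.

```python
import functools

def enforce(tool_name: str) -> bool:
    # Stand-in for a real policy engine; here a hardcoded denylist.
    blocked = {"shell.exec"}
    return tool_name not in blocked

def governed(tool_name: str):
    """Wrap a tool function so every call passes the policy gate first."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not enforce(tool_name):
                raise PermissionError(f"policy denied: {tool_name}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@governed("search.web")
def search(query: str) -> str:
    return f"results for {query}"

print(search("CVE-2026-33032"))  # permitted: search.web is not blocked

@governed("shell.exec")
def run(cmd: str) -> str:
    return cmd

try:
    run("rm -rf /")
except PermissionError as e:
    print(e)  # denied before the tool body ever executes
```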

→ Key Takeaway
Enterprise and Cloud+DevOps teams deploying AI agents: The governance question most deployments cannot currently answer is: what stops an agent from taking an unauthorized action at the moment it tries? If the answer is a deployment review process or an acceptable use policy, that is a document, not a control. Runtime enforcement has historically been deprioritized because it adds latency — a real operational cost in agent workloads where response time matters. The Microsoft Agent Governance Toolkit removes that specific objection. Clone the repository, run Agent OS in audit-only mode against your highest-privilege agent environment, and review what it would have blocked. That list is your current runtime governance exposure. The sub-0.1ms p99 enforcement claim is worth testing against your own workloads; if it holds, the performance argument against runtime governance is gone and the remaining gap is one of will. For OSFI E-23 readiness, Agent Compliance’s automated evidence collection against the OWASP Agentic Top 10 provides a concrete starting point for the audit trail the guideline requires by May 2027. HARDENED does not endorse or recommend specific vendors. Tools are listed for awareness only.
Quick Hits
01
The 92% Blind Spot: New Survey Data Shows AI Identities Are Living Inside Core Business Systems With No Governance in Sight

Ninety-two per cent of organizations cannot account for every AI identity running inside their environments — that is the headline finding from joint Saviynt and Cybersecurity Insiders research published this week. Seventy-one per cent of CISOs confirm AI tools access core business systems — Salesforce, SAP, and equivalents — while only 16 per cent govern that access effectively. Eighty-six per cent have no formal access policies for AI identities at all. Seventy-five per cent have already found unsanctioned AI tools running inside their environments. The sharpest number: 95 per cent of respondents doubt they could detect or contain AI misuse, and only 5 per cent of security leaders feel confident they could contain a compromised AI agent. These numbers describe an organization that has granted access and lost track of where it went. The fix is not a new product category — it is applying the identity governance discipline that already exists for human accounts to AI accounts, systematically, before an incident forces it.

Intel · Action Required · AI Identity Risk · Enterprise · Cloud+DevOps
02
Microsoft Emergency Patch — CVE-2026-40372 (CVSS 9.1): Forge Any Auth Cookie, Gain SYSTEM — Patching Is Not Enough Without Key Ring Rotation

Microsoft released an out-of-band emergency patch this week for CVE-2026-40372 (CVSS 9.1), an improper cryptographic signature verification flaw in ASP.NET Core Data Protection. The flaw affects Microsoft.AspNetCore.DataProtection versions 10.0.0 through 10.0.6 on Linux and macOS; Windows deployments are not affected. An unauthenticated remote attacker can forge authentication cookies to gain SYSTEM-level privileges over the network. The operational detail that matters most: updating to 10.0.7 stops new forged tokens from being accepted, but any tokens forged during the vulnerable window remain valid until the DataProtection key ring is explicitly rotated. Patching without rotating does not fully close the exposure. Teams running ASP.NET Core 10 on Linux or macOS should update to 10.0.7 and rotate the DataProtection key ring immediately, then review authentication logs for anomalous session patterns since the version 10.0.0 release as an indicator of prior exploitation.
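The log-review step amounts to a window check: flag any session issued between the 10.0.0 release and your patch-plus-rotation time. A minimal sketch with placeholder dates and an invented record shape; substitute your real log format and the actual release and rotation timestamps.

```python
from datetime import datetime, timezone

# Placeholders only: substitute the actual 10.0.0 release date and the
# moment you completed the 10.0.7 update and key ring rotation.
WINDOW_START = datetime(2026, 1, 1, tzinfo=timezone.utc)
WINDOW_END = datetime(2026, 4, 22, tzinfo=timezone.utc)

# Hypothetical auth-log records: (session_issued_at, principal).
sessions = [
    (datetime(2025, 12, 15, tzinfo=timezone.utc), "svc-batch"),
    (datetime(2026, 3, 2, tzinfo=timezone.utc), "admin"),
]

# Anything issued inside the window deserves scrutiny: a forged cookie
# minted then stays valid until the key ring is rotated.
suspect = [who for issued, who in sessions if WINDOW_START <= issued <= WINDOW_END]
print(suspect)
```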

Critical · CVSS 9.1 · Emergency Patch · Dev · Cloud+DevOps · Enterprise
CVE Watch
CVE-2026-33032 (nginx-ui “MCPwn”): Two HTTP Requests to Full Nginx Takeover — MCP Endpoints Exposed Without Authentication, 2,600+ Instances Confirmed Active

Pluto Security named this one MCPwn, and the name earns it. nginx-ui — a widely deployed web-based Nginx management interface — ships with a Model Context Protocol integration that exposes a /mcp_message endpoint without authentication. The endpoint’s IP allowlist defaults to empty, which nginx-ui interprets as allow-all — meaning any network-reachable attacker has unrestricted access to all 12 privileged MCP tools from the moment of deployment. Two HTTP requests are sufficient to achieve full Nginx server control: no credentials, no session token, no exploit chain required. Recorded Future confirmed active exploitation as of April 13, 2026, with Shadowserver tracking more than 2,600 exposed instances. The connection to today’s lead is direct: this is what unauthenticated MCP deployment looks like at production scale. It is a different class of problem from the protocol-level architecture gaps Hardened covered in Issue No. 014 — this is a specific product shipping a specific integration without enforcing the authentication controls the protocol requires. CVE-2026-33032 first ran in HARDENED Issue No. 025 (April 21) as a CVE Watch entry; today’s entry provides expanded operational context on the authentication design flaw and remediation priority for teams that have not yet patched. Patched in v2.3.4 (March 15); the vendor recommends updating to v2.3.6. If you are running nginx-ui and have not updated, treat the MCP endpoints as actively reachable until you verify otherwise.

Vendor: nginx-ui  ·  CVE: CVE-2026-33032  ·  CVSS: 9.8 Critical  ·  Affected: nginx-ui < v2.3.4  ·  Fix: v2.3.6  ·  Exploitation: Confirmed in wild (Recorded Future, April 13, 2026)
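The flaw class is easy to reproduce in miniature. A hedged sketch (not nginx-ui's actual code) of how an empty-allowlist default becomes allow-all, next to the fail-closed alternative:

```python
def is_allowed_flawed(client_ip: str, allowlist: list[str]) -> bool:
    # The dangerous pattern: empty allowlist treated as "no restriction".
    if not allowlist:
        return True
    return client_ip in allowlist

def is_allowed_fail_closed(client_ip: str, allowlist: list[str]) -> bool:
    # Fail-closed: an empty allowlist admits no one.
    return client_ip in allowlist

print(is_allowed_flawed("203.0.113.9", []))       # True: any attacker passes
print(is_allowed_fail_closed("203.0.113.9", []))  # False: deny by default
print(is_allowed_flawed("10.0.0.5", ["10.0.0.5"]))  # True: explicit entry
```

An allowlist that is permissive when unconfigured is not an allowlist; it is a default-open gate that happens to have a configuration file.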
Compliance Tip of the Day
NIST CSF 2.0 — GV.RM — Govern: Risk Management Strategy
If Your Risk Appetite Statement Does Not Cover Autonomous Agent Actions, Your AI Governance Policy Has a Structural Gap

GV.RM under NIST CSF 2.0 requires that organizations establish, communicate, and monitor a risk management strategy reflecting their risk appetite. Most organizations have done this for traditional IT risk — defining tolerable thresholds for availability loss, data exposure, or access risk and building controls around those thresholds. AI agents introduce a category most existing risk appetite statements do not address: autonomous action risk, meaning the risk that an agent takes a consequential action (sends a message, modifies a record, calls an external API) without human review. The Microsoft Agent Governance Toolkit released this week operationalises GV.RM for agent deployments: the Agent OS policy engine enforces action-level boundaries at runtime, Agent Compliance maps those policies to regulatory frameworks, and Agent Mesh’s trust scoring system provides a numerical basis for behavioural risk tolerance. Concrete action (GV.RM): Update your risk appetite statement to define a risk tier for autonomous agent actions. Specify which action categories require human approval, which are permitted with logging only, and which are prohibited regardless of agent instruction. Implement runtime enforcement against those tiers — a policy document without enforcement is not a control. NIST CSF 2.0 reference: nist.gov/cyberframework.
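The three-tier scheme the concrete action describes can be encoded directly. The category names below are hypothetical examples, not a prescribed taxonomy; the point is that the tiers become machine-checkable the moment they leave the policy document.

```python
from enum import Enum

class Tier(Enum):
    REQUIRE_APPROVAL = "human approval required"
    LOG_ONLY = "permitted with logging"
    PROHIBITED = "denied regardless of agent instruction"

# Illustrative mapping from action categories to risk tiers.
ACTION_TIERS = {
    "send_external_email": Tier.REQUIRE_APPROVAL,
    "read_internal_docs": Tier.LOG_ONLY,
    "modify_payroll_record": Tier.PROHIBITED,
}

def tier_for(action: str) -> Tier:
    # Unmapped actions default to the most restrictive tier: a risk
    # appetite statement without a default is not a control either.
    return ACTION_TIERS.get(action, Tier.PROHIBITED)

print(tier_for("read_internal_docs").value)  # permitted with logging
print(tier_for("wire_transfer").value)       # unmapped: denied
```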

HARDENED

This newsletter does not constitute professional security advice. Security configurations and threat landscapes vary by organization. Consult a qualified security professional for implementation guidance specific to your environment.

How we work: HARDENED uses AI agents for research, drafting, and automation. Every issue is reviewed by humans before publication. If you spot an error, reply directly — we correct the record promptly.

Sources: Microsoft Open Source Blog (Agent Governance Toolkit, April 2, 2026), opensource.microsoft.com · GitHub microsoft/agent-governance-toolkit (MIT licence), github.com/microsoft/agent-governance-toolkit · Microsoft Tech Community (architecture deep dive), techcommunity.microsoft.com · InfoWorld (Agent Governance Toolkit coverage), infoworld.com · Help Net Security (Agent Governance Toolkit), helpnetsecurity.com · Socket.dev (package analysis), socket.dev · Phoronix (toolkit release), phoronix.com · Digital Applied (toolkit coverage), digitalapplied.com · Gravitee State of AI Agent Security 2026 (88% incident rate, 14.4% full approval figures), gravitee.io · Cybersecurity Insiders / Saviynt “CISO AI Risk Report 2026” (April 21, 2026), GlobeNewsWire, globenewswire.com · The Hacker News (CVE-2026-40372 ASP.NET Core), thehackernews.com · BleepingComputer (CVE-2026-40372), bleepingcomputer.com · GitHub dotnet/announcements (CVE-2026-40372 advisory), github.com/dotnet/announcements · Security Affairs (CVE-2026-40372), securityaffairs.com · eSecurity Planet (CVE-2026-40372), esecurityplanet.com · The Hacker News (CVE-2026-33032 nginx-ui MCPwn), thehackernews.com · Security Affairs (CVE-2026-33032), securityaffairs.com · Rapid7 (CVE-2026-33032 analysis), rapid7.com · Recorded Future (CVE-2026-33032 exploitation confirmation, April 13, 2026), recordedfuture.com · OSFI Guideline E-23 (effective May 1, 2027), osfi-bsif.gc.ca · NIST CSF 2.0 (GV.RM), nist.gov/cyberframework
