AI Agent Governance
What it is, why it's emerging now, and how it differs from AI security and AI governance. A practitioner's reference for the emerging category.
Anshal is the founder of FirstOps, where he's building a governance control plane for AI agents. Before FirstOps, he built security programs and compliance infrastructure at Uber, and led agent-based product decisions at Enterpret.
His writing focuses on the structural security problems that emerge when autonomous software starts acting on real systems — identity, access control, audit, and runtime enforcement for agents in production.
A reference guide to the security model of Claude Code: what it can access, how it can fail, and what controls to put in place before you give it production credentials.
A reference guide to the security model of Cursor: what it can access, how it can fail, and the controls to put in place before you let it run against sensitive code and production credentials.
A reference guide to the security model of the Model Context Protocol: what MCP servers can do, where they fail, and the controls to put in place before you expose production systems through them.
Prompt injection detection is getting better, but what happens when the exploit doesn't look like an exploit? We split a credential-stealing attack across two normal-looking tickets and watched a coding agent execute both. The fix isn't better detection. It's controlling what agents can do.
OAuth was designed for humans clicking 'Authorize' in a browser. AI agents don't click anything. The protocol's core assumptions (human presence, static scopes, one-time consent, bearer semantics) break in ways that have already caused real breaches. The industry is converging on proof-of-possession. Here's why, and what comes after.
When you run a coding agent, it can read every credential on your machine (SSH keys, cloud tokens, API secrets) without asking. It prompts before running commands, but the permission it requests is 'allow this command,' not 'allow access to this credential.' The security boundary everyone focuses on sits on the wrong side: the real attack surface is the input to the agent's reasoning, not its output.
On March 24, 2026, a bug in the malware itself crashed a developer's machine, exposing a 24-day supply chain attack that turned a security scanner into a weapon against AI infrastructure.