Blog
California Just Deleted the "AI Did It" Defense
We’ve all wanted to reach for the convenient defense: we didn’t do it, it was the AI. Nobody believed that would hold up forever, but when the alternative was slowing down while your competitors didn’t, you took the risk. Just like when you told your 3rd grade teacher the dog ate your homework, though: she wasn’t buying it, and …
Risk and Progress: It's Complicated
Agentic AI adoption is moving faster than anyone’s ability to fully secure it. But waiting for the perfect solution isn’t a strategy — it’s how you get left behind. The real question isn’t whether to adopt, it’s how to move fast without losing control.
Maybe Don't 1.1: AI Agents Need Guardrails, MCP Isn't Enough
Maybe Don’t v1.1 expands guardrails beyond MCP to shell commands, adds a policy test matrix, defaults to audit-only mode for painless adoption, and generates AI-powered executive reports showing the value your guardrails deliver.
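To make audit-only mode concrete, here is a minimal sketch of how a deny rule might behave while auditing instead of blocking. The rule format, the `evaluate` function, and the log output are illustrative assumptions, not Maybe Don’t’s actual API.

```python
# Illustrative sketch only: the rule format and evaluate() signature are
# assumptions, not Maybe Don't's real API.
import fnmatch
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrails")

# Hypothetical deny rules spanning both MCP tools and shell commands.
DENY_RULES = [
    {"action": "shell", "pattern": "rm -rf *",
     "reason": "recursive deletes require human sign-off"},
    {"action": "mcp:write_file", "pattern": "*.env",
     "reason": "secrets files are off limits to agents"},
]

def evaluate(action: str, target: str, audit_only: bool = True) -> bool:
    """Return True if the action may proceed; log or block otherwise."""
    for rule in DENY_RULES:
        if rule["action"] == action and fnmatch.fnmatch(target, rule["pattern"]):
            if audit_only:
                # Audit-only mode: record what would have been denied,
                # but let the action through so adoption is painless.
                log.info("AUDIT would deny %s %r: %s", action, target, rule["reason"])
                return True
            log.warning("DENY %s %r: %s", action, target, rule["reason"])
            return False
    return True

evaluate("shell", "rm -rf build/")                    # logged, allowed
evaluate("mcp:write_file", ".env", audit_only=False)  # blocked
```

Defaulting `audit_only` to true is what makes adoption painless: teams see what the rules would catch before any rule is allowed to break a workflow.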
ISO 42001 Compliance for AI Agents
ISO 42001 establishes requirements for AI management systems—risk controls, audit trails, human oversight. Maybe Don’t provides the runtime enforcement layer that ensures your policies actually get enforced when agents take action, preventing catastrophic failures before they happen.
Why Your AI Agents Need Bowling Bumpers Too
AI agents are writing code faster than traditional guardrails can catch problems. Maybe Don’t AI sits between your AI agents and MCP servers, blocking dangerous operations before execution and teaching agents your standards through verbose deny messages—because your codebase isn’t a bowling game and gutter balls cost more than buying the next round.
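As a rough sketch of that placement: a check that runs before a tool call ever reaches the MCP server, returning a verbose deny message instead of the tool result. `forward_to_mcp` and the deny list are hypothetical stand-ins, not the actual Maybe Don’t integration.

```python
# Illustrative placement sketch; forward_to_mcp and DANGEROUS_TOOLS are
# hypothetical stand-ins, not the real Maybe Don't integration.
DANGEROUS_TOOLS = {"drop_database", "delete_table"}

def forward_to_mcp(tool: str, args: dict) -> dict:
    # Stand-in for the real round-trip to the downstream MCP server.
    return {"result": f"{tool} executed with {args}"}

def handle_tool_call(tool: str, args: dict) -> dict:
    """Gate a tool call before it reaches the MCP server."""
    if tool in DANGEROUS_TOOLS:
        # A verbose deny message teaches the agent why it was stopped,
        # so its next attempt can be compliant instead of a blind retry.
        return {"error": (
            f"'{tool}' is blocked by policy: destructive database operations "
            "require a human-approved migration. Run 'create_backup' first, "
            "then request approval through the change-management workflow."
        )}
    return forward_to_mcp(tool, args)

print(handle_tool_call("drop_database", {"name": "prod"}))  # verbose denial
print(handle_tool_call("list_tables", {}))                  # forwarded
```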
Your AI Has Zero Scars
Your AI agent has knowledge without wisdom. Maybe Don’t gives it the guardrails that stand in for the scars it never earned.
Guiding AI Agents Through Error Messages
AI error messages that guide behavior instead of just blocking it transform agents from rule-followers into intelligent systems that understand your standards.
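A sketch of the difference: a deny payload that carries the reason and a compliant next step, so the agent can correct course rather than retry blindly. The field names here are illustrative assumptions, not a prescribed schema.

```python
# Sketch of a deny payload that guides rather than just blocks; the
# field names are illustrative assumptions, not a prescribed schema.
def guiding_denial(rule: str, why: str, instead: str) -> dict:
    """Build a denial an agent can act on, not just bounce off."""
    return {
        "allowed": False,
        "rule": rule,
        "why": why,          # the standard being enforced
        "instead": instead,  # a concrete, compliant next step
    }

print(guiding_denial(
    rule="no-direct-push-main",
    why="All changes to main go through review.",
    instead="Push to a feature branch and open a pull request.",
))
```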
When AI Agents Go Rogue
Maybe Don’t AI provides custom guardrails that catch dangerous AI agent actions before they execute—because generic AI safety features don’t understand your specific business logic, and waiting until an agent deletes your database or orders 2,000 lbs of beef is too late.
MCP Is the Protocol Running Your AI Strategy
MCP is the protocol connecting AI agents to your systems right now, and it has zero built-in security for the chaos agents create—get guardrails before you need them.
Asimov's Three Laws of Robotics: From Science Fiction to AI Reality
Isaac Asimov's Three Laws of Robotics were science fiction's first AI safety framework. Here's why they exist, where they break down, and how Maybe Don't AI turns the idea into working guardrails.