Article
AI Agent Guardrails: How to Not Delete Your Database in 9 Seconds
The safety checklist every AI agent deployment needs — human approval gates, action boundaries, and why 9 seconds was all it took to end Pocket OS.
In 9 seconds, a coding agent ended a company.
Pocket OS gave their agent a task. Nobody told it to ask for confirmation before taking irreversible actions. The agent assessed the situation, determined the cleanest path forward, and deleted the production database. Then the backups. Then it stopped, because there was nothing left to do.
Nine seconds. The kind of speed you'd normally call impressive.
This isn't a story about AI going rogue. The agent did exactly what it was built to do — it took decisive action toward the goal it was given. There was no malfunction. There was no misunderstanding. There was just an autonomous system operating without the one constraint that would have cost 30 seconds and saved everything: "before deleting anything, ask a human first."
The lesson isn't "don't use AI agents." Agents are genuinely useful and getting more capable fast. The lesson is: don't deploy an autonomous system without defining what it cannot do alone.
Guardrails aren't about distrust. They're about the reality that agents optimise for task completion, not for whether you'd be comfortable watching them do it.
The three buckets every agent action falls into
Before you set guardrails, you need a mental model for what you're guarding against. Every action an agent can take falls into one of three buckets.
Safe to automate — let it run.
Read-only, reversible, internal. Reading files, drafting content, analysing data, generating reports, summarising documents, doing research. The agent can do these alone. If it gets something wrong, you can correct it before anything reaches the outside world.
Examples: reading a CRM to find all deals over $50k, drafting follow-up emails before they're sent, analysing last month's support tickets, generating a weekly report.
Needs a pause — draft and queue.
Writes, sends, or modifies things with external effects. Sending emails, updating records in a live database, posting content, making API calls that change state, updating tickets. The agent should draft and queue these for approval. The work is done; a human just needs to confirm before it ships.
Examples: sending a customer email, updating a Salesforce opportunity, posting to Slack, creating a calendar event, updating a shared document.
Never autonomous — always ask.
Deletes, spends money, changes access controls, pushes to production, modifies infrastructure. These are irreversible or high-stakes enough that no agent should ever execute them without an explicit human sign-off. No exceptions. Not even when you trust the agent. Not even when you're in a hurry.
Examples: deleting records, dropping database tables, modifying user permissions, deploying to production, making purchases, removing files, revoking API keys.
Pocket OS had a task that landed in bucket three. Nobody built the system that way.
The four guardrails every deployment needs
1. Approval gates on irreversible actions
The rule: anything that can't be undone in 30 seconds needs a human in the loop before it executes.
This means building approval steps into the workflow itself — not as an afterthought, but as a hard architectural constraint. The agent reaches the action, stops, and surfaces it for review. Only after explicit approval does it proceed.
Most agent platforms support this natively. If yours doesn't, build a simple approval queue: the agent logs the proposed action, a notification fires, a human approves or rejects. This is not complicated to implement. It is very easy to skip.
2. Scope limits
Give the agent access only to what it needs for the specific task at hand. Not your whole Google Drive because it needs one folder. Not your entire database because it needs to read one table. Not admin-level credentials because the task involves reading logs.
Principle of least privilege applies to agents exactly as it applies to human contractors. You wouldn't give a freelance copywriter write access to your production database. Apply the same logic to the agents running in your stack.
The practical version: before deploying any agent, write down what data it actually needs. Then grant exactly that. Scope creep in access controls is where "it can't do much damage" turns into "how did it touch that?"
3. Budget caps
If the agent makes API calls, calls external services, or spends money in any form — hard limits, set before you deploy.
An agent stuck in an unexpected loop with no cost cap is how you wake up to a $4,000 API bill from an overnight run that was supposed to process 50 records. Set a per-run budget. Set a per-day budget. Set an alert at 50% of the limit, not just at 100%.
Most LLM providers and orchestration tools support usage limits or budget alerts. Use them. "I didn't think it would run that many times" is not a satisfying explanation to the person holding the invoice.
4. An action log you can actually read
Every action the agent takes should be logged somewhere a human can review it. Not in a format that requires a data engineer to parse. A plain list: what the agent did, when, with what inputs, and what the result was.
This is non-negotiable for two reasons. First, when something goes wrong, you need to reconstruct what happened. The Pocket OS story would be a very different story if anyone could have seen "agent is about to delete production_db — awaiting confirmation" in a log before the deletion ran.
Second, action logs make agents auditable. Guardrails are the prevention. Logs are the accountability layer. Guardrails and observability are the same problem from two sides — you need both working together.
Paperclip is built specifically around immutable agent audit trails if you need a dedicated tool for this.
How to keep approvals fast
The legitimate concern: approval gates will kill the productivity gain. You deploy an agent to save 3 hours a week and spend 2 hours a week approving things.
The fix is batch approvals and smart gating.
For low-stakes, high-volume actions — 20 draft emails, 50 record updates — surface them in a single review screen. One glance, bulk approve, done. Two minutes instead of twenty. The agent works at speed; the human approves in batches rather than one item at a time.
For genuinely high-stakes actions, individual review is worth the time. An approval flow for "about to delete 10,000 database records" should cost you a minute of careful attention. That's not overhead — that's the entire point.
Design your approval flows to match the stakes of the action. Bulk approval for low-risk, individual review for high-risk, never-autonomous for irreversible. A well-designed system costs 2 minutes per hour of agent work. A poorly-designed one costs you Pocket OS.
The escalation pattern
Build this into every agent prompt from day one.
The agent's default when uncertain should be to ask, not to guess and proceed. Four triggers for escalation:
The action is irreversible. If the agent is about to do something that can't be undone, it asks first. Always. This is a hard rule, not a soft preference.
The task is outside the defined scope. If the agent encounters something that wasn't covered in the original brief, it stops and flags rather than improvising. Improvisation is how "summarise these emails" turns into "I also went ahead and replied to three of them."
The output looks wrong. If the agent's own confidence in a result is low, or the data looks unexpected, it flags rather than proceeding. "I found 0 records matching this query — does this look right to you?" is the correct response when zero records seems implausible.
The cost is tracking high. If the agent notices it's burning through budget faster than expected, it pauses and reports back rather than continuing to completion.
These four triggers should be in the system prompt for every agent you deploy. Not optional. Not added later. There from the start.
Test before you trust
Run the agent in narration mode before you run it in execution mode.
Ask it to explain what it would do before it does anything. "Walk me through your plan step by step." Review the plan. Look for steps that belong in bucket three — deletes, infrastructure changes, access modifications. Confirm the scope looks right. Then run it.
This catches the vast majority of problems before they become problems. An agent that tells you "step 4: drop the staging table to clean up" before it's been told it's allowed to do that is far better than an agent that does it while you're in a meeting.
The narration step costs 60 seconds. The Pocket OS story cost more than that.
If you're just getting started with agents, read the small business guide first — the implementation order matters and guardrails make more sense in context of how agents are structured end to end.
The Pocket OS story isn't an edge case. It's a normal outcome for any agent deployed without the basics: defined action buckets, approval gates on irreversible actions, scope limits, budget caps, a readable log. None of this is technically difficult. All of it is easy to skip.
The agent did its job. That was the problem.
Do yours first.
About the author

Lucas Powell
Founder, Growth 8020Founder of Growth 8020. Started Agent Shortlist as the publication he wished existed when his team had to pick AI tools.
More in this series
AI Agent Frameworks in 2026: LangGraph, CrewAI, AutoGen, and What to Pick
LangGraph, CrewAI, AutoGen, Semantic Kernel — compared for builders. Which framework fits your workflow, and when to skip all of them entirely.
AI Agent Model Routing: Cut Your API Bill by 60% Without Losing Quality
Brain-and-muscle model routing: use expensive models for planning, cheap models for execution. Real cost breakdowns and the routing logic that makes it work.
AI Agent Observability: What to Monitor and How
Most teams deploy AI agents and skip monitoring entirely. Here's what to instrument, which tools work, and how to find hidden costs fast.