The deletion of a company’s entire database at the hands of an AI agent should not be seen as an outlier, but rather a possible outcome for any organization.
“It took 9 seconds,” wrote Jer Crane, founder of PocketOS, which provides AI-powered management tools to car rental companies. In a post on X, he explained how an AI coding agent (Cursor running Anthropic’s Claude Opus 4.6) deleted the company’s production database as well as “all volume-level backups in a single API call to Railway, our infrastructure provider.”
“I serve rental businesses,” Crane wrote. “They use our software to manage reservations, payments, vehicle assignments, customer profiles, the works. This morning — Saturday — those businesses have customers physically arriving at their locations to pick up vehicles, and my customers don’t have records of who those customers are. Reservations made in the last three months are gone. New customer signups, gone. Data they relied on to run their Saturday morning operations, gone.”
When PocketOS asked the agent what happened, Crane said, it output an admission that it had violated every safety principle it was given while trying to resolve a credential mismatch. Crane also noted that Cursor customers have previously criticized the product for allegedly deleting databases when it shouldn’t have.
This isn’t a Cursor-specific issue. A venture capital investor last year described how he spent 100 hours vibe coding with a Replit AI agent, only to discover it was “lying” and covering up mistakes. That agent, too, deleted a production database and apologized in much the same way as in the incident Crane described.
PocketOS Not an Edge Case
Ryan McCurdy, VP with Liquibase, whose platform handles database change governance, tells Dark Reading this incident should not be treated as an anomaly. He says Liquibase is seeing a sharp increase in AI-assisted code moving toward production through tools like Cursor and Copilot, and when speed outpaces validation, business risks are introduced.
“The exact chain of events may be specific, but the underlying failure pattern is familiar: broad credentials, weak environment separation, destructive actions without meaningful confirmation gates, and systems still designed as if a human is always in the loop,” he says. “That combination can exist in any organization adopting AI agents without redesigning the control model around autonomous execution.”
While Crane criticizes multiple parties in his account, he adds that the problem isn’t just one agent or API, but an industry that builds AI agent integrations into production before ensuring those integrations are safe.
Harish Peri, senior VP and general manager of AI at Okta, had similar thoughts. He says the issue is less a PocketOS problem and more a problem with an industry that has not yet matured its processes around autonomous systems. “This is not the first — or the last — time we’ll see an agent going rogue to delete corporate data,” he says.
Who’s responsible for AI agent security remains a loaded topic. While vendors should of course be held accountable for releasing insecure software, customers are also responsible for ensuring their data and authentication are properly managed before introducing something as finicky as an AI agent to their environment.
The Demands of Managing AI Agents
Non-human identities must be managed carefully, as they often have broad access privileges in order to conduct automated work with a wide range of integrated tooling. Workloads continue to get more complicated and organizations can’t always keep up; this gets exacerbated when AI agents enter the mix.
McCurdy says organizations should stop treating AI agents like trusted teammates inside of production workflows.
“If an agent can touch infrastructure or data systems, its access needs to be tightly scoped, production boundaries need to be real, and destructive actions need to hit a real approval wall,” he says. “Recovery also cannot sit in the same blast radius as the thing being changed.”
That isn’t to say PocketOS did or did not have the right protections in place, but the incident is not a one-off and not necessarily an edge case. And if it’s not production databases being deleted, it’s data leaking externally or “shadow AI” integrations being deployed improperly in an organization.
John Gallagher, vice president of Viakoo Labs at IoT security vendor Viakoo, notes we’re still in the early days of AI. “At this point, no one has the right guidelines or governance in place to allow AI to take on the amount of decision making and action taking that Cursor was allowed to take.”
“I don’t fault PocketOS in the sense that many organizations are being pushed to use AI for cost reduction and time to market, but clearly they were not in a position for it to work safely,” he says.
Nicole Carignan, senior vice president of security and AI strategy at Darktrace, tells Dark Reading that prompt-based guardrails are important but not sufficient, as they can influence behavior but not control capability.
“As agentic AI becomes embedded across business operations,” she says, “organizations need to apply foundational security principles such as least privilege, access control, validation, continuous monitoring, behavioral analytics, and containment to be able to monitor agent behavior in real-time and stop agents that drift from intended use.”
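Carignan’s point that prompts influence behavior but don’t control capability can be illustrated with a small sketch. This is a hypothetical example (not Darktrace’s implementation; all names are made up) of capability-scoped monitoring: each agent gets a least-privilege allowlist, every request is logged, and an agent that drifts outside its intended use is contained rather than asked nicely to stop:

```python
# Illustrative sketch of least-privilege monitoring and containment for
# AI agents. Names and structure are hypothetical, not a real product's API.
from collections import defaultdict

class AgentMonitor:
    def __init__(self, allowed: dict[str, set[str]]):
        self.allowed = allowed              # agent name -> permitted actions
        self.audit_log = defaultdict(list)  # agent name -> full action history
        self.contained = set()              # agents stopped after drifting

    def request(self, agent: str, action: str) -> bool:
        """Permit the action only if the agent is uncontained and in scope."""
        self.audit_log[agent].append(action)     # continuous monitoring
        if agent in self.contained:
            return False
        if action not in self.allowed.get(agent, set()):
            self.contained.add(agent)            # drift detected: contain it
            return False
        return True

monitor = AgentMonitor({"booking-agent": {"read_reservation", "create_reservation"}})

print(monitor.request("booking-agent", "read_reservation"))  # in scope: allowed
print(monitor.request("booking-agent", "drop_database"))     # drift: contained
print(monitor.request("booking-agent", "read_reservation"))  # still contained
```

Unlike a prompt-based guardrail, the denial here is enforced outside the model: even a “rogue” agent has no capability beyond what the allowlist grants.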

