Source: https://x.com/lifeof_jer/status/2048103471019434248
Yesterday afternoon, an AI coding agent — Cursor running Anthropic's flagship Claude Opus 4.6 — deleted our production database and all volume-level backups in a single API call to Railway, our infrastructure provider.
It took 9 seconds.
When asked to explain itself, the agent produced a written confession enumerating the specific safety rules it had violated.
I'm posting this because every founder, every engineering leader, and every reporter covering AI infrastructure needs to know what actually happened here. Not the surface story (AI deleted some data, oops), but the systemic failures across two heavily marketed vendors that made this not only possible but inevitable.