I think the perspective here is completely wrong. The problem is that people are now building our world around tooling that <i>eschews accountability</i>.<p>Over a decade ago now, I had a conversation with Gerald Sussman which had enormous influence on me: <a href="https://dustycloud.org/blog/sussman-on-ai/" rel="nofollow">https://dustycloud.org/blog/sussman-on-ai/</a><p>> At some point Sussman expressed how he thought AI was on the wrong track. He explained that he thought most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. "I'm not interested in that. I want software that's accountable." Accountable? "Yes, I want something that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead." He then said something that took me a long time to process, and at first I mistook for being very science-fiction'y, along the lines of, "If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court."<p>Years later, I found out that Sussman's student Leilani Gilpin wrote a dissertation which explored exactly this topic. Her dissertation, "Anomaly Detection Through Explanations", explores a neural network <i>talking to</i> a propagator model to build a system that explains behavior. <a href="https://people.ucsc.edu/~lgilpin/publication/dissertation/" rel="nofollow">https://people.ucsc.edu/~lgilpin/publication/dissertation/</a><p>There has been followup work in this direction, but more important to me in this comment than the particular direction of computation is that we recognize that it is <i>perfectly reasonable</i> to hold AI corporations to account.
After all, they are making many assertions about systems that otherwise cannot be held accountable, so the best thing we can do in their stead is hold <i>them</i> accountable.<p>But a much better path would be to not use systems which fail to have these properties, and expand work on systems which do.
by paroneayea
|
May 6, 2026, 11:52:53 AM
The article seems to assume that this company added an endpoint for deleting the database. My reading of the original article was that the cloud provider offers an API to manage their resources, which includes an API to delete a volume.<p>The article proposes automation as the solution for such mistakes. But infrastructure automation tools like Terraform rely on the exact API that resulted in the database getting deleted.<p>IMO the biggest mistakes were:<p>1. Having an unrestricted API token accessible by AI. Apparently they were not aware that the token had that many permissions.<p>2. No deletion protection on the production database volume.<p>3. Deleting a volume immediately deletes all associated snapshots. Snapshot deletion should be delayed by default. I think AWS has the same unsafe default, but at least their support can restore the volume. <a href="https://alexeyondata.substack.com/p/how-i-dropped-our-production-database" rel="nofollow">https://alexeyondata.substack.com/p/how-i-dropped-our-produc...</a><p>AI wasn't the main issue (though it grabbing tokens from random locations is rather scary). But automation isn't the answer either; a Terraform misconfiguration could have just as easily deleted the database.<p>Their cloud provider needs to work on safe defaults (limited privileges and delayed snapshot deletion), and communicating more clearly (the user should notice they're creating an unrestricted token).
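A minimal sketch of points 2 and 3 above, as a hypothetical provider-side guard (the `Volume` and `request_delete` names are illustrative, not Railway's or AWS's actual API):

```python
import datetime

class Volume:
    """Toy model of a cloud volume with a deletion-protection flag."""
    def __init__(self, name, deletion_protection=True):
        self.name = name
        self.deletion_protection = deletion_protection
        self.purge_after = None  # snapshots survive until this time

def request_delete(volume, retention_days=7):
    """Refuse protected volumes; soft-delete others so snapshots are
    purged only after a retention window, never immediately."""
    if volume.deletion_protection:
        raise PermissionError(
            f"{volume.name}: deletion protection is on; disable it explicitly first"
        )
    # Mark for delayed purge instead of destroying snapshots right away.
    volume.purge_after = datetime.datetime.now(
        datetime.timezone.utc
    ) + datetime.timedelta(days=retention_days)
    return volume.purge_after
```

The point of the design is that destruction becomes a two-step act: protection must be disabled explicitly, and even then snapshots only age out after the retention window.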
by CodesInChaos
|
May 6, 2026, 11:52:53 AM
First, no matter what you do, if a human has write access to the production database, the database can be deleted.<p>Second, there is a legitimate reason to destroy a database in development and automation. The biggest problem I see is often treating your development data like pets not cattle. You absolutely need to have safeguards that this cannot be run in production, but if a human has access to the credentials to run in production, the agent has access.<p>So, then, what do we do? In a larger organization, we can depend on the dev/ops split to maintain this. For a solo developer, or a small team, it takes a lot more discipline. Even before AI, junior and even mid-level developers didn't have the knowledge to segment. And senior devs often got complacent because they thought they knew enough.<p>They likely need some combination of <a href="https://www.cloudbees.com/blog/separate-aws-production-and-development-accounts" rel="nofollow">https://www.cloudbees.com/blog/separate-aws-production-and-d...</a>, introduction to terraform, introduction to GitHub actions, and some sort of vm where production credentials live (and AI doesn't!)<p>But at that point you're past vibe coding. And from what I can tell, the successful vibe coders are quickly learning that they need to go past it pretty quickly with all these horror stories.
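One small piece of that discipline can be sketched as an environment guard inside destructive dev tooling (the `DEPLOY_ENV` variable is an assumed convention, not any particular tool's):

```python
import os

def require_non_production(task_name):
    """Hard-stop a destructive dev task if production environment
    markers are present, regardless of who (or what) invoked it."""
    env = os.environ.get("DEPLOY_ENV", "development")
    if env == "production":
        raise RuntimeError(f"refusing to run '{task_name}' against production")
    return env
```

It's crude, but it makes "the agent has access to whatever the human has access to" fail loudly instead of silently.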
by ebiester
|
May 6, 2026, 11:52:53 AM
This is why you don’t hire interns! They can delete things and cause havoc!<p>The same people who would blame AI for their failing to properly configure permissions would also blame interns for deleting production whatever.<p>Blame should go up, praise should go down. People always invert these.
by edot
|
May 6, 2026, 11:52:53 AM
I recently wrote a blog post where I argued that there are a few principles we should consistently follow when talking about AI: <a href="https://susam.net/inverse-laws-of-robotics.html" rel="nofollow">https://susam.net/inverse-laws-of-robotics.html</a><p>To summarise them:<p>1. Do not anthropomorphise AI systems.<p>2. Do not blindly trust the output of AI systems.<p>3. Retain full human responsibility and accountability for any consequences arising from the use of AI systems.<p>I would like to see the language around AI become less anthropomorphic and more technical. I believe that precise language encourages clear thinking and good judgement. If we treat AI like another tool and use language that reflects that, it will become abundantly obvious that in many cases, the responsibility of any 'mistake' made by the tool falls on the user of the tool.<p>But alas, ideas like this do not travel very far when I express them on my small website. It would help if more prominent personalities articulated these principles, so they become more widely adopted.
by susam
|
May 6, 2026, 11:52:53 AM
What's interesting is that in this article, the author describes making an understandable mistake (accidentally deleting Trunk aka main from source) and how their team was able to easily recover from that due to the nature of SVN.<p>The actual "AI deleted my database" story is really more of a "Railways' database 'backup' strategy is insane and opaque and Railway promoting AI infrastructure orchestration without guardrails is dangerous."<p>If removing Trunk had irrevocably deleted it from a single centralized server and also deleted any backups of it, there would have been an "SVN and the CLI destroyed our company" article back then.<p>As a Railway user, I appreciated that information and have changed my strategy when using them.
by ericskiff
|
May 6, 2026, 11:52:53 AM
There’s nuance to the infamous PocketOS incident. The key point is not what is emphasized in the linked article:<p>> "Why did you delete it when you were told never to perform this action?" Then he tried to parse the answer to either learn from his mistake or warn us about the dangers of AI agents.<p>Rather, that the AI was able to carry out the deletion by finding and exploiting an unintended weakness in the sandboxed staging environment, ultimately obtaining permissions that the sysadmins believed were inaccessible (my impression is that the author of the linked article didn't fully read the original post)¹<p>The dynamics are typical of an improperly configured sandbox environment. What is alarming, however, is the degree of autonomy and depth of exploration the AI displayed.<p>¹="To execute the deletion, the agent went looking for an API token. It found one in a file completely unrelated to the task it was working on."
by pizza234
|
May 6, 2026, 11:52:53 AM
Some details from the original post for context:<p>They had a Railway token in an unrelated file (unclear if it was a local secret) for managing custom domains. It turns out that token has full admin access to Railway.<p>The AI deleted a single relevant volume by id. The author is rather vague about what exactly they asked it to do; they just say there was a “credentials mismatch” and Claude took the initiative to fix it by deleting the volume. But it’s likely that they are somewhat downplaying their culpability by being vague.<p>It turns out too that Railway stores backups in the same volume.<p>I think that OP is exaggerating with their references to “a public API that deletes your database”.<p>I’d say most of the blame lies with Railway here; regardless of AI, this could have happened easily due to human error or malicious intent too.<p>I really don’t get the value of all these VC funded high-abstraction cloud services like Railway, Vercel, Supabase… It’s markup on top of markup. Just get a single physical server at Hetzner and it will all be so much cheaper, with a similar level of complexity and danger, and less dependent on infra built with a reckless growth-at-all-costs mentality.
by oersted
|
May 6, 2026, 11:52:53 AM
LLM based probabilistic systems are good (or bad in this case) at deciding what to do, and deterministic systems are good at carrying it out. Your deployment system should always be deterministic.
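A minimal sketch of that split, assuming the LLM emits its plan as plain data and a deterministic executor validates it against an allowlist (all names are illustrative):

```python
# Destructive operations are deliberately absent from the allowlist.
ALLOWED_ACTIONS = {"deploy", "restart", "scale"}

def execute_plan(proposed_actions):
    """Deterministic executor: an LLM may *propose* a plan, but only
    allowlisted operations are ever carried out, and a single
    disallowed op rejects the whole plan."""
    disallowed = [a["op"] for a in proposed_actions if a["op"] not in ALLOWED_ACTIONS]
    if disallowed:
        raise ValueError(f"refusing plan, disallowed ops: {disallowed}")
    return [f"ran {a['op']} on {a['target']}" for a in proposed_actions]
```

The probabilistic part is free to be wrong about <i>what</i> to do; the deterministic part guarantees the blast radius of being wrong.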
by oatlgr
|
May 6, 2026, 11:52:53 AM
The one counterpoint I'd offer is that it's very obvious that these companies are tuning LLMs to be more decisive to get stuff done autonomously.<p>If they wanted, they could be putting in similar efforts to be more cautious and stop at the right times to ask for help.<p>So yeah, of course we're ultimately responsible for how we use the tools. But I definitely think it's a two way street.<p>To attempt an analogy, it's like table saws and sawstops. The table saw is a dangerous tool that works really well most of the time but has some failure modes that can be catastrophic. So you should learn how to use it carefully. But there is tech out there that can stop the blade in an instant and turn a lost finger into barely a nick on the skin.<p>We could say "The table saw didn't cut off your finger, you did" and it'd be true. But that doesn't mean we shouldn't try to find ways to keep the saw from cutting off your finger!
by Brendinooo
|
May 6, 2026, 11:52:53 AM
The most exasperating thing about the incident is how much of the media tried to pin it on AI and/or Railway. The whole thing only took place because the guy FAFO’d by having AI work with prod directly.<p>Yet the narrative was mostly not about accountability for him. If I was a dumbass and deleted prod and wrote a post about it, nobody would care. Put an AI in there and all of a sudden it’s newsworthy. Ridiculous.
by mattgreenrocks
|
May 6, 2026, 11:52:53 AM
Maybe the reason this is so controversial is that people have stopped thinking about "AI" as a bunch of software, just like any other software. If that's you, stop while you still can, you've swallowed a nasty hook and your agency is on the line.
by blurbleblurble
|
May 6, 2026, 11:52:53 AM
Yes, of course any company is responsible for what they ship, regardless of what tools were used to develop it.<p>However, at least in the US, it is usual for companies to advise against use of their products in a way that may cause harm, and we certainly don't see that from the LLM vendors. We see them claim the tech to be near human level, capable of replacing human software developers (a job that requires extreme responsibility), and see them withholding models that they say are dangerous (encouraging you to think that the ones they release are safe).<p>Where are the warnings that "product may fail to follow instructions", and "may fail to follow safety instructions"? Where is the warning not to give the LLM agency and let it control anything where there are financial/safety/etc consequences to failure to follow instructions?
by HarHarVeryFunny
|
May 6, 2026, 11:52:53 AM
The article author did not even bother to read the article they were replying to. Otherwise he would have noticed that the main points the OP was complaining about were not about the agent, but about the hosting provider: an API that allows destructive operations easily, tokens with no scopes, backups stored in the same volume as the main data, etc. So this article is actually agreeing with the complaints of the original article, just more generically and without spending much effort on it, in a tone that implies the original article's writer is an idiot.
by p91paul
|
May 6, 2026, 11:52:53 AM
Yes, the problem was having a system where the AI could delete the database.
by bluejay2387
|
May 6, 2026, 11:52:53 AM
I've made the same exact SVN mistake. My first week in my first Software Engineering job, accidentally deleted trunk and my team lead had to scramble to fix my mistake.<p>I will always remember how he told me "Don't worry, it happens fairly often".
by mobeigi
|
May 6, 2026, 11:52:53 AM
> The terms we use, like "thinking" and "reasoning," may look like reflection from an intelligent agent. But these are marketing terms slapped on top of AI.<p>One of my AI epiphanies was the realization that when an AI task takes 5 minutes, it's not that it takes 5 minutes to run, it's that you're waiting in a queue for the first 4:45.<p>It's especially maddening because the queues are poorly implemented, and will drop your request if the frontend loses focus.
by bsimpson
|
May 6, 2026, 11:52:53 AM
This is an old automation lesson in a new costume. The tool that makes correct work faster also makes unsafe work faster unless the boundaries are real.
by deferredgrant
|
May 6, 2026, 11:52:53 AM
Sounds like the author didn't even read the postmortem. At no point did the business owner imply that they bore no responsibility. Rather, they pointed out that deleting a database volume *also deleted every single backup.*<p>That's a pretty nefarious edge to cut yourself on. AI has nothing to do with Railway's awful API surface here.
by adamtaylor_13
|
May 6, 2026, 11:52:53 AM
This reminds me of a James Micken's quote from "This World of Ours" in response to security people admonishing users for clicking links in email:<p><pre><code> "It’s not clear what else there is to do with computers besides click on things..." </code></pre> If you have an API with exposed endpoints, it's not clear to the AI bot what else there is to do with the API besides call the endpoints.
by seethishat
|
May 6, 2026, 11:52:53 AM
The whole "AI deleted my database" fiasco is being looked at the wrong way. Why did tooling have access to alter or drop? Why did tooling, in any way, have more permissions than were minimally necessary to do the job?<p>Decades ago we embraced POLA. What happened to basic hygiene? Sure, the agent "screwed up", but it never should have had this access in the first place.
by cik
|
May 6, 2026, 11:52:53 AM
> Automation helps eliminate the silly mistakes that come with manual, repetitive work. We could have easily gone around asking "Why didn't SVN prevent us from deleting trunk?"<p>Do both? Question bad design and then do whatever you can to work around it.<p>As an example, that's why flags like this make sense, even if it's a pretty specific use case and there won't be many people using that option at all, preventing stupid default behavior is a good idea: <a href="https://superuser.com/a/742735" rel="nofollow">https://superuser.com/a/742735</a>
by KronisLV
|
May 6, 2026, 11:52:53 AM
[deleted]
May 6, 2026, 11:52:53 AM
From 'the hacker did it' we have moved to 'the AI did it'. The problem set is roughly the same.
by jacquesm
|
May 6, 2026, 11:52:53 AM
Just skip straight to the Twitter post, it's way better than this secondary article.<p><pre><code> "We had no idea — and Railway's token-creation flow gave us no warning — that the same token had blanket authority across the entire Railway GraphQL API, including destructive operations like volumeDelete" [...] "Railway's volume backups are stored in the same volume." </code></pre> Idk how this is anyone else's problem but Railway. Same could happen with a human user.
by traderj0e
|
May 6, 2026, 11:52:53 AM
So, the question: “why does a public-facing API that can delete your database even exist?”<p>If you've worked in cloud environments, you know every database has a public-facing API that can delete it.<p>For the rest of it - yeah, running autonomous pipelines in production which decide what to run and what not to run seems fine until it isn’t.<p>But every database deployed in a cloud environment has an API that can delete it. Even if you say you’re running on VMs - there exist APIs that can delete the disk, the VM, the network config, etc.
by orochimaaru
|
May 6, 2026, 11:52:53 AM
AI companies: here's a knife so you can automate bread cutting!<p>User: I tried to cut some bread and it cut my finger instead.<p>AI companies: not my problem!<p>HN: The AI didn't cut your finger, you did, idiot.
by pier25
|
May 6, 2026, 11:52:53 AM
"move fast and break things" only sounds good when it's not breaking things in a serious and unfixable way. Maybe we shouldn't take hype mantras as instructive means to an end.
by yabones
|
May 6, 2026, 11:52:53 AM
I think this goes to a broader point: developers aren't necessarily hired to write code.<p>They're hired to be responsible for some part of the product.<p>Introducing AI doesn't remove that responsibility.<p>Folks tend to focus on the code and the tools they're using (maybe I'm cynical from years in the industry). I don't think your boss wants to do your job, even if they could use AI to do it. I think your boss wants to have a headcount, and he wants the headcount to be responsible for the product.
by __mharrison__
|
May 6, 2026, 11:52:53 AM
I believe this is in response to PocketOS. When I read the original post, I was trying to figure out how they even built a workflow that had AI so close to the self-destruct button. This post's explanation about it probably being fully vibe-coded makes sense. How else would the system be so fragile and for the agent to have such far reach? They built a house of cards.
by robeym
|
May 6, 2026, 11:52:53 AM
Doesn't matter. He did it through the use of AI, and the AI, despite being explicitly told otherwise, deleted the database.<p>Both: he should learn his lesson AND the AI should not be trusted.
by coldtea
|
May 6, 2026, 11:52:53 AM
The problem with asking an LLM for "its reasoning" after the fact, is that any justification it might give is a post-hoc rationalisation rather than a pre-meditated reason.
by andrewaylett
|
May 6, 2026, 11:52:53 AM
This applies to all infra.<p>Why can you delete a network load balancer that is still getting traffic?<p>Why can you delete a VM that is getting non-trivial network traffic?<p>Why can you delete a database that has sessions / requests in the last hour?<p>Why can you drop a table that has queries in the last hour?
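Each of these can be sketched as a quiet-period rule the control plane enforces before honoring a delete (illustrative, not any provider's actual behavior):

```python
import datetime

def safe_to_delete(last_activity, quiet_period=datetime.timedelta(hours=1)):
    """Refuse to delete a resource (LB, VM, database, table) that has
    seen traffic or queries within the quiet period."""
    if last_activity is None:
        return True  # never used, or explicitly acknowledged as dark
    now = datetime.datetime.now(datetime.timezone.utc)
    return now - last_activity >= quiet_period
```

A provider could still offer a force-delete escape hatch, but making the quiet-period check the default turns "delete the live database" from one API call into a deliberate override.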
by lokar
|
May 6, 2026, 11:52:53 AM
Why do we even have that lever?
by tantalor
|
May 6, 2026, 11:52:53 AM
"Can't blame your tools" doesn't apply the same to software. I've never heard a coder say it either. Don't blame your compiler? Don't blame your os? These seem needlessly dogmatic
by bcjdjsndon
|
May 6, 2026, 11:52:53 AM
Don’t put all your eggs in one basket.<p>It actually helps. I do copy backups to another place as well. One backup is good, but two is better.
by ianberdin
|
May 6, 2026, 11:52:53 AM
Eagerly waiting for all data centers to shut down and close completely, but it's not gonna happen, I guess.
by aryan_kalra12
|
May 6, 2026, 11:52:53 AM
Yeah, this isn't even the worst thing I've seen an agent do. One time I (foolishly) ran Claude Code on my server directly and it managed to completely bring down my entire Elasticsearch cluster. Never again. It's why I built Lily: <a href="https://github.com/aspectrr/lily" rel="nofollow">https://github.com/aspectrr/lily</a>
by aspectrr
|
May 6, 2026, 11:52:53 AM
I don't think the author is right. I think it's more or less fine to let trusted people have permission to do bad things, because trying to figure out ahead of time what things are bad is impossible and default-denying stuff is a productivity killer. The obvious answer here is that your AI agent shouldn't be you. It shouldn't have the same permissions as you, and <i>that</i> is the mistake, because you're handing the keys to the car to a drunk.<p>>Why does a public-facing API that can delete all your production databases even exist?<p>Because it takes time and effort to build an API, and even if you build an API with a structured permission system so that only an admin can delete stuff, the users probably won't spend the effort to use it. Because they're running a rental car SAAS business, not a mission critical mars mission.<p>The best I can say is that with the advent of AI these choices <i>could</i> be different now, but I don't think they will be. I think fundamentally a fuck up every few months at a rental car SAAS company in exchange for 30% higher velocity/30% lower cost is probably fine.
by Traster
|
May 6, 2026, 11:52:53 AM
You didn't add a delete_database tool to your agent? You some kind of chicken?
by alansaber
|
May 6, 2026, 11:52:53 AM
agree with the title, but who reads articles nowadays? - titles and comments are the life for me
by ghstinda
|
May 6, 2026, 11:52:53 AM
The issue isn't that there is a delete endpoint (realistically, there will always be a way for a rogue actor to delete data or code by overwriting it, or running a Terraform destroy, or whatever).<p>The core issue is that the LLM had access to perform that action. Because it's by definition non-deterministic, and you never know what it will decide to do, you need strict guardrails to ensure it can never do something it shouldn't. At the very least, strict access controls; ideally something more detailed that can evaluate access requests, provide just-in-time, properly scoped credentials, and potentially human escalation.
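A minimal sketch of such a gate, assuming every agent tool call is mediated by a policy that always escalates destructive operations to a human (the action names are illustrative):

```python
# Operations that must never run on an agent's own authority.
DESTRUCTIVE = {"volumeDelete", "dropTable", "deleteProject"}

def gate_tool_call(action, token_scopes):
    """Evaluate an agent's requested action against its token's scopes.
    Destructive actions always escalate to a human, even if the token
    would technically permit them."""
    if action in DESTRUCTIVE:
        return "escalate"
    if action not in token_scopes:
        return "deny"
    return "allow"
```

The key property is that the escalation check comes before the scope check, so an over-privileged token (like the blanket Railway one) still can't let the agent delete anything unattended.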
by sofixa
|
May 6, 2026, 11:52:53 AM
yeah, that's all great, but at least an intern will ask themselves whether deleting a database is a good idea. The AI does not "understand" that.
by overmachine
|
May 6, 2026, 11:52:53 AM
AI is just another tool. We humans are still responsible for how we choose to use the tool, which includes giving it access to perform sensitive actions like manipulating production data. I think this should be common sense by now, but I guess we get carried away and anthropomorphize AI too much.
by docheinestages
|
May 6, 2026, 11:52:53 AM
Tesla FSD didn't crash your car, you did
by louiereederson
|
May 6, 2026, 11:52:53 AM
When I was setting up the generation pipeline for my own AI project, I had to hardcode strict.
by jacktu
|
May 6, 2026, 11:52:53 AM
<p><pre><code> When AI makes no mistakes: "My work is 100% done with AI". When AI makes a mistake and deletes your database: "That was a human error, the AI did not do it!" </code></pre> In both cases YOU are responsible for the mistakes and output that the AI is generating, just like with Autopilot on a Tesla vehicle: YOU are responsible for operating the vehicle when using assisted driving.
by rvz
|
May 6, 2026, 11:52:53 AM
> Automation helps eliminate the silly mistakes that come with manual, repetitive work.<p>Sometimes it does that. And sometimes it lets you fuck things up at scale.
by hoistbypetard
|
May 6, 2026, 11:52:53 AM
Did you read the article? It called the Railway API to delete. The user did not add a production endpoint to delete their db
by hengar
|
May 6, 2026, 11:52:53 AM
This has been covered elsewhere, but if you swear at Claude, Anthropic will automatically bump you down into a lower quality model. It was found in the recent source code leak of Claude Code. So that's probably what happened to the guy whose Cursor deleted his entire production database.<p>It just goes to show, if you're a jerk, expect to be treated like one (even by an AI model)! Be polite, people.
by iamleppert
|
May 6, 2026, 11:52:53 AM
This particular case was extremely unsympathetic, but a critical part of the failure was people being too credulous about the claims of AI providers. They are still refusing to take adequate responsibility for AI "making mistakes" - that is, going completely off the rails.<p>Now: the CEO gets paid the big bucks and has the least direct accountability, very much because it's their job to take responsibility for people more powerful than them, and likewise the CTO with major commercial software contracts like a Claude subscription. That's why this guy was so hard to take seriously: okay fine, you got burned by Anthropic, stop being a baby about it. Take responsibility for not listening to the critics.<p>But - to be a little more neutral about my personal distaste - I do think vibe coders are making a very similar mistake to C developers throughout the 90s, where problems with the tooling were not merely dismissed, but actively valorized.<p>Real Devs use buffers freely and don't make overflow errors.<p>Real Devs use hands-free agentic development and don't delete production databases.
by LeCompteSftware
|
May 6, 2026, 11:52:53 AM
[deleted]
May 6, 2026, 11:52:53 AM
wiring up an RNG to your CLI has fairly obvious risks, the root of the problem is ~everyone's treating GenAI as if it's AGI - the rest is popcorn fodder.
by pengaru
|
May 6, 2026, 11:52:53 AM
"Dog ate my homework"
by KingOfCoders
|
May 6, 2026, 11:52:53 AM
"Smoking didn't give you cancer, you did."
by 1970-01-01
|
May 6, 2026, 11:52:53 AM
“Expert” that does not know what a Terraform is. lol, lmao even
by proxysna
|
May 6, 2026, 11:52:53 AM
Distinction without a difference.
by gowld
|
May 6, 2026, 11:52:53 AM
This is missing the point.<p>The issue isn't with the amount of guardrails in place to perform an action. Yes, it is obvious that there should be some in place before doing any critical operation, such as deleting a database.<p>The issue is that the "agent" completely disregarded instructions, which in the age of "skills" and "superpowers" seems like an important issue that should be addressed.<p>Considering that these tools are given access to increasingly sensitive infrastructure, allowed to make decisions autonomously, and are able to find all sorts of loopholes in order to make "progress", this disaster could happen even with more guardrails in place. Shifting the blame on the human for this incident is sweeping the real issue under the rug, and is itself irresponsible.<p>There are far scarier scenarios that should concern us all than losing some data.
by imiric
|
May 6, 2026, 11:52:53 AM
anyone with twenty years of devops experience is likely to abhor Diallo's hot take, and for good reason.<p>AI is being sold as a developer, as it is being sold as the do-everything alternative to traditional processes and methods. it is not being sold as an intern or a junior, but as a <i>real developer.</i><p>turning the tables and gaslighting devops professionals into believing the issue isn't an emerging technology with overwhelmingly heavy-handed marketing and a profitless operating strategy that's been shoehorned into seemingly everything and promises anything, but somehow <i>their own oversight</i>, will destroy whatever "vibe code" market you think you have at the cusp of a global recession.<p>had this AI been a real programmer, chances are great they would have (intelligently) foreseen the possibility of damaging a production environment and asked for help.<p>to play devil's advocate: you could hire a junior dev for a fourth of whatever the AI token spend is, and have likely avoided this issue entirely. sure, a greybeard is going to need to pull themselves away from some fierce sorting-algorithm challenge for a second to give a wizened nod, but you would have saved yourself an untold amount of headache and profit loss in the longer run.
by nimbius
|
May 6, 2026, 11:52:53 AM
"If those kids could read they'd be very upset"
by gigel82
|
May 6, 2026, 11:52:53 AM
[flagged]
by jimmypk
|
May 6, 2026, 11:52:53 AM
[flagged]
by shutterkiller
|
May 6, 2026, 11:52:53 AM
[flagged]
by hiroto_lemon
|
May 6, 2026, 11:52:53 AM
[flagged]
by Knowledgee_KZA
|
May 6, 2026, 11:52:53 AM
[flagged]
by golikovichev
|
May 6, 2026, 11:52:54 AM
[dead]
by nickchen778
|
May 6, 2026, 11:52:54 AM
[dead]
by rcdwealth
|
May 6, 2026, 11:52:54 AM
Using AI is a mistake. It might delete your database.
by josefritzishere
|
May 6, 2026, 11:52:54 AM
The article is dumb. "Why do you have an API endpoint that deletes your entire production database?" is irrelevant; the AI did what it did, period.
by newsoftheday
|
May 6, 2026, 11:52:54 AM
"the car didn't malfunction and crash, you should have been able to predict the vapor lock situation yourself!"<p>"it's not a poorly designed trigger, you just had poor gun handling and discipline"
by red-iron-pine
|
May 6, 2026, 11:52:54 AM