A hilarious and painful tale of a 'pretend IT' guy who unplugged a critical police security system and ignored all warnings, resulting in a $2,500 hotfix.

Hey folks, if you think debugging legacy spaghetti code or having a VPS drop at 3 AM is the worst thing in tech, you haven't faced the absolute menace of combining clueless end-users with an "Infuriating Technician." We all deal with outages, but an outage this comically absurd deserves a spotlight. Grab your coffee, because today we're dissecting a $2,500 hotfix that consisted entirely of... plugging a cable back into a wall.
Our story begins with the OP, who works for a company making wireless, network-connected police radio alarm systems. We're talking high-end, life-safety equipment deployed in courthouses, hospitals, and schools. This bad boy features a months-long battery backup and is built like a tank.
The power supply plugs into a standard wall outlet, but because people like to meddle, the plug is locked down with a special security screw driven into the outlet plate. It's hardened, locked, and heavily monitored by both IT and the police. A true 'set it and forget it' setup.
Fast forward a few months post-installation. The client calls OP: "Hey, the system quit working. Fly in and fix it." Flying a tech out for a routine check isn't cheap—it's a $2,500 flat charge. OP gets on the plane, ready for some intense troubleshooting.
OP arrives, unlocks the highly secure IT closet, and spots the issue instantly. No hardware failure, no subtle network fault buried in packet captures. The system was just... unplugged.
Some absolute mastermind in the IT department had taken the time to remove that special security screw, unplug OP's critical life-safety device, and plug something else into the socket.
But wait, the plot thickens. Because the device had a massive battery backup, it didn't just quietly die. For two solid months, it ran on battery and broadcast an automated voice message over the police radio every single hour: "System is on battery power."
The cops' reaction? They assumed hearing it every hour meant the system was working flawlessly. Meanwhile, the client's IT department had actively ignored every automated email screaming that the device had lost grid power.
The post racked up massive upvotes, and the dev/sysadmin community had a field day roasting the culprits.
1. The 'Idiot-Proof' Dilemma: Many devs pointed out that if an error message isn't idiot-proof, it's useless. One highly upvoted comment stated: "An error message that can't be understood by the people it's addressed to is the same as no error message at all." The consensus was that the voice prompt should have screamed: "THIS IS A PROBLEM PLEASE FIX IT OR YOU WILL LOSE CONNECTION!!!!!!" (A sketch of that contrast follows this list.)
2. The "Pretend IT" Roast: While the community gave the non-technical cops a slight pass, they showed zero mercy to the IT department for ignoring the warning emails. One user ruthlessly commented: "That's not IT. That's someone employed to pretend to be IT." Another dubbed them the "Infuriating Technician," while my personal favorite roast was: "Spelled 'Google' correctly two out of three times on their resume."
To wrap this up, if you work in infra or dev, etch this into your brain: Never overestimate the competence of the human layer (the meatware), even when "IT" is in the job title.
You can build the most robust system, write the cleanest failover logic, and set up extensive logging and monitoring, but if the humans on the other side are incompetent, your high-tech solution is just an expensive paperweight. Make your alerts annoying, explicit, and impossible to misinterpret.
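The post doesn't describe the vendor's alerting stack, but as a rough sketch of what "annoying, explicit, and impossible to misinterpret" can look like in practice, here's one hypothetical escalation policy: the longer an alert goes unacknowledged, the noisier the channel and the shorter the repeat interval. The channels and thresholds below are all invented for illustration.

```python
# A hypothetical escalation policy for a power-loss alert. Channels and
# thresholds are invented; nothing here reflects the vendor's real system.

from dataclasses import dataclass

@dataclass
class EscalationStep:
    after_hours: float     # hours on battery before this step takes over
    channel: str           # where the alert goes
    interval_hours: float  # how often it repeats at this step

# The longer the alert goes unacknowledged, the louder it gets.
ESCALATION_POLICY = [
    EscalationStep(after_hours=0,   channel="email to IT",                 interval_hours=24),
    EscalationStep(after_hours=24,  channel="SMS to IT on-call",           interval_hours=4),
    EscalationStep(after_hours=72,  channel="phone call to IT manager",    interval_hours=1),
    EscalationStep(after_hours=168, channel="vendor ticket + radio alert", interval_hours=1),
]

def current_step(hours_on_battery: float) -> EscalationStep:
    # Pick the most severe step whose threshold has been crossed.
    return [s for s in ESCALATION_POLICY if hours_on_battery >= s.after_hours][-1]

def send_alert(step: EscalationStep, hours_on_battery: float) -> None:
    # Stand-in for a real notification call (SMTP, SMS gateway, etc.).
    print(f"[{step.channel}] On battery for {hours_on_battery:.0f}h. "
          f"Restore wall power or the system WILL go offline.")

if __name__ == "__main__":
    # Key moments in a simulated outage: each threshold crossing gets louder.
    for h in (0, 24, 72, 168):
        send_alert(current_step(h), h)
```

The exact numbers don't matter; what matters is that a two-month outage never gets to settle into an ignorable hourly hum the way the real one did.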
On the bright side, because of one rogue IT guy's stupidity, OP got paid $2,500 just to unplug the interloping device, plug his own back in, and tighten a screw. Easiest money ever made. God bless the tech industry!
Source: Reddit - r/talesfromtechsupport