An IT guy took to Reddit to vent after his Head of Sales dumped client names, pricing, and home addresses into ChatGPT just to "polish wording."

Everyone says AI is going to take our jobs, but before that happens, I bet the non-tech folks in our companies will hand over the entire corporate kingdom to OpenAI first. Grab a coffee, let me tell you a story.
The story starts with an IT guy who took to Reddit to vent about a mind-numbing encounter with his company's Head of Sales.
She was proudly showing off how she uses ChatGPT to polish her client emails. Sounds neat, right? Until OP glanced at her prompts.
Holy mother of data leaks! The prompts included the full combo: client names, deal sizes, internal pricing strategies, and the absolute cherry on top... a client's HOME ADDRESS.
OP cautiously asked, "Hey, do you think that counts as sharing sensitive data?" She looked at him like he was missing half his brain and replied, "No, I'm just asking for help with wording."
Yup, in her mind, if the intent wasn't data sharing, then the data magically isn't being shared. Flawless logic. OP concluded that security training is an absolute joke and those policy posters in the breakroom are basically just expensive wallpaper.
The post blew up, and the sysadmin/dev community had a field day in the comments. The hot takes boiled down to this:
TL;DR for my fellow code monkeys and sysadmins: You can build an impenetrable, zero-trust architecture, but you can't patch human ignorance.
Relying on security policies is a dead end. End-users treat LLMs like magical calculators. They don't realize there's a giant server farm in the background gobbling up their input to train the next model.
So, instead of arguing with a Head of Sales who makes 3x your salary, just quietly implement DLP tools. Block PII at the network or endpoint level. And most importantly, get your warnings in writing to Infosec/HR. When the inevitable data breach happens and the company gets sued, you want that paper trail to save your job. Stay safe out there!
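To make the "block PII at the endpoint" idea concrete, here's a minimal sketch of what an outbound-prompt screen might look like. This is an illustration, not a real DLP product: the regex patterns, category names, and thresholds below are all assumptions for demonstration, and commercial DLP tools do far more (contextual classification, fingerprinting, exact-data matching) than a handful of regexes ever could.

```python
import re

# Hypothetical PII categories with illustrative regex patterns.
# Real DLP rules would be tuned per-region and per-company.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "street_address": re.compile(
        r"\b\d{1,5}\s+\w+(?:\s\w+)*\s(?:St|Ave|Rd|Blvd|Lane|Dr)\.?\b",
        re.IGNORECASE,
    ),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of PII categories detected in an outbound prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def should_block(text: str) -> bool:
    """Block the outbound request if any PII category matches."""
    return bool(scan_prompt(text))
```

A hook like this could sit in a browser extension or an egress proxy in front of chat.openai.com, logging (that paper trail again) and refusing requests where `should_block` fires. The obvious weakness is false negatives: client names and internal pricing strategies don't match any regex, which is exactly why the "get it in writing" advice still matters.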
Source: Reddit