Tired of overly polite LLMs? The Caveman plugin cuts Claude's output tokens by 75% and speeds up coding by forcing it to speak unga-bunga.

Sup guys. Have you ever wanted to punch your screen because an LLM starts with "I'd be happy to help you with that" for the 100th time today? "Here is the code you requested...", "Let me summarize...". Shut up and give me the code! When you're burning the midnight oil on a tight deadline, watching an AI type out polite fluff while eating up your tokens is pure agony.
Today, I'm bringing you a magical artifact that forces your AI tools to skip the throat-clearing, get straight to the point, and more importantly... save your hard-earned API credits.
Here's the quick rundown for you lazy scrollers: A mad lad named Julius taught Claude to talk like a literal caveman. What does that mean? Instead of giving you a full-blown thesis, it throws raw keywords at your face. The result? It grabbed 24.9K stars on GitHub and became the most useful meme in developer tooling today.
Let's check out the unga-bunga features:
- Code reviews come back as blunt one-liners: `L42: 🔴 bug: user null. Add guard. No BS.`
- It translates your CLAUDE.md into caveman-speak, saving ~46% of input tokens every session.

Here's a before-and-after to show you the massive difference:
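To give you the flavor, here's roughly the kind of contrast we're talking about (my illustration, not the plugin's actual screenshot):

```text
Before (default Claude):
  "I'd be happy to help! Looking at your function, there appears to be
   a potential issue where the user object could be null, which would
   cause a runtime error. I'd recommend adding a guard clause..."

After (caveman):
  "user maybe null. add guard. done."
```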
Naturally, the internet has opinions on this "de-evolution":
You can even set up your AGENTS.md to load the caveman skill on the very first message.

Let's be real. LLMs are verbose by default to please the general public. But for us devs, "talk is cheap, show me the code." Forcing your AI to strip the polite fluff doesn't just save you money and rate limits; it actually helps you focus on the logic without drowning in text.
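I haven't checked the repo's exact wording, but the AGENTS.md hook is presumably just a standing instruction; a hypothetical entry might look like:

```markdown
<!-- AGENTS.md (hypothetical entry; check the repo for the exact wording) -->
Load the caveman skill on the very first message.
Speak caveman: keywords only. No greetings, no summaries, no recaps.
```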
There's even a wild stat (citing a supposed March 2026 paper—time travelers, eh?) claiming that brevity constraints actually improve AI accuracy by 26% on certain benchmarks. Verbose is not always better.
Bottom line: This is a must-have survival tool for anyone hitting usage limits. Let the AI do the heavy lifting, and tell it to shut up about it.
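If you want to sanity-check the "save your credits" claim, here's a back-of-envelope sketch. The 75% output cut and ~46% input cut are the article's numbers; the per-token prices and monthly volumes are placeholder values I made up, not real Claude pricing.

```python
def monthly_savings(in_tokens, out_tokens, in_price, out_price,
                    in_cut=0.46, out_cut=0.75):
    """Dollars saved per month: baseline spend times the claimed cuts.

    in_cut / out_cut default to the article's ~46% input and 75% output
    token reductions; prices are dollars per token.
    """
    return in_tokens * in_price * in_cut + out_tokens * out_price * out_cut

# Example: 10M input / 2M output tokens a month at placeholder rates.
saved = monthly_savings(10_000_000, 2_000_000,
                        in_price=3e-6, out_price=15e-6)
print(f"${saved:.2f}")
```

At those placeholder numbers the cuts are worth a few dozen dollars a month; scale the token counts to your own usage to see whether the meme pays for itself.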
Sauce: Product Hunt