Coding4Food

© 2026 Coding4Food. Written by devs, for devs.

AI & Automation | IT Drama

Ex-Manus Backend Lead Drops a Bomb: Stop Using Function Calling for AI Agents, Unix CLI Is the GOAT

March 13, 2026 · 4 min read

Meta just bought Manus, and their former lead dev took to Reddit to expose a hard truth: Bloated JSON function calling is dead. The future of AI agents is bash.

Tags: ai agent, llm, manus ai, meta, unix cli, function calling, prompt engineering


So, we all know Meta just dropped a bag to acquire Manus, the hyped AI agent startup. But right in the middle of this hype train, a former backend lead at Manus went on Reddit to drop a massive truth bomb that’s currently blowing up the dev community's minds. After two years of sweating blood over AI agents, his conclusion is delightfully chaotic: Throw your bloated structured function calling in the trash. A single run(command="...") using Unix CLI commands beats the crap out of everything else.

Sounds like heresy, right? But if you read his breakdown, it makes an infuriating amount of sense. Grab your coffee, let's break down this absolute masterclass in practical engineering.

The Plot Twist: Unix was the ultimate AI framework all along

He points out a beautifully simple parallel. Fifty years ago, the creators of Unix made one core decision: everything is a text stream. Tiny tools do one thing well, chained together by pipes (|), yelling at each other via stderr, and reporting status via exit codes.

Fast forward 50 years, and what are LLMs? Everything is tokens (text). They think in text, act in text, consume text. So why on earth are we forcing them to context-switch between a massive catalog of typed JSON API tools (search_web, read_file, send_email)?

Instead, he exposes just one tool to the LLM: run(command="..."). Need to read a log and count errors? Instead of three separate function calls, the agent just spits out: run(command="cat /var/log/app.log | grep ERROR | wc -l")
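On the harness side, that single tool can be sketched in a few lines. This is a minimal illustration, not Manus internals; the wrapper name, timeout, and stdout+stderr merging are my assumptions:

```python
import subprocess

def run(command: str, timeout: int = 30) -> str:
    """The one and only tool exposed to the LLM: execute a shell pipeline."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    # Return stdout and stderr together so failures are visible to the agent.
    return result.stdout + result.stderr

# The log-counting example becomes a single tool call:
print(run('printf "a ERROR\\nb\\nc ERROR\\n" | grep ERROR | wc -l'))
```

One tool call, one string in, one string out — the model never has to juggle a catalog of typed schemas.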

Why is the LLM so good at this? Because it was trained on billions of lines of GitHub repos, CI/CD scripts, and Stack Overflow dumps. You don't need to teach an LLM bash—it's already the ultimate terminal power user.

How to spoon-feed an AI without blowing up your context window

You can't just give an AI a terminal and expect magic. It can't Google things when it gets stuck. The dev used three "heuristic" tricks to make the CLI guide the agent naturally:

1. Progressive --help discovery. Don't stuff a 3,000-word API doc into the system prompt; that's wasted context. Start by injecting a simple list of available commands. If the agent calls memory without arguments, the system throws an error showing the subcommands: usage: memory search|recent|store. The agent learns on the fly, drilling down only when needed.
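A tiny sketch of that just-in-time discovery (the memory command and its subcommands come from the post; the dispatcher wiring is hypothetical):

```python
# Hypothetical dispatcher: a bare `memory` call returns usage, not a doc dump.
SUBCOMMANDS = {"search", "recent", "store"}

def memory(args: list[str]) -> str:
    if not args or args[0] not in SUBCOMMANDS:
        # Teach the agent just-in-time instead of bloating the system prompt.
        return "usage: memory search|recent|store"
    return f"ok: memory {args[0]}"  # a real handler would dispatch here
```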

2. Error messages as a GPS. Traditional CLI errors are meant for humans. For agents, every error must include the fix. Agent tries cat photo.png? Instead of tokenizer garbage, the system intercepts and says: [error] binary image. Use: see photo.png. The agent corrects itself in the very next step.

3. Pavlovian output formatting. Append a footer like [exit:0 | 12ms] to the end of every output. The LLM quickly internalizes that exit:1 means it messed up and 45s means the query was expensive, naturally making it smarter about resource usage.
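The footer is trivial to bolt onto the run wrapper — a sketch, assuming the same subprocess-based harness:

```python
import subprocess
import time

def run_with_footer(command: str) -> str:
    """Run a command and append the [exit:N | Nms] status footer."""
    start = time.monotonic()
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    elapsed_ms = int((time.monotonic() - start) * 1000)
    # Exit code + latency in every observation: the model learns the pattern.
    return f"{result.stdout}{result.stderr}[exit:{result.returncode} | {elapsed_ms}ms]"
```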

Production War Stories (Or: How my agent lost its mind)

To make this work without breaking Unix pipes, you need two layers: Layer 1 (pure Unix execution) and Layer 2 (LLM presentation). And the production horror stories prove why:

  • The PNG Dumpster Fire: A user uploaded an architecture diagram. The agent tried to read it with cat. 182KB of raw PNG bytes were fed into the tokenizer, generating pure garbage tokens. The agent lost its mind and hallucinated 20 retries before crashing. Fix: Binary guards.
  • 10 Blind Retries: The system was silently dropping stderr if stdout had any content. The agent failed to pip install a package, couldn't see the command not found error, and blindly guessed 10 different package managers like an idiot. Fix: stderr is the holy grail. Never drop it.
  • The 5000-line Context Nuke: Reading a massive log file pushed the entire conversation history out of the context window. Fix: Truncate output to 200 lines, save the rest to a temp file, and tell the agent "Hey, it's truncated. Use grep on this temp file to find what you need."
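The truncation fix from the last war story can be sketched like this — the 200-line cap is from the post, while the temp-file handling and message wording are my assumptions:

```python
import tempfile

MAX_LINES = 200  # cap from the post

def truncate_output(output: str) -> str:
    """Keep the first 200 lines; spill the rest to disk for later grepping."""
    lines = output.splitlines()
    if len(lines) <= MAX_LINES:
        return output
    # Save the full output so the agent can grep it instead of re-reading it all.
    with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
        f.write(output)
    head = "\n".join(lines[:MAX_LINES])
    return (f"{head}\n[truncated {len(lines) - MAX_LINES} lines. "
            f"Full output saved to {f.name}; use grep on it]")
```

Short outputs pass through untouched; anything longer gets a head plus a pointer, so the conversation history stays inside the context window.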

The Reddit Hivemind Reacts

The thread is a goldmine of devs having "aha" moments:

  • spaceman_ pointed out that Hugging Face's Smolagents did something very similar but restricted the agent to purely writing Python code.
  • johnbbab noted the irony: "The most powerful agent framework might end up looking exactly like the shell."
  • raucousbasilisk nailed it: "JIT natural language to sed awk regex was the true superpower all along."

The TL;DR for your next project

Sometimes, as devs, we love over-engineering things. We build massive, bloated JSON schemas for our AI agents to consume, completely forgetting that the bearded Unix wizards from the 70s already solved the "chaining small tools together via text" problem.

If you're building an agentic workflow, give this CLI approach a shot. You can even grab a free $300 Vultr VPS trial credit to spin up a sandbox environment and let your LLM go wild with bash commands.

Source: Reddit - I was backend lead at Manus. After building agents for 2 years, I stopped using function calling entirely.