Coding4Food

© 2026 Coding4Food. Written by devs, for devs.

AI & Automation | Technology

Ollama v0.19 Drops MLX Bomb: Apple Silicon Users, It's Time to Flex

April 2, 2026 · 2 min read

Ollama v0.19 is here with a native MLX rewrite, turning your M-series Mac into a local AI beast. Let's see if the hype is real or just marketing fluff.

ollama v0.19 · mlx · apple silicon · local ai · mac m-series · nvfp4 · kv cache

Comments

Related posts

AI & Automation | Dev Life

Got roasted by YC Partner, founder drops DenchClaw: The 'Next.js' of Local AI CRM

Tired of cloud APIs draining your wallet? DenchClaw is a locally hosted AI CRM on OpenClaw that acts like Cursor for your entire Mac. Time to test it.

Mar 26 · 3 min read
Technology | Tools & Tech Stack

Can I Run AI Locally? Or Will My Rig Catch Fire?

Want to run local LLMs to escape corporate AI APIs? Check out CanIRun.ai first to see if your rig can handle it, or if it'll just melt your GPU.

Mar 14 · 3 min read
AI & Automation | Tools & Tech Stack

LTX Desktop: The 'Free' Local AI Video Editor That Demands a 32GB VRAM Sacrifice

LTX Desktop promises free, open-source, on-device AI video editing. Sounds amazing until you read the hardware requirements. Let's spill the tea.

Mar 8 · 3 min read

What's up, fellow code monkeys? If you’ve been melting your Mac trying to run local AI models, put down the fire extinguisher. Ollama just dropped v0.19, and it’s basically strapping a rocket engine to Apple Silicon. Let’s cut the marketing BS and see what’s actually under the hood.

The TL;DR: What the Hell Actually Changed?

They didn't just tweak a few configs; they overhauled the whole engine for Mac users:

  • MLX Native: They tore down the Apple Silicon inference path and rebuilt it entirely on MLX (Apple's native ML framework), so it fully exploits the unified memory architecture.
  • NVFP4 Support: NVFP4 is a 4-bit floating-point quantization format, so big models fit in a fraction of the memory with minimal quality loss. What does this mean for you? Local inference that doesn't feel like running on a potato, inching much closer to production parity.
  • Gigabrain KV Cache: The cache got a massive IQ boost. We're talking cache reuse across sessions, smart snapshots, and better eviction. No more painful cold starts when you switch coding contexts.
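To make the cache-reuse idea concrete, here's a toy sketch of a session-scoped KV cache with LRU eviction. Everything here (`SessionKVCache`, the snapshot shape, the eviction budget) is illustrative, not Ollama's actual internals:

```python
from collections import OrderedDict

class SessionKVCache:
    """Toy model of a KV cache whose snapshots persist across sessions,
    with LRU eviction past a fixed budget. Illustrative only."""

    def __init__(self, max_entries=4):
        self.max_entries = max_entries
        self._snapshots = OrderedDict()  # session_id -> cached prefix state

    def save(self, session_id, kv_state):
        # Store (or refresh) a snapshot and mark it most recently used.
        self._snapshots[session_id] = kv_state
        self._snapshots.move_to_end(session_id)
        # Evict the least recently used snapshot once over budget.
        while len(self._snapshots) > self.max_entries:
            self._snapshots.popitem(last=False)

    def resume(self, session_id):
        # Cache hit: reuse the cached prefix instead of a cold start.
        state = self._snapshots.get(session_id)
        if state is not None:
            self._snapshots.move_to_end(session_id)
        return state

cache = SessionKVCache(max_entries=2)
cache.save("coding", {"prefix_tokens": 512})
cache.save("writing", {"prefix_tokens": 128})
cache.save("review", {"prefix_tokens": 64})  # evicts "coding" (LRU)
```

The point of the LRU ordering is exactly the "no more painful cold starts" claim: the sessions you keep switching between stay warm, and only the ones you abandoned get evicted.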

The Reddit & Product Hunt Echo Chamber

I scoured the comments so you don't have to. Here's what the community is screaming about:

  • The Hype Train: People upgrading from older versions are losing their minds. Running Qwen3.5 on an M4? Devs are saying the speed difference between MLX and the old GGML backend is night and day.
  • The Agent Builders: Devs running branching workflows like Claude Code or OpenClaw are praising the tech gods. The cache reuse persists across sessions, which saves RAM and speeds up multi-turn workflows like crazy.
  • The Hardware Testers: Folks with 32GB+ unified memory MacBooks are already pulling the Qwen3.5-35B-A3B NVFP4 model and reporting buttery-smooth performance. Meanwhile, the M2 Air and 16GB Mac mini crowds are cautiously optimistic, hoping this version doesn't drown their memory like v0.18 did.
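As a back-of-the-envelope check on why the 32GB crowd is the one celebrating, here's the weight-memory arithmetic for a 35B-parameter model at different precisions. Rough numbers only: this ignores the KV cache, activations, and quantization scale overhead:

```python
PARAMS = 35e9  # 35B parameters

def weights_gb(bits_per_param):
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return PARAMS * bits_per_param / 8 / 1e9

fp16 = weights_gb(16)  # full half-precision
fp4 = weights_gb(4)    # 4-bit (NVFP4-style) quantization

print(f"FP16: {fp16:.1f} GB, FP4: {fp4:.1f} GB")
# FP16 weights alone (~70 GB) blow way past a 32 GB Mac;
# at 4 bits (~17.5 GB) they fit, with headroom left for the KV cache.
```

That 4x shrink is the whole story: the difference between "impossible on this machine" and "buttery smooth" on a 32GB MacBook.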

The C4F Verdict: Is it worth the hype?

Honestly, yes. Moving to MLX to exploit that unified memory architecture is an absolute no-brainer. If you're building local-first AI tools or just want a coding assistant without paying Big Tech for API calls every five seconds, v0.19 is a must-install.
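If you want to wire a tool against it, Ollama serves an HTTP API on `localhost:11434` out of the box. Here's a minimal stdlib-only client sketch against its `/api/generate` endpoint; the model tag `qwen3.5` is a placeholder for whatever you've actually pulled:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model, prompt):
    """Build a non-streaming generate request for the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(model, prompt):
    # Requires `ollama serve` running locally with the model already pulled.
    with request.urlopen(build_request(model, prompt)) as resp:
        return json.load(resp)["response"]

# Usage (with the server running):
#   print(generate("qwen3.5", "Explain unified memory in one sentence."))
```

No SDK, no API key, no per-token billing: that's the entire pitch of local-first.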

Takeaway? Native optimizations always win. Brute-forcing with generic backends is fine for cross-platform prototypes, but native hardware integration is where the real magic happens. Now excuse me, I have a massive model to pull before my ISP throttles me. Happy coding!


Source: Product Hunt