Running Gemma 4 Locally with MLX
In my previous post, I explored what it looks like to run coding agents with local LLMs on an M3 Ultra Mac Studio. That write-up was mostly about the environment itself: treating a single machine as something closer to personal infrastructure, capable of hosting multiple models and running them continuously.