Implementing a local AI coding agent is hard

by msvana on 9/28/2025, 1:00 PM with 1 comment

by SamInTheShell on 9/28/2025, 5:02 PM

My daily driver is an M1 MBP with 64GB of RAM. Using Ollama, LM Studio, or even just mlx-lm in Python, a model like gpt-oss:20b can produce results. It runs anywhere from 50-80 tokens/sec, so don’t expect blazing-fast edits, but it’s usable enough that you can background it with clear instructions and come back to something that isn’t complete trash.
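Those throughput numbers translate directly into wait time. A quick back-of-the-envelope sketch (the token counts below are illustrative assumptions, not figures from the comment):

```python
# At the quoted 50-80 tokens/sec, how long does one agent response take?
def eta_seconds(tokens: int, tps: float) -> float:
    """Time to generate `tokens` output tokens at `tps` tokens per second."""
    return tokens / tps

# A hypothetical ~2,000-token edit at the low and high ends of the range:
print(eta_seconds(2000, 50))  # 40.0 seconds
print(eta_seconds(2000, 80))  # 25.0 seconds
```

So a single multi-kilotoken edit lands in the tens-of-seconds range, which is why backgrounding the agent rather than watching it stream makes sense at these speeds.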