The original was posted on /r/homeassistant by /u/Ok-Nefariousness8699 on 2024-09-20 00:32:05+00:00.


If you’ve been debating between making API calls to OpenAI, Claude, or Gemini and running a local, private AI model, this is the moment to try the local route. Qwen 2.5 paired with Ollama is the first local model I’ve found reliable enough to replace the API-driven options. It handles everything smoothly, and I’ve made it my default voice assistant at home. If you’ve been waiting for a local solution that actually works, this is it!

Currently running the default 7B Q4 from Ollama:
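If you want to sanity-check the model before wiring it into Home Assistant, one quick way is to hit Ollama's local HTTP API directly. The sketch below is a minimal example, assuming Ollama is running on its default port (11434) and you've already pulled the model with `ollama pull qwen2.5` (the default tag maps to the 7B Q4 quant); the `ask` helper and the sample prompt are just illustrative.

```python
# Minimal sketch: query a locally running Ollama instance for the default
# Qwen 2.5 7B (Q4) model. Assumes Ollama is listening on localhost:11434
# and the model was pulled beforehand with `ollama pull qwen2.5`.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint


def ask(prompt: str) -> str:
    """Send a single chat turn to the local model and return its reply."""
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "qwen2.5",  # default tag resolves to the 7B Q4 quant
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # return one JSON object instead of a token stream
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]


if __name__ == "__main__":
    # Hypothetical smart-home style prompt, just to see how the model responds.
    print(ask("Turn off the living room lights."))
```

Once that works, pointing Home Assistant's Ollama integration at the same URL should give you the same model as a conversation agent.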