This is an automated archive made by the Lemmit Bot.
The original was posted on /r/homeassistant by /u/InternationalNebula7 on 2025-07-01 19:27:41+00:00.
For those of you on the fence about local LLMs with HA: Gemma 3n was released on Ollama and is absolutely amazing. I'm using gemma3n:e4b to add variety and stylize TTS announcements in Home Assistant, while gemma3n:e2b is blisteringly fast for the Voice Preview Edition (VPE). I'm getting an impressive 7.4-8.6 tokens/s on CPU-only inference (4th-gen Intel i5). Time to go local!
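
For anyone wondering what "stylizing TTS announcements" with a local model looks like, here is a minimal sketch, not the original poster's setup: it assumes a local Ollama server on its default port and a Home Assistant instance reachable with a long-lived access token. The URLs, token, and entity IDs (`tts.piper`, `media_player.kitchen`) are placeholders you would swap for your own.

```python
# Minimal sketch: ask a local gemma3n model (via Ollama's REST API) to
# restyle a plain announcement, then hand the result to Home Assistant's
# tts.speak service. URLs, token, and entity IDs below are placeholders.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
HA_URL = "http://homeassistant.local:8123"          # adjust to your instance
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"           # placeholder

def stylize(announcement: str) -> str:
    """Rewrite a plain announcement in a livelier style with gemma3n:e4b."""
    resp = requests.post(OLLAMA_URL, json={
        "model": "gemma3n:e4b",
        "prompt": (
            "Rewrite this smart-home announcement in a cheerful, varied "
            f"style. Reply with only the rewritten sentence: {announcement}"
        ),
        "stream": False,  # return the whole completion as one JSON object
    }, timeout=60)
    resp.raise_for_status()
    return resp.json()["response"].strip()

def announce(message: str) -> None:
    """Call Home Assistant's tts.speak service (entity IDs are examples)."""
    requests.post(
        f"{HA_URL}/api/services/tts/speak",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={
            "entity_id": "tts.piper",                          # your TTS entity
            "media_player_entity_id": "media_player.kitchen",  # your speaker
            "message": message,
        },
        timeout=30,
    ).raise_for_status()

if __name__ == "__main__":
    announce(stylize("The laundry cycle has finished."))
```

In a real setup you would more likely wire this up through Home Assistant's built-in Ollama integration or an automation, but the flow is the same: plain text in, restyled text out, then off to TTS.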