This is an automated archive made by the Lemmit Bot.
The original was posted on /r/homeassistant by /u/citrusalex on 2025-02-21 12:56:04+00:00.
Like many people using Home Assistant, I have a home server with a cheapo Intel Arc A380 for Jellyfin transcoding that otherwise does nothing, so I whipped up a docker compose to easily run Intel GPU-accelerated speech-to-text using whisper.cpp.
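The compose file itself isn't reproduced here, but as a rough sketch of what such a setup tends to look like (the image name, port, volume path, and group id below are assumptions, not the actual file):

```yaml
services:
  whisper:
    # Hypothetical image name -- substitute whatever whisper.cpp build
    # with SYCL/Intel GPU support you are using.
    image: my-whispercpp-sycl:latest
    restart: unless-stopped
    devices:
      # Pass the Intel GPU through to the container.
      - /dev/dri:/dev/dri
    group_add:
      # Host 'render' group id (check yours with: getent group render)
      - "993"
    ports:
      - "10300:10300"  # Wyoming protocol port Home Assistant connects to
    volumes:
      - ./models:/models  # cache the large-v2 model between restarts
```

The key parts for Intel GPU acceleration are the `/dev/dri` device passthrough and membership in the host's `render` group, which is the standard way to expose an Intel GPU to a container.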
The initial request takes some time, but after that, on my A380, short requests in English like "Turn off kitchen lights" get processed in ~1 second using the large-v2 Whisper model.
speech-to-phrase can be more accurate (although that depends on audio quality) if you only use the default conversation agent, but since whisper transcribes arbitrary speech, it is useful when paired with LLMs, especially local ones in "Prefer handling commands locally" mode.
I imagine something like the budget Arc B580 should be able to run both whisper and a model like llama3.1 or qwen2.5 at the same time (using the ipex image) at a decent speed.
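A sketch of what sharing the GPU between both workloads could look like as a second service in the same compose file (the service name, image name, and group id here are hypothetical):

```yaml
services:
  llm:
    # Hypothetical ipex-llm-based image; substitute your own build.
    image: my-ipex-llm:latest
    restart: unless-stopped
    devices:
      - /dev/dri:/dev/dri  # both services can share the same Intel GPU
    group_add:
      - "993"              # host 'render' group id
    ports:
      - "11434:11434"      # API port for the conversation agent integration
```

Both containers get the same `/dev/dri` passthrough, so the GPU is time-shared between transcription and LLM inference; whether that stays fast enough under load is something to measure rather than assume.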