This is an automated archive made by the Lemmit Bot.
The original was posted on /r/selfhosted by /u/ShinyAnkleBalls on 2025-01-27 22:08:37+00:00.
Hi there, I keep seeing more and more posts about running Deepseek R1 locally. Some claim you can do it using Ollama with a few GB of RAM.
You can’t run THE Deepseek R1 with Ollama. If you install Ollama and select Deepseek R1, what you are getting are the much, much smaller and much, much less performant distilled models. They are effectively fine-tunes of existing models (Qwen2.5, Llama, etc.) trained on data generated by Deepseek R1. They are great, but they are not THE R1 OpenAI is scared of.
I don’t know why Ollama decided to call these models Deepseek R1, but it’s problematic. Running the actual Deepseek R1 in q4 requires more than 400GB of VRAM or RAM, depending on how long you are willing to sit there waiting for an answer…
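If you want to see roughly where that ~400GB figure comes from, here’s a quick back-of-envelope sketch. It only counts the weights of the full ~671B-parameter model at 4 bits per parameter; the 20% overhead factor for KV cache and runtime buffers is just an assumption to keep the math simple, not an exact number:

```python
# Rough estimate of memory needed to load the full DeepSeek R1 at q4.
# 671B parameters is the full model's total size; the 20% overhead for
# KV cache / activations / runtime buffers is an assumption for illustration.

PARAMS = 671e9        # total parameters in the full DeepSeek R1
BITS_PER_PARAM = 4    # q4 quantization
OVERHEAD = 1.2        # assumed ~20% extra for KV cache and runtime buffers

weights_gb = PARAMS * BITS_PER_PARAM / 8 / 1e9
total_gb = weights_gb * OVERHEAD

print(f"Weights alone: ~{weights_gb:.0f} GB")  # ~336 GB
print(f"With overhead: ~{total_gb:.0f} GB")    # ~400+ GB
```

So even before you think about context length or batch size, you’re already past 400GB, which is why the "runs on a few GB of RAM" claims only make sense for the distills.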