Run Deepseek R1 Locally with Docker and Open WebUI

February 2, 2025

Deepseek R1 is a powerful open-weight AI model that you can run on your own machine instead of relying on cloud services. That means you keep full control over your data while getting a ChatGPT-like experience.

What You Need

  • Docker installed on your system
  • WSL (Windows Subsystem for Linux) if you're on Windows
  • Ollama for downloading and running models

Step 1: Install Ollama

First, install Ollama by following their official installation guide.
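On Linux (and inside WSL), the install is a one-line script. This is the command from Ollama's download page at the time of writing; check their site if it has changed:

```shell
# Official one-line installer for Linux / WSL (from ollama.com/download)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the install worked
ollama --version
```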

Step 2: Run the Deepseek R1 Model

Once Ollama is installed, you can download and run Deepseek R1 with just one command:

ollama run deepseek-r1:1.5b

You can replace 1.5b with any other available size, such as 7b, 8b, or 14b; larger models give better answers but need more RAM. Check out the Deepseek documentation for the latest versions.
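For example, to pull a larger variant and see what you have downloaded so far (the 7b tag is an example — verify it against the Ollama model library):

```shell
# Pull and chat with a larger variant (the tag must exist in the Ollama library)
ollama run deepseek-r1:7b

# List locally downloaded models and their sizes
ollama list
```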

Step 3: Add a Beautiful UI with Open WebUI

To make things look nicer and interact with the model like a real chatbot, you can run Open WebUI using Docker:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
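The same command with one flag per line, so it's clear what each part does:

```shell
# What each flag does (same command as above):
#   -d                  run detached in the background
#   -p 3000:8080        map the UI (container port 8080) to localhost:3000
#   --add-host ...      let the container reach Ollama running on the host
#   -v open-webui:...   persist chats and settings in a named Docker volume
#   --restart always    restart the container with the Docker daemon
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```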

Now, just open your browser and go to:

http://localhost:3000
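If the page doesn't load right away, the container may still be starting. You can check on it with standard Docker commands:

```shell
# Is the container running?
docker ps --filter name=open-webui

# Follow the startup logs (Ctrl+C to stop)
docker logs -f open-webui
```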

That’s It!

With just a couple of commands, you now have Deepseek R1 running locally with a clean web interface. No complicated setup—just fast, private AI on your own machine.

I'm currently running it on my Raspberry Pi 5 with Raspberry Pi OS Lite. It maxes out the CPU, so I'm thinking of adding a GPU to try it out. Performance of the R1 1.5b model on the Pi depends on the prompt: sometimes it gets slow, and other times it's about as fast as the cloud version.
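To put a number on that, Ollama can print generation stats after each response; running with the --verbose flag reports load time and eval rate in tokens per second:

```shell
# Prints timing stats (load time, eval rate in tokens/s) after the reply
ollama run deepseek-r1:1.5b --verbose "Explain what a mutex is in one sentence."
```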

🔗 Useful Links: