云野阁

Idle clouds and wild cranes, roaming free in all directions

Private Deployment of DeepSeek

Preface#

In a Linux environment, use Docker to deploy ollama + deepseek-r1:1.5b + open-webui, giving you a privately hosted DeepSeek instance.

Deployment Process#

Install Docker#

Use the script to install Docker and Docker Compose.

 bash <(curl -sSL https://linuxmirrors.cn/docker.sh)

Method 1: Install using Docker command#

# Run the ollama container (models are persisted to /data/ollama on the host)
docker run -d -v /data/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# Pull and run the deepseek-r1:1.5b model inside the ollama container
docker exec -it ollama ollama run deepseek-r1:1.5b
# Run open-webui (the web UI is published on host port 3000)
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v /data/openwebui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:v0.6.22
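After the containers start, you can confirm that ollama has finished pulling the model by querying its REST API: the /api/tags endpoint lists locally available models. A minimal sketch in Python, assuming the host port mapping 11434:11434 from the command above:

```python
import json
from urllib.request import urlopen

def model_present(tags: dict, name: str) -> bool:
    """Return True if any model listed by ollama's /api/tags starts with `name`."""
    return any(m["name"].startswith(name) for m in tags.get("models", []))

# On the deployment host (port 11434 is published by the docker run above):
#   with urlopen("http://localhost:11434/api/tags") as resp:
#       print(model_present(json.load(resp), "deepseek-r1:1.5b"))
```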

If Open WebUI downloads its embedding models from Hugging Face slowly, you can point it at a mirror by setting the HF_ENDPOINT environment variable for the container (for example, add -e HF_ENDPOINT=https://hf-mirror.com to the docker run command above):

 export HF_ENDPOINT=https://hf-mirror.com

If the open-webui image itself pulls slowly, you can pull it from a mirror registry and retag it. The specific commands are as follows:

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/open-webui/open-webui:v0.6.22
docker tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/ghcr.io/open-webui/open-webui:v0.6.22  ghcr.io/open-webui/open-webui:v0.6.22

Method 2: Install using Compose file#

vi ai.yml
#################################################
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - "/data/ollama:/root/.ollama"
    restart: always
    entrypoint: ["sh", "-c", "/bin/ollama serve & sleep 5; ollama pull deepseek-r1:1.5b; wait"]  # start the server first, then pull the model
    networks:
      ai:
        ipv4_address: 172.20.110.11
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.6.22
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - "/data/openwebui:/app/backend/data"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: always
    networks:
      ai:
        ipv4_address: 172.20.110.12

networks:
  ai:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.110.0/24

########################################################

# Execute deployment
docker compose -f ai.yml up -d

After deployment, if the local model deepseek-r1:1.5b does not appear in Open WebUI, run docker logs -f ollama to follow ollama's download progress for the deepseek-r1:1.5b model and wait for it to complete.
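Once the download finishes, you can test the model directly against ollama's /api/generate endpoint, bypassing Open WebUI. A minimal sketch; the payload fields (model, prompt, stream) follow ollama's REST API, with stream set to False so a single JSON object is returned:

```python
import json
from urllib.request import Request, urlopen

def build_generate_request(model: str, prompt: str) -> Request:
    """Build a non-streaming request for ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return Request(
        "http://localhost:11434/api/generate",  # port published by the ollama container
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("deepseek-r1:1.5b", "Hello, who are you?")
# On the deployment host:
#   with urlopen(req) as resp:
#       print(json.load(resp)["response"])
```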

Add Knowledge Base#

(1) In Open WebUI, select Admin Panel - Settings - Documents, enable the Bypass Embedding and Retrieval option and save.

(2) In Workspace - Knowledge Base, create a knowledge base and upload files to the knowledge base.


Call Local Knowledge Base#

Type # in the chat input box to select the corresponding knowledge base; the model will then reference it and answer with the relevant content.
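Open WebUI also exposes an OpenAI-compatible HTTP API, so the same knowledge-base call can be scripted. The sketch below assumes an API key created in Open WebUI (Settings - Account) and a knowledge-base collection id copied from the UI; sk-example and my-collection-id are placeholders, and the files field is Open WebUI's documented way of attaching a collection for retrieval:

```python
import json
from urllib.request import Request, urlopen

def build_kb_chat_request(api_key: str, collection_id: str, question: str) -> Request:
    """Build an Open WebUI chat request that attaches a knowledge-base collection."""
    body = json.dumps({
        "model": "deepseek-r1:1.5b",
        "messages": [{"role": "user", "content": question}],
        # Attach the knowledge base so the model answers with retrieved context.
        "files": [{"type": "collection", "id": collection_id}],
    }).encode()
    return Request(
        "http://localhost:3000/api/chat/completions",  # port 3000 mapped above
        data=body,
        headers={"Authorization": "Bearer " + api_key,
                 "Content-Type": "application/json"},
    )

req = build_kb_chat_request("sk-example", "my-collection-id", "Summarize the docs.")
# On the deployment host:
#   with urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```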

