
## Build/Use BigDL-LLM CPU image

### Build Image

```bash
docker build \
  --build-arg http_proxy=.. \
  --build-arg https_proxy=.. \
  --build-arg no_proxy=.. \
  --rm --no-cache -t intelanalytics/bigdl-llm-cpu:2.4.0-SNAPSHOT .
```

### Use the image for CPU inference

An example could be:

```bash
#!/bin/bash
export DOCKER_IMAGE=intelanalytics/bigdl-llm-cpu:2.4.0-SNAPSHOT

sudo docker run -itd \
        --net=host \
        --cpuset-cpus="0-47" \
        --cpuset-mems="0" \
        --memory="32G" \
        --name=CONTAINER_NAME \
        --shm-size="16g" \
        $DOCKER_IMAGE
```

After the container is up, you can get a shell inside it with `docker exec`.
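For example, assuming the container was named `CONTAINER_NAME` as in the run command above:

```bash
sudo docker exec -it CONTAINER_NAME bash
```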

To run inference with BigDL-LLM on CPU, you can refer to this documentation.
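As a rough sketch of what such inference looks like inside the container, the snippet below uses BigDL-LLM's transformers-style API to load a model with 4-bit quantization; the model path is a placeholder assumption, and the exact API may differ between BigDL-LLM versions:

```python
# Hedged sketch, run inside the container: load a model through
# bigdl-llm's transformers-style wrapper with 4-bit quantization.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# Placeholder model path -- substitute a model you have downloaded.
model_path = "meta-llama/Llama-2-7b-chat-hf"

# load_in_4bit=True asks bigdl-llm to quantize weights for CPU inference.
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("What is AI?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```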