
Build/Use BigDL-LLM CPU image

Build Image

docker build \
  --build-arg http_proxy=.. \
  --build-arg https_proxy=.. \
  --build-arg no_proxy=.. \
  --rm --no-cache -t intelanalytics/bigdl-llm-cpu:2.5.0-SNAPSHOT .
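
After the build completes, you can verify that the image is available locally:

sudo docker images | grep bigdl-llm-cpu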

Use the image for CPU inference

An example could be:

#!/bin/bash
export DOCKER_IMAGE=intelanalytics/bigdl-llm-cpu:2.5.0-SNAPSHOT

sudo docker run -itd \
        --net=host \
        --cpuset-cpus="0-47" \
        --cpuset-mems="0" \
        --memory="32G" \
        --name=CONTAINER_NAME \
        --shm-size="16g" \
        $DOCKER_IMAGE

After the container is booted, you can get into it through docker exec.
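
For example (CONTAINER_NAME is the name you gave the container above):

sudo docker exec -it CONTAINER_NAME /bin/bash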

To run inference with BigDL-LLM on CPU, you can refer to the BigDL-LLM CPU inference documentation.
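
As a quick smoke test inside the container, a minimal generation script can be run with the bigdl.llm.transformers API. This is only a sketch: it assumes a Hugging Face Transformers checkpoint has been mounted under /llm/models/MODEL_NAME (a placeholder), and loads it with BigDL-LLM's INT4 optimizations:

python - <<'EOF'
# BigDL-LLM drop-in replacement for the transformers AutoModel classes
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "/llm/models/MODEL_NAME"  # placeholder: path to your mounted model
# load_in_4bit=True applies BigDL-LLM's INT4 optimizations for CPU inference
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
EOF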

Use chat.py

chat.py can be used to start a conversation with a specified model. It is located in the /llm directory inside the container.

You can download models on the host machine and bind the model directory into the container when you start it.
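
For instance, you might first fetch a model on the host with git and Git LFS (a sketch; MODEL_ORG and MODEL_NAME are placeholders for a Hugging Face repository, and git-lfs must be installed to pull the weight files):

export MODEL_PATH=/home/llm/models
cd $MODEL_PATH
git lfs install
git clone https://huggingface.co/MODEL_ORG/MODEL_NAME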

Here is an example of starting the container with the model directory mounted:

export DOCKER_IMAGE=intelanalytics/bigdl-llm-cpu:2.5.0-SNAPSHOT
export MODEL_PATH=/home/llm/models

sudo docker run -itd \
        --net=host \
        --cpuset-cpus="0-47" \
        --cpuset-mems="0" \
        --memory="32G" \
        --name=CONTAINER_NAME \
        --shm-size="16g" \
        -v $MODEL_PATH:/llm/models/ \
        $DOCKER_IMAGE

After entering the container through docker exec, you can run chat.py with:

cd /llm
python chat.py --model-path YOUR_MODEL_PATH

With the mount from the example above, that would be:

cd /llm
python chat.py --model-path /llm/models/MODEL_NAME