## Build/Use BigDL-LLM CPU Image
### Build Image
```bash
docker build \
  --build-arg http_proxy=.. \
  --build-arg https_proxy=.. \
  --build-arg no_proxy=.. \
  --rm --no-cache -t intelanalytics/bigdl-llm-cpu:2.5.0-SNAPSHOT .
```
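To confirm the build succeeded, you can list the image locally (a standard `docker` check, nothing BigDL-specific):
```bash
# verify the freshly built image is present
docker images | grep bigdl-llm-cpu
```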
### Use the Image for CPU Inference
An example of starting a container could be:
```bash
#!/bin/bash
export DOCKER_IMAGE=intelanalytics/bigdl-llm-cpu:2.5.0-SNAPSHOT

sudo docker run -itd \
  --net=host \
  --cpuset-cpus="0-47" \
  --cpuset-mems="0" \
  --memory="32G" \
  --name=CONTAINER_NAME \
  --shm-size="16g" \
  $DOCKER_IMAGE
```
After the container is booted, you can get into it through `docker exec`, for example (assuming the container name used above):
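```bash
# open an interactive shell in the running container
sudo docker exec -it CONTAINER_NAME bash
```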
To run inference with `BigDL-LLM` on CPU, you can refer to this [documentation](https://github.com/intel-analytics/BigDL/tree/main/python/llm#cpu-int4).
### Use chat.py
`chat.py` can be used to initiate a conversation with a specified model. The file is located in the `/llm` directory.
You can download models on the host machine and bind-mount the model directory into the container when starting it.
Here is an example:
```bash
export DOCKER_IMAGE=intelanalytics/bigdl-llm-cpu:2.5.0-SNAPSHOT
export MODEL_PATH=/home/llm/models

sudo docker run -itd \
  --net=host \
  --cpuset-cpus="0-47" \
  --cpuset-mems="0" \
  --memory="32G" \
  --name=CONTAINER_NAME \
  --shm-size="16g" \
  -v $MODEL_PATH:/llm/models/ \
  $DOCKER_IMAGE
```
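For instance, before starting the container you could place a model under `$MODEL_PATH` on the host. A minimal sketch using `git` (the Hugging Face repository below is only a placeholder; substitute the model you actually want):
```bash
# download a model into the host directory that will be mounted into the container
cd /home/llm/models
git lfs install
git clone https://huggingface.co/MODEL_ORG/MODEL_NAME  # placeholder repo
```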
After entering the container through `docker exec`, you can run `chat.py` with:
```bash
cd /llm
python chat.py --model-path YOUR_MODEL_PATH
```
With the bind mount from the example above, this would be:
```bash
cd /llm
python chat.py --model-path /llm/models/MODEL_NAME
```
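The same can also be done in a single command from the host, without keeping an interactive shell open; a sketch assuming the container name and model path from the examples above:
```bash
# run chat.py non-interactively; CONTAINER_NAME and MODEL_NAME are the
# placeholders used in the examples above
sudo docker exec -it CONTAINER_NAME bash -c "cd /llm && python chat.py --model-path /llm/models/MODEL_NAME"
```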