modify README

commit a1aefdb8f4 (parent 3814abf95a)
1 changed file with 25 additions and 1 deletion
@@ -37,8 +37,32 @@ To run inference with `BigDL-LLM` on CPU, you could refer to this [documenta
`chat.py` can be used to start a conversation with a specified model. The file is located in the `/root` directory.

To run `chat.py`, first download the models on the host machine and mount the model directory into the container when starting it.

Here is an example:
```bash
export DOCKER_IMAGE=intelanalytics/bigdl-llm-cpu:2.4.0-SNAPSHOT
export MODEL_PATH=/home/llm/models

sudo docker run -itd \
    --net=host \
    --cpuset-cpus="0-47" \
    --cpuset-mems="0" \
    --memory="32G" \
    --name=CONTAINER_NAME \
    --shm-size="16g" \
    -v $MODEL_PATH:/llm/models/ \
    $DOCKER_IMAGE
```
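Once the container is up, you need a shell inside it before you can run anything. Assuming you kept the placeholder name `CONTAINER_NAME` from the `docker run` example above, one way to enter it is:

```bash
# Attach an interactive bash shell to the running container;
# CONTAINER_NAME is the placeholder name used in the docker run example above
sudo docker exec -it CONTAINER_NAME bash
```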
After entering the container through `docker exec`, you can run `chat.py` by:

```bash
cd /root
python chat.py --model-path YOUR_MODEL_PATH
```
In the example above, this would be:

```bash
cd /root
python chat.py --model-path /llm/models/MODEL_NAME
```