Merge pull request #9108 from Zhengjin-Wang/main
Add instruction for chat.py in bigdl-llm-cpu
commit 30e3c196f3

2 changed files with 36 additions and 1 deletion

```diff
@@ -22,7 +22,8 @@ RUN env DEBIAN_FRONTEND=noninteractive apt-get update && \
     pip install --pre --upgrade bigdl-llm[all] && \
     pip install --pre --upgrade bigdl-nano && \
 # Download chat.py script
-    wget -P /root https://raw.githubusercontent.com/intel-analytics/BigDL/main/python/llm/portable-executable/chat.py && \
+    pip install --upgrade colorama && \
+    wget -P /root https://raw.githubusercontent.com/intel-analytics/BigDL/main/python/llm/portable-zip/chat.py && \
     export PYTHONUNBUFFERED=1
 
 ENTRYPOINT ["/bin/bash"]
```

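With this change, the image downloads `chat.py` into `/root` and installs `colorama` for colored console output. If you want to sanity-check a built image, one option is (a minimal sketch, using the image tag from the run instructions below):

```bash
# The image's ENTRYPOINT is /bin/bash, so the -c argument is passed to bash
sudo docker run --rm intelanalytics/bigdl-llm-cpu:2.4.0-SNAPSHOT -c "ls -l /root/chat.py"
```
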
@@ -32,3 +32,37 @@ sudo docker run -itd \

After the container is booted, you can get into the container through `docker exec`.

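For example (a minimal sketch; `CONTAINER_NAME` is the name passed to `docker run` via `--name`):

```bash
# Attach an interactive shell to the running container
sudo docker exec -it CONTAINER_NAME bash
```
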
To run inference using `BigDL-LLM` on CPU, you can refer to this [documentation](https://github.com/intel-analytics/BigDL/tree/main/python/llm#cpu-int4).

### Use chat.py
`chat.py` can be used to initiate a conversation with a specified model. The script is located in the `/root` directory of the container.

You can download models on the host machine and bind the model directory into the container when you start it.

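For instance, you could populate the model directory on the host like this (a hedged sketch; the Hugging Face repo is only an illustrative choice, and `git-lfs` must be installed on the host):

```bash
# On the host: create the directory that will be bind-mounted into the container
mkdir -p /home/llm/models
cd /home/llm/models

# Download an example chat model from Hugging Face (illustrative; substitute
# any model supported by bigdl-llm)
git lfs install
git clone https://huggingface.co/THUDM/chatglm2-6b
```
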
Here is an example of starting the container with the model directory mounted:
```bash
export DOCKER_IMAGE=intelanalytics/bigdl-llm-cpu:2.4.0-SNAPSHOT
export MODEL_PATH=/home/llm/models

sudo docker run -itd \
        --net=host \
        --cpuset-cpus="0-47" \
        --cpuset-mems="0" \
        --memory="32G" \
        --name=CONTAINER_NAME \
        --shm-size="16g" \
        -v $MODEL_PATH:/llm/models/ \
        $DOCKER_IMAGE
```

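Once the container is up, you can verify that the bind mount is visible from inside it:

```bash
# The host's $MODEL_PATH should show up under /llm/models/ in the container
sudo docker exec -it CONTAINER_NAME ls /llm/models/
```
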
After entering the container through `docker exec`, you can run `chat.py` by:

```bash
cd /root
python chat.py --model-path YOUR_MODEL_PATH
```

In the example above, this would be:

```bash
cd /root
python chat.py --model-path /llm/models/MODEL_NAME
```