add README.md (#9004)
This commit is contained in:
parent b3cad7de57
commit 3913ba4577
1 changed file with 45 additions and 0 deletions
docker/llm/inference/xpu/docker/README.md (new file)
@@ -0,0 +1,45 @@
## Build/Use BigDL-LLM xpu image

### Build Image

```bash
docker build \
  --build-arg http_proxy=.. \
  --build-arg https_proxy=.. \
  --build-arg no_proxy=.. \
  --rm --no-cache -t intelanalytics/bigdl-llm-xpu:2.4.0-SNAPSHOT .
```
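
For example, with a hypothetical proxy at `http://proxy.example.com:3128` (substitute your own proxy settings, or omit the `--build-arg` flags entirely if you are not behind a proxy):

```bash
# proxy values below are placeholders for illustration only
docker build \
  --build-arg http_proxy=http://proxy.example.com:3128 \
  --build-arg https_proxy=http://proxy.example.com:3128 \
  --build-arg no_proxy=localhost,127.0.0.1 \
  --rm --no-cache -t intelanalytics/bigdl-llm-xpu:2.4.0-SNAPSHOT .
```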

### Use the image for xpu inference

To map the `xpu` into the container, you need to specify `--device=/dev/dri` when booting the container.

An example could be:

```bash
#!/bin/bash
export DOCKER_IMAGE=intelanalytics/bigdl-llm-xpu:2.4.0-SNAPSHOT

sudo docker run -itd \
        --net=host \
        --device=/dev/dri \
        --memory="32G" \
        --name=CONTAINER_NAME \
        --shm-size="16g" \
        $DOCKER_IMAGE
```
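
Before attaching to the container, you can confirm it is up with the standard `docker ps` command:

```bash
# list running containers whose name matches the one given above
sudo docker ps --filter "name=CONTAINER_NAME"
```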

After the container is booted, you could get into the container through `docker exec`.
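
For example, to open an interactive shell in the container started above:

```bash
# attach an interactive bash shell to the running container
sudo docker exec -it CONTAINER_NAME bash
```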

To verify the device is successfully mapped into the container, run `sycl-ls` to check the result. On a machine with an Arc A770, the sample output is:

```bash
root@arda-arc12:/# sycl-ls
[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device 1.2 [2023.16.7.0.21_160000]
[opencl:cpu:1] Intel(R) OpenCL, 13th Gen Intel(R) Core(TM) i9-13900K 3.0 [2023.16.7.0.21_160000]
[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics 3.0 [23.17.26241.33]
[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics 1.3 [1.3.26241]
```

To run inference with `BigDL-LLM` on xpu, you could refer to this [documentation](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu).
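
As a rough sketch, one way to get those examples inside the container is to clone the repository and follow the README of the example you want to run (the example layout may change over time, so treat the path below as illustrative):

```bash
# inside the container: fetch the linked GPU examples (illustrative path)
git clone https://github.com/intel-analytics/BigDL.git
cd BigDL/python/llm/example/gpu
ls   # browse the available examples and follow each example's README
```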