From 3913ba4577db7efa314059c146bb137861ec2751 Mon Sep 17 00:00:00 2001
From: Guancheng Fu <110874468+gc-fu@users.noreply.github.com>
Date: Thu, 21 Sep 2023 10:32:56 +0800
Subject: [PATCH] add README.md (#9004)

---
 docker/llm/inference/xpu/docker/README.md | 45 +++++++++++++++++++++++
 1 file changed, 45 insertions(+)
 create mode 100644 docker/llm/inference/xpu/docker/README.md

diff --git a/docker/llm/inference/xpu/docker/README.md b/docker/llm/inference/xpu/docker/README.md
new file mode 100644
index 00000000..b7a83e96
--- /dev/null
+++ b/docker/llm/inference/xpu/docker/README.md
@@ -0,0 +1,45 @@
+## Build/Use BigDL-LLM xpu image
+
+### Build Image
+```bash
+docker build \
+  --build-arg http_proxy=.. \
+  --build-arg https_proxy=.. \
+  --build-arg no_proxy=.. \
+  --rm --no-cache -t intelanalytics/bigdl-llm-xpu:2.4.0-SNAPSHOT .
+```
+
+### Use the image for xpu inference
+
+To map the `xpu` into the container, you need to specify `--device=/dev/dri` when booting the container.
+
+An example could be:
+```bash
+#!/bin/bash
+export DOCKER_IMAGE=intelanalytics/bigdl-llm-xpu:2.4.0-SNAPSHOT
+
+sudo docker run -itd \
+  --net=host \
+  --device=/dev/dri \
+  --memory="32G" \
+  --name=CONTAINER_NAME \
+  --shm-size="16g" \
+  $DOCKER_IMAGE
+```
+
+After the container is booted, you can get into it through `docker exec`.
+
+To verify that the device is successfully mapped into the container, run `sycl-ls` and check the result.
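+For example, assuming the container was started with `--name=CONTAINER_NAME` as in the `docker run` command above, you could attach a shell and run the check like this:
+```bash
+# Attach an interactive shell to the running container
+sudo docker exec -it CONTAINER_NAME bash
+# Then, inside the container, list the SYCL devices
+sycl-ls
+```
+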
+On a machine with an Arc A770, a sample output is:
+
+```bash
+root@arda-arc12:/# sycl-ls
+[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device 1.2 [2023.16.7.0.21_160000]
+[opencl:cpu:1] Intel(R) OpenCL, 13th Gen Intel(R) Core(TM) i9-13900K 3.0 [2023.16.7.0.21_160000]
+[opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics 3.0 [23.17.26241.33]
+[ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics 1.3 [1.3.26241]
+```
+
+To run inference with `BigDL-LLM` on xpu, you could refer to this [documentation](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/gpu).
\ No newline at end of file