add --entrypoint /bin/bash (#12957)

Co-authored-by: gc-fu <guancheng.fu@intel.com>
Shaojun Liu, 2025-03-10 10:10:27 +08:00 (committed by GitHub)
parent 2a8f624f4b
commit 6a2d87e40f
6 changed files with 12 additions and 3 deletions
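
All six files make the same pair of fixes: the `docker run` examples gain `--entrypoint /bin/bash`, so the container drops into a shell instead of auto-starting the bundled service, and three lines in the first file switch the image name from `ipex-llm-xpu` to `ipex-llm-serving-xpu`. A minimal sketch of the resulting pattern, assembled from the flags shown in the hunks below (the container name is a hypothetical placeholder):

```bash
export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-xpu:2.2.0-SNAPSHOT
export CONTAINER_NAME=my-ipex-llm   # hypothetical name; pick your own

# Run detached with the image's default entrypoint overridden;
# the container idles in bash rather than launching the service.
sudo docker run -itd \
    --net=host \
    --device=/dev/dri \
    --memory="32G" \
    --name=$CONTAINER_NAME \
    --shm-size="16g" \
    --entrypoint /bin/bash \
    $DOCKER_IMAGE

# Attach an interactive shell to the running container:
sudo docker exec -it $CONTAINER_NAME bash
```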

File 1 of 6:

@@ -26,7 +26,7 @@ To map the `XPU` into the container, you need to specify `--device=/dev/dri` whe
 ```bash
 #!/bin/bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:2.2.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-xpu:2.2.0-SNAPSHOT
 sudo docker run -itd \
 --net=host \
@@ -34,6 +34,7 @@ sudo docker run -itd \
 --memory="32G" \
 --name=CONTAINER_NAME \
 --shm-size="16g" \
+--entrypoint /bin/bash \
 $DOCKER_IMAGE
 ```
@@ -71,7 +72,7 @@ By default, the container is configured to automatically start the service when
 ```bash
 #!/bin/bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:2.2.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-xpu:2.2.0-SNAPSHOT
 sudo docker run -itd \
 --net=host \
@@ -110,7 +111,7 @@ If you prefer to manually start the service or need to troubleshoot, you can ove
 ```bash
 #!/bin/bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:2.2.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-xpu:2.2.0-SNAPSHOT
 sudo docker run -itd \
 --net=host \
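
Note that this file's three `export` hunks also correct the image name: the snippets previously pointed at `ipex-llm-xpu` even though the page describes the serving image. If you followed the old snippet, pull the corrected image before rerunning:

```bash
sudo docker pull intelanalytics/ipex-llm-serving-xpu:2.2.0-SNAPSHOT
```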

File 2 of 6:

@@ -32,6 +32,7 @@ Start ipex-llm-xpu Docker Container. Choose one of the following commands to sta
 --name=$CONTAINER_NAME \
 --shm-size="16g" \
 -v $MODEL_PATH:/llm/models \
+--entrypoint /bin/bash \
 $DOCKER_IMAGE
 ```
@@ -52,6 +53,7 @@ Start ipex-llm-xpu Docker Container. Choose one of the following commands to sta
 --shm-size="16g" \
 -v $MODEL_PATH:/llm/llm-models \
 -v /usr/lib/wsl:/usr/lib/wsl \
+--entrypoint /bin/bash \
 $DOCKER_IMAGE
 ```
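
The `--device=/dev/dri` flag and, on WSL2, the `-v /usr/lib/wsl:/usr/lib/wsl` mount are what expose the Intel GPU to the container; with the new bash entrypoint you land in a shell where you can check that the device is actually visible. A sketch, assuming the image ships Intel's oneAPI tooling (`sycl-ls` is part of oneAPI and is not shown in this diff):

```bash
# Inside the container shell: list SYCL backends/devices.
# An Intel GPU should appear as a level_zero or opencl GPU entry.
sycl-ls
```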

File 3 of 6:

@@ -66,6 +66,7 @@ Start ipex-llm-serving-xpu Docker Container. Choose one of the following command
 --name=$CONTAINER_NAME \
 --shm-size="16g" \
 -v $MODEL_PATH:/llm/models \
+--entrypoint /bin/bash \
 $DOCKER_IMAGE
 ```
@@ -86,6 +87,7 @@ Start ipex-llm-serving-xpu Docker Container. Choose one of the following command
 --shm-size="16g" \
 -v $MODEL_PATH:/llm/llm-models \
 -v /usr/lib/wsl:/usr/lib/wsl \
+--entrypoint /bin/bash \
 $DOCKER_IMAGE
 ```

File 4 of 6:

@@ -29,6 +29,7 @@ sudo docker run -itd \
 --memory="32G" \
 --name=$CONTAINER_NAME \
 --shm-size="16g" \
+--entrypoint /bin/bash \
 $DOCKER_IMAGE
 ```

File 5 of 6:

@@ -32,6 +32,7 @@ sudo docker run -itd \
 --memory="32G" \
 --name=$CONTAINER_NAME \
 --shm-size="16g" \
+--entrypoint /bin/bash \
 $DOCKER_IMAGE
 ```
@@ -855,6 +856,7 @@ We can set up model serving using `IPEX-LLM` as backend using FastChat, the foll
 -e http_proxy=... \
 -e https_proxy=... \
 -e no_proxy="127.0.0.1,localhost" \
+--entrypoint /bin/bash \
 $DOCKER_IMAGE
 ```
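
Because `/bin/bash` now replaces the image's default entrypoint, the FastChat service referenced in this hunk has to be brought up manually inside the container. A rough sketch using stock FastChat module commands (the worker choice and model path are assumptions, not taken from this diff; IPEX-LLM's FastChat quickstart documents its own worker):

```bash
# Inside the container (e.g. sudo docker exec -it $CONTAINER_NAME bash):
python3 -m fastchat.serve.controller &
python3 -m fastchat.serve.model_worker \
    --model-path /llm/models/Llama-2-7b-chat-hf &   # hypothetical model path
python3 -m fastchat.serve.openai_api_server --host 0.0.0.0 --port 8000 &
```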

File 6 of 6:

@@ -64,6 +64,7 @@ python eval.py \
 -e http_proxy=$HTTP_PROXY \
 -e https_proxy=$HTTPS_PROXY \
 --shm-size="16g" \
+--entrypoint /bin/bash \
 $DOCKER_IMAGE
 ```
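
For any of these containers, a quick way to confirm the override took effect is to read the entrypoint back out of the container config with the standard docker CLI:

```bash
# Prints the container's entrypoint binary and its arguments;
# expect "/bin/bash" after running with --entrypoint /bin/bash.
sudo docker inspect --format '{{.Path}} {{.Args}}' $CONTAINER_NAME
```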