update docker image tag to 2.2.0-SNAPSHOT (#11904)

Authored by Shaojun Liu on 2024-08-23 13:57:41 +08:00; committed by GitHub
parent 650e6e6ce4
commit 4cf640c548
15 changed files with 54 additions and 54 deletions
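
The change itself is mechanical: one tag string rewritten in 54 places across a workflow file, READMEs, Dockerfiles, and Kubernetes manifests. Below is a sketch of how such a sweep can be made and verified with `git grep` and `sed`; it is illustrative only, not the tooling actually used for this commit.

```bash
# Illustrative sweep (not the commit's actual tooling): rewrite the tag
# in every tracked file, then confirm no stale references remain.
git grep -l '2.1.0-SNAPSHOT' | xargs sed -i 's/2\.1\.0-SNAPSHOT/2.2.0-SNAPSHOT/g'
git grep -n '2.1.0-SNAPSHOT' && echo 'stale tags remain' || echo 'clean'
```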

=== File 1 of 15 (filename not shown) ===

@@ -24,9 +24,9 @@ on:
 #          - ipex-llm-finetune-qlora-cpu-k8s
 #          - ipex-llm-finetune-xpu
 #      tag:
-#        description: 'docker image tag (e.g. 2.1.0-SNAPSHOT)'
+#        description: 'docker image tag (e.g. 2.2.0-SNAPSHOT)'
 #        required: true
-#        default: '2.1.0-SNAPSHOT'
+#        default: '2.2.0-SNAPSHOT'
 #        type: string
   workflow_call:
     inputs:
@@ -40,9 +40,9 @@ on:
         default: 'all'
         type: string
       tag:
-        description: 'docker image tag (e.g. 2.1.0-SNAPSHOT)'
+        description: 'docker image tag (e.g. 2.2.0-SNAPSHOT)'
         required: true
-        default: '2.1.0-SNAPSHOT'
+        default: '2.2.0-SNAPSHOT'
         type: string
       public:
         description: "if the docker image push to public docker hub"

=== File 2 of 15 (filename not shown) ===

@@ -13,20 +13,20 @@ You can run IPEX-LLM containers (via docker or k8s) for inference, serving and f
 #### Pull a IPEX-LLM Docker Image
 To pull IPEX-LLM Docker images from [Docker Hub](https://hub.docker.com/u/intelanalytics), use the `docker pull` command. For instance, to pull the CPU inference image:
 ```bash
-docker pull intelanalytics/ipex-llm-cpu:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-cpu:2.2.0-SNAPSHOT
 ```
 Available images in hub are:
 | Image Name | Description |
 | --- | --- |
-| intelanalytics/ipex-llm-cpu:2.1.0-SNAPSHOT | CPU Inference |
-| intelanalytics/ipex-llm-xpu:2.1.0-SNAPSHOT | GPU Inference |
-| intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT | CPU Serving|
-| intelanalytics/ipex-llm-serving-xpu:2.1.0-SNAPSHOT | GPU Serving|
-| intelanalytics/ipex-llm-finetune-qlora-cpu-standalone:2.1.0-SNAPSHOT | CPU Finetuning via Docker|
-| intelanalytics/ipex-llm-finetune-qlora-cpu-k8s:2.1.0-SNAPSHOT|CPU Finetuning via Kubernetes|
-| intelanalytics/ipex-llm-finetune-qlora-xpu:2.1.0-SNAPSHOT| GPU Finetuning|
+| intelanalytics/ipex-llm-cpu:2.2.0-SNAPSHOT | CPU Inference |
+| intelanalytics/ipex-llm-xpu:2.2.0-SNAPSHOT | GPU Inference |
+| intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT | CPU Serving|
+| intelanalytics/ipex-llm-serving-xpu:2.2.0-SNAPSHOT | GPU Serving|
+| intelanalytics/ipex-llm-finetune-qlora-cpu-standalone:2.2.0-SNAPSHOT | CPU Finetuning via Docker|
+| intelanalytics/ipex-llm-finetune-qlora-cpu-k8s:2.2.0-SNAPSHOT|CPU Finetuning via Kubernetes|
+| intelanalytics/ipex-llm-finetune-qlora-xpu:2.2.0-SNAPSHOT| GPU Finetuning|
 #### Run a Container
 Use `docker run` command to run an IPEX-LLM docker container. For detailed instructions, refer to the [IPEX-LLM Docker Container Guides](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/DockerGuides/index.html).
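
Pulling everything in the table at the new tag is a short loop; a hedged sketch, assuming all seven `2.2.0-SNAPSHOT` tags are already published on Docker Hub:

```bash
# Pull each image from the table above at the new tag; assumes the
# 2.2.0-SNAPSHOT tags have already been pushed to Docker Hub.
for img in ipex-llm-cpu ipex-llm-xpu ipex-llm-serving-cpu ipex-llm-serving-xpu \
           ipex-llm-finetune-qlora-cpu-standalone ipex-llm-finetune-qlora-cpu-k8s \
           ipex-llm-finetune-qlora-xpu; do
  docker pull "intelanalytics/${img}:2.2.0-SNAPSHOT"
done
```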

=== File 3 of 15 (filename not shown) ===

@@ -30,14 +30,14 @@ This guide provides step-by-step instructions for installing and using IPEX-LLM
 Run the following command to pull image:
 ```bash
-docker pull intelanalytics/ipex-llm-cpu:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-cpu:2.2.0-SNAPSHOT
 ```
 ### 2. Start bigdl-llm-cpu Docker Container
 ```bash
 #/bin/bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-cpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-cpu:2.2.0-SNAPSHOT
 export CONTAINER_NAME=my_container
 export MODEL_PATH=/llm/models[change to your model path]
@@ -156,7 +156,7 @@ Additionally, for examples related to Inference with Speculative Decoding, you c
 Run the following command to pull image from dockerhub:
 ```bash
-docker pull intelanalytics/ipex-llm-xpu:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-xpu:2.2.0-SNAPSHOT
 ```
 ### 2. Start Chat Inference
@@ -167,7 +167,7 @@ To map the xpu into the container, you need to specify --device=/dev/dri when bo
 ```bash
 #/bin/bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:2.2.0-SNAPSHOT
 export CONTAINER_NAME=my_container
 export MODEL_PATH=/llm/models[change to your model path]
@@ -189,7 +189,7 @@ Execute a quick performance benchmark by starting the ipex-llm-xpu container, sp
 To map the XPU into the container, specify `--device=/dev/dri` when booting the container.
 ```bash
 #/bin/bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:2.2.0-SNAPSHOT
 export CONTAINER_NAME=my_container
 export MODEL_PATH=/llm/models [change to your model path]
@@ -226,7 +226,7 @@ IPEX-LLM is integrated into FastChat so that user can use IPEX-LLM as a serving
 Run the following command:
 ```bash
-docker pull intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT
 ```
 ### 2. Start ipex-llm-serving-cpu Docker Container
@@ -234,7 +234,7 @@ docker pull intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT
 Please be noted that the CPU config is specified for Xeon CPUs, change it accordingly if you are not using a Xeon CPU.
 ```bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT
 export CONTAINER_NAME=my_container
 export MODEL_PATH=/llm/models[change to your model path]
@@ -349,7 +349,7 @@ IPEX-LLM is integrated into FastChat so that user can use IPEX-LLM as a serving
 Run the following command:
 ```bash
-docker pull intelanalytics/ipex-llm-serving-xpu:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-serving-xpu:2.2.0-SNAPSHOT
 ```
 ### 2. Start ipex-llm-serving-xpu Docker Container
@@ -357,7 +357,7 @@ docker pull intelanalytics/ipex-llm-serving-xpu:2.1.0-SNAPSHOT
 To map the `xpu` into the container, you need to specify `--device=/dev/dri` when booting the container.
 ```bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-xpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-xpu:2.2.0-SNAPSHOT
 export CONTAINER_NAME=my_container
 export MODEL_PATH=/llm/models[change to your model path]
@@ -473,10 +473,10 @@ You can download directly from Dockerhub like:
 ```bash
 # For standalone
-docker pull intelanalytics/ipex-llm-finetune-qlora-cpu-standalone:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-finetune-qlora-cpu-standalone:2.2.0-SNAPSHOT
 # For k8s
-docker pull intelanalytics/ipex-llm-finetune-qlora-cpu-k8s:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-finetune-qlora-cpu-k8s:2.2.0-SNAPSHOT
 ```
 Or build the image from source:
@@ -489,7 +489,7 @@ export HTTPS_PROXY=your_https_proxy
 docker build \
   --build-arg http_proxy=${HTTP_PROXY} \
   --build-arg https_proxy=${HTTPS_PROXY} \
-  -t intelanalytics/ipex-llm-finetune-qlora-cpu-standalone:2.1.0-SNAPSHOT \
+  -t intelanalytics/ipex-llm-finetune-qlora-cpu-standalone:2.2.0-SNAPSHOT \
   -f ./Dockerfile .
 # For k8s
@@ -499,7 +499,7 @@ export HTTPS_PROXY=your_https_proxy
 docker build \
   --build-arg http_proxy=${HTTP_PROXY} \
   --build-arg https_proxy=${HTTPS_PROXY} \
-  -t intelanalytics/ipex-llm-finetune-qlora-cpu-k8s:2.1.0-SNAPSHOT \
+  -t intelanalytics/ipex-llm-finetune-qlora-cpu-k8s:2.2.0-SNAPSHOT \
   -f ./Dockerfile.k8s .
 ```
@@ -520,7 +520,7 @@ docker run -itd \
   -e https_proxy=${HTTPS_PROXY} \
   -v $BASE_MODE_PATH:/ipex_llm/model \
   -v $DATA_PATH:/ipex_llm/data/alpaca-cleaned \
-  intelanalytics/ipex-llm-finetune-qlora-cpu-standalone:2.1.0-SNAPSHOT
+  intelanalytics/ipex-llm-finetune-qlora-cpu-standalone:2.2.0-SNAPSHOT
 ```
 The download and mount of base model and data to a docker container demonstrates a standard fine-tuning process. You can skip this step for a quick start, and in this way, the fine-tuning codes will automatically download the needed files:
@@ -534,7 +534,7 @@ docker run -itd \
   --name=ipex-llm-fintune-qlora-cpu \
   -e http_proxy=${HTTP_PROXY} \
   -e https_proxy=${HTTPS_PROXY} \
-  intelanalytics/ipex-llm-finetune-qlora-cpu-standalone:2.1.0-SNAPSHOT
+  intelanalytics/ipex-llm-finetune-qlora-cpu-standalone:2.2.0-SNAPSHOT
 ```
 However, we do recommend you to handle them manually, because the automatical download can be blocked by Internet access and Huggingface authentication etc. according to different environment, and the manual method allows you to fine-tune in a custom way (with different base model and dataset).
@@ -593,7 +593,7 @@ The following shows how to fine-tune LLM with Quantization (QLoRA built on IPEX-
 Run the following command:
 ```bash
-docker pull intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-finetune-xpu:2.2.0-SNAPSHOT
 ```
 ### 2. Prepare Base Model, Data and Start Docker Container
@@ -606,7 +606,7 @@ export DATA_PATH=your_downloaded_data_path
 export HTTP_PROXY=your_http_proxy
 export HTTPS_PROXY=your_https_proxy
 export CONTAINER_NAME=my_container
-export DOCKER_IMAGE=intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-finetune-xpu:2.2.0-SNAPSHOT
 docker run -itd \
   --net=host \
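
After re-pulling, it is worth checking what actually landed locally; `docker images` accepts glob patterns in its `reference` filter, so the old and new tags can be listed side by side:

```bash
# Anything still listed under the old tag can be removed with docker rmi.
docker images --filter=reference='intelanalytics/ipex-llm-*:2.1.0-SNAPSHOT'
docker images --filter=reference='intelanalytics/ipex-llm-*:2.2.0-SNAPSHOT'
```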

=== File 4 of 15 (filename not shown) ===

@@ -5,7 +5,7 @@
 You can download directly from Dockerhub like:
 ```bash
-docker pull intelanalytics/ipex-llm-finetune-lora-cpu:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-finetune-lora-cpu:2.2.0-SNAPSHOT
 ```
 Or build the image from source:
@@ -17,7 +17,7 @@ export HTTPS_PROXY=your_https_proxy
 docker build \
   --build-arg http_proxy=${HTTP_PROXY} \
   --build-arg https_proxy=${HTTPS_PROXY} \
-  -t intelanalytics/ipex-llm-finetune-lora-cpu:2.1.0-SNAPSHOT \
+  -t intelanalytics/ipex-llm-finetune-lora-cpu:2.2.0-SNAPSHOT \
   -f ./Dockerfile .
 ```
@@ -33,7 +33,7 @@ docker run -itd \
   -e WORKER_COUNT_DOCKER=your_worker_count \
   -v your_downloaded_base_model_path:/ipex_llm/model \
   -v your_downloaded_data_path:/ipex_llm/data/alpaca_data_cleaned_archive.json \
-  intelanalytics/ipex-llm-finetune-lora-cpu:2.1.0-SNAPSHOT \
+  intelanalytics/ipex-llm-finetune-lora-cpu:2.2.0-SNAPSHOT \
   bash
 ```

=== File 5 of 15 (filename not shown) ===

@@ -1,4 +1,4 @@
-imageName: intelanalytics/ipex-llm-finetune-lora-cpu:2.1.0-SNAPSHOT
+imageName: intelanalytics/ipex-llm-finetune-lora-cpu:2.2.0-SNAPSHOT
 trainerNum: 8
 microBatchSize: 8
 nfsServerIp: your_nfs_server_ip

=== File 6 of 15 (filename not shown) ===

@@ -1,4 +1,4 @@
-imageName: intelanalytics/ipex-llm-finetune-qlora-cpu-k8s:2.1.0-SNAPSHOT
+imageName: intelanalytics/ipex-llm-finetune-qlora-cpu-k8s:2.2.0-SNAPSHOT
 trainerNum: 2
 microBatchSize: 8
 enableGradientCheckpoint: false # true will save more memory but increase latency
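
These `imageName` values files feed the Kubernetes fine-tuning setup, so scripting the bump keeps them in sync with releases. A sketch using yq (v4 syntax); the file path below is hypothetical, since the diff does not show it:

```bash
# Hypothetical path; yq v4 edits the imageName key in place.
yq -i '.imageName = "intelanalytics/ipex-llm-finetune-qlora-cpu-k8s:2.2.0-SNAPSHOT"' \
  kubernetes/values.yaml
```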

=== File 7 of 15 (filename not shown) ===

@@ -19,7 +19,7 @@ With this docker image, we can use all [ipex-llm finetune examples on Intel GPU]
 You can download directly from Dockerhub like:
 ```bash
-docker pull intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-finetune-xpu:2.2.0-SNAPSHOT
 ```
 Or build the image from source:
@@ -31,7 +31,7 @@ export HTTPS_PROXY=your_https_proxy
 docker build \
   --build-arg http_proxy=${HTTP_PROXY} \
   --build-arg https_proxy=${HTTPS_PROXY} \
-  -t intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT \
+  -t intelanalytics/ipex-llm-finetune-xpu:2.2.0-SNAPSHOT \
   -f ./Dockerfile .
 ```
@@ -55,7 +55,7 @@ docker run -itd \
   -v $BASE_MODE_PATH:/model \
   -v $DATA_PATH:/data/alpaca-cleaned \
   --shm-size="16g" \
-  intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT
+  intelanalytics/ipex-llm-finetune-xpu:2.2.0-SNAPSHOT
 ```
 The download and mount of base model and data to a docker container demonstrates a standard fine-tuning process. You can skip this step for a quick start, and in this way, the fine-tuning codes will automatically download the needed files:
@@ -72,7 +72,7 @@ docker run -itd \
   -e http_proxy=${HTTP_PROXY} \
   -e https_proxy=${HTTPS_PROXY} \
   --shm-size="16g" \
-  intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT
+  intelanalytics/ipex-llm-finetune-xpu:2.2.0-SNAPSHOT
 ```
 However, we do recommend you to handle them manually, because the download can be blocked by Internet access and Huggingface authentication etc. according to different environment, and the manual method allows you to fine-tune in a custom way (with different base model and dataset).

=== File 8 of 15 (filename not shown) ===

@@ -6,7 +6,7 @@ docker build \
   --build-arg http_proxy=.. \
   --build-arg https_proxy=.. \
   --build-arg no_proxy=.. \
-  --rm --no-cache -t intelanalytics/ipex-llm-cpu:2.1.0-SNAPSHOT .
+  --rm --no-cache -t intelanalytics/ipex-llm-cpu:2.2.0-SNAPSHOT .
 ```
@@ -16,7 +16,7 @@ docker build \
 An example could be:
 ```bash
 #/bin/bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-cpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-cpu:2.2.0-SNAPSHOT
 sudo docker run -itd \
   --net=host \
@@ -41,7 +41,7 @@ You can download models and bind the model directory from host machine to contai
 Here is an example:
 ```bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-cpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-cpu:2.2.0-SNAPSHOT
 export MODEL_PATH=/home/llm/models
 sudo docker run -itd \

=== File 9 of 15 (filename not shown) ===

@@ -6,7 +6,7 @@ docker build \
   --build-arg http_proxy=.. \
   --build-arg https_proxy=.. \
   --build-arg no_proxy=.. \
-  --rm --no-cache -t intelanalytics/ipex-llm-xpu:2.1.0-SNAPSHOT .
+  --rm --no-cache -t intelanalytics/ipex-llm-xpu:2.2.0-SNAPSHOT .
 ```
@@ -17,7 +17,7 @@ To map the `xpu` into the container, you need to specify `--device=/dev/dri` whe
 An example could be:
 ```bash
 #/bin/bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-xpu:2.2.0-SNAPSHOT
 sudo docker run -itd \
   --net=host \
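
With the container started and `--device=/dev/dri` mapped, a quick hedged check that the GPU is visible from inside; this assumes the image ships oneAPI's `sycl-ls` tool and that the container was named `my_container`:

```bash
# Hypothetical check: list SYCL devices from inside the container to
# confirm the /dev/dri mapping worked; sycl-ls availability is assumed.
sudo docker exec -it my_container sycl-ls
```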

=== File 10 of 15 (filename not shown) ===

@@ -1,4 +1,4 @@
-FROM intelanalytics/ipex-llm-cpu:2.1.0-SNAPSHOT
+FROM intelanalytics/ipex-llm-cpu:2.2.0-SNAPSHOT
 ARG http_proxy
 ARG https_proxy

=== File 11 of 15 (filename not shown) ===

@@ -6,7 +6,7 @@ docker build \
   --build-arg http_proxy=.. \
   --build-arg https_proxy=.. \
   --build-arg no_proxy=.. \
-  --rm --no-cache -t intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT .
+  --rm --no-cache -t intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT .
 ```
 ### Use the image for doing cpu serving
@@ -16,7 +16,7 @@ You could use the following bash script to start the container. Please be noted
 ```bash
 #/bin/bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT
 sudo docker run -itd \
   --net=host \

=== File 12 of 15 (filename not shown) ===

@@ -2,7 +2,7 @@
 ## Image
-To deploy IPEX-LLM-serving cpu in Kubernetes environment, please use this image: `intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT`
+To deploy IPEX-LLM-serving cpu in Kubernetes environment, please use this image: `intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT`
 ## Before deployment
@@ -73,7 +73,7 @@ spec:
       dnsPolicy: "ClusterFirst"
       containers:
       - name: fastchat-controller # fixed
-        image: intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT
+        image: intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT
         imagePullPolicy: IfNotPresent
         env:
         - name: CONTROLLER_HOST # fixed
@@ -146,7 +146,7 @@ spec:
       dnsPolicy: "ClusterFirst"
       containers:
      - name: fastchat-worker # fixed
-        image: intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT
+        image: intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT
         imagePullPolicy: IfNotPresent
         env:
         - name: CONTROLLER_HOST # fixed
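
For a cluster already running these manifests, the new tag can also be rolled out without re-applying the YAML. A hedged sketch; the Deployment names are assumptions (the diff does not show the resource names), while the container names come from the manifest above:

```bash
# Hypothetical in-place rollout; deployment names are assumed to match
# the container names shown in the manifest above.
kubectl set image deployment/fastchat-controller \
  fastchat-controller=intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT
kubectl set image deployment/fastchat-worker \
  fastchat-worker=intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT
```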

=== File 13 of 15 (filename not shown) ===

@@ -24,7 +24,7 @@ spec:
       dnsPolicy: "ClusterFirst"
       containers:
       - name: fastchat-controller # fixed
-        image: intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT
+        image: intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT
         imagePullPolicy: IfNotPresent
         env:
         - name: CONTROLLER_HOST # fixed
@@ -91,7 +91,7 @@ spec:
       dnsPolicy: "ClusterFirst"
       containers:
      - name: fastchat-worker # fixed
-        image: intelanalytics/ipex-llm-serving-cpu:2.1.0-SNAPSHOT
+        image: intelanalytics/ipex-llm-serving-cpu:2.2.0-SNAPSHOT
         imagePullPolicy: IfNotPresent
         env:
         - name: CONTROLLER_HOST # fixed

=== File 14 of 15 (filename not shown) ===

@@ -17,7 +17,7 @@ RUN cd /tmp/ && \
     mv /tmp/torch-ccl/dist/oneccl_bind_pt-2.1.100+xpu-cp311-cp311-linux_x86_64.whl /tmp/
-FROM intelanalytics/ipex-llm-xpu:2.1.0-SNAPSHOT
+FROM intelanalytics/ipex-llm-xpu:2.2.0-SNAPSHOT
 ARG http_proxy
 ARG https_proxy

=== File 15 of 15 (filename not shown) ===

@@ -6,7 +6,7 @@ docker build \
   --build-arg http_proxy=.. \
   --build-arg https_proxy=.. \
   --build-arg no_proxy=.. \
-  --rm --no-cache -t intelanalytics/ipex-llm-serving-xpu:2.1.0-SNAPSHOT .
+  --rm --no-cache -t intelanalytics/ipex-llm-serving-xpu:2.2.0-SNAPSHOT .
 ```
@@ -18,7 +18,7 @@ To map the `xpu` into the container, you need to specify `--device=/dev/dri` whe
 An example could be:
 ```bash
 #/bin/bash
-export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-xpu:2.1.0-SNAPSHOT
+export DOCKER_IMAGE=intelanalytics/ipex-llm-serving-xpu:2.2.0-SNAPSHOT
 sudo docker run -itd \
   --net=host \