# Qwen2-VL

In this directory, you will find examples of how you could apply IPEX-LLM INT4 optimizations on Qwen2-VL models on [Intel GPUs](../../../README.md). For illustration purposes, we utilize [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) (or [Qwen/Qwen2-VL-7B-Instruct](https://www.modelscope.cn/models/Qwen/Qwen2-VL-7B-Instruct) for ModelScope) as a reference Qwen2-VL model.

## 0. Requirements

To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

## Example: Predict Tokens using `generate()` API

In the example [generate.py](./generate.py), we show a basic use case for a Qwen2-VL model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.

### 1. Install

#### 1.1 Installation on Linux

We suggest using conda to manage the environment:

```bash
conda create -n llm python=3.11
conda activate llm

# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install transformers==4.45.0  # install transformers which supports Qwen2-VL
pip install accelerate==0.33.0
pip install qwen_vl_utils
pip install "trl<0.12.0"

# [optional] only needed if you would like to use ModelScope as model hub
pip install modelscope[datasets]==1.21.1
```

#### 1.2 Installation on Windows

We suggest using conda to manage the environment:

```bash
conda create -n llm python=3.11 libuv
conda activate llm

# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install transformers==4.45.0  # install transformers which supports Qwen2-VL
pip install accelerate==0.33.0
pip install qwen_vl_utils
pip install "trl<0.12.0"

# [optional] only needed if you would like to use ModelScope as model hub
pip install modelscope[datasets]==1.21.1
```
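After installation, you can optionally run a quick sanity check to confirm that the XPU build of PyTorch can see your Intel GPU. This is a minimal sketch, assuming the `llm` environment created above is active:

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the XPU backend with PyTorch

print(ipex.__version__)           # expected to be 2.1.10+xpu per the install commands above
print(torch.xpu.is_available())   # True if an Intel GPU is visible to PyTorch
print(torch.xpu.device_count())   # number of visible Intel GPUs
```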
### 2. Configure OneAPI Environment Variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This is a required step on Linux for APT or offline-installed oneAPI. Skip this step for PIP-installed oneAPI.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations

For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.

#### 3.1 Configurations for Linux

For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series:

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```
For Intel Data Center GPU Max Series:

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```

> Note: `libtcmalloc.so` can be installed by `conda install -c conda-forge -y gperftools=2.10`.
For Intel iGPU:

```bash
export SYCL_CACHE_PERSISTENT=1
```
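If you are unsure which of the above device categories applies to your machine, the oneAPI `sycl-ls` utility (available once the oneAPI environment from step 2 has been sourced) lists the compute devices visible to the SYCL runtime:

```bash
# list the CPUs/GPUs visible to the SYCL runtime; Intel GPUs show up under the Level-Zero backend
sycl-ls
```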
#### 3.2 Configurations for Windows
For Intel iGPU and Intel Arc™ A-Series Graphics:

```cmd
set SYCL_CACHE_PERSISTENT=1
```

> [!NOTE]
> The first time each model runs on Intel iGPU/Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.

### 4. Running examples

```bash
# for Hugging Face model hub
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT --image-url-or-path IMAGE_URL_OR_PATH

# for ModelScope model hub
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT --image-url-or-path IMAGE_URL_OR_PATH --modelscope
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the **Hugging Face** or **ModelScope** repo id for the Qwen2-VL model (e.g. `Qwen/Qwen2-VL-7B-Instruct`) to be downloaded, or the path to the checkpoint folder. It defaults to `'Qwen/Qwen2-VL-7B-Instruct'`.
- `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the input image to run inference on. It defaults to `'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'`.
- `--prompt PROMPT`: argument defining the prompt to be used for inference (with the chat prompt format applied). It defaults to `'Describe this image.'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
- `--modelscope`: use **ModelScope** as the model hub instead of **Hugging Face**.
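For reference, below is a minimal sketch of what such a script can look like. It is a sketch rather than a verbatim copy of [generate.py](./generate.py): it assumes ipex-llm's `optimize_model` API for the INT4 conversion (with the `low_bit` and `modules_to_not_convert` options, keeping Qwen2-VL's vision tower, named `visual`, unconverted) together with the standard Qwen2-VL preprocessing helpers from `qwen_vl_utils`:

```python
import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from qwen_vl_utils import process_vision_info
from ipex_llm import optimize_model

model_path = "Qwen/Qwen2-VL-7B-Instruct"

# Load in half precision, then let ipex-llm convert the language model to INT4.
# modules_to_not_convert=["visual"] (an assumed option here) keeps the vision
# encoder in higher precision, which is the usual practice for VLMs.
model = Qwen2VLForConditionalGeneration.from_pretrained(model_path, torch_dtype=torch.float16)
model = optimize_model(model, low_bit="sym_int4", modules_to_not_convert=["visual"])
model = model.half().to("xpu")

processor = AutoProcessor.from_pretrained(model_path)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg"},
        {"type": "text", "text": "Describe this image."},
    ],
}]

# Standard Qwen2-VL preprocessing: render the chat template, then collect the images.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to("xpu")

with torch.inference_mode():
    generated_ids = model.generate(**inputs, max_new_tokens=32)

# Strip the prompt tokens from each sequence before decoding.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```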
#### Sample Output

##### [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)

```log
Inference time: xxxx s
-------------------- Input Image --------------------
http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg
-------------------- Prompt --------------------
图片里有什么?
-------------------- Output --------------------
图片里有一个小女孩,她穿着粉红色的条纹连衣裙,手里拿着一个白色的毛绒玩具。背景中有一堵石墙和一些
```

```log
Inference time: xxxx s
-------------------- Input Image --------------------
http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg
-------------------- Prompt --------------------
What is in the image?
-------------------- Output --------------------
The image shows a young child holding a small white teddy bear dressed in a pink outfit. The child is standing in front of a stone wall with red flowers
```

The sample input image (fetched from the [COCO dataset](https://cocodataset.org/#explore?id=264959)) is: