Add phi-3-vision example (#11156)
* Add phi-3-vision example (HF-Automodels)
* fix
* fix
* fix
* Add phi-3-vision CPU example (HF-Automodels)
* add in readme
* fix
* fix
* fix
* fix
* use fp8 for gpu example
* remove eval
parent 93146b9433
commit dcbf4d3d0a

6 changed files with 427 additions and 0 deletions
README.md
@@ -198,6 +198,7 @@ Over 50 models have been optimized/verified on `ipex-llm`, including *LLaMA/LLaM…
 | Ziya-Coding-34B-v1.0 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/ziya) | |
 | Phi-2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-2) |
 | Phi-3 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3) |
+| Phi-3-vision | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3-vision) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3-vision) |
 | Yuan2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/yuan2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/yuan2) |
 | Gemma | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/gemma) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/gemma) |
 | DeciLM-7B | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/deciLM-7b) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/deciLM-7b) |
@@ -555,6 +555,13 @@ Verified Models
         <td>
           <a href="https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3">link</a></td>
       </tr>
+      <tr>
+        <td>Phi-3-vision</td>
+        <td>
+          <a href="https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3-vision">link</a></td>
+        <td>
+          <a href="https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3-vision">link</a></td>
+      </tr>
       <tr>
         <td>Yuan2</td>
         <td>
python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3-vision/README.md
@@ -0,0 +1,90 @@

# phi-3-vision

In this directory, you will find examples of how you can apply IPEX-LLM INT8 optimizations on phi-3-vision models. For illustration purposes, we use [microsoft/Phi-3-vision-128k-instruct](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) as the reference phi-3-vision model.

## 0. Requirements
To run these examples with IPEX-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a phi-3-vision model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT8 optimizations.
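At its core, the example loads the model through `ipex_llm.transformers.AutoModelForCausalLM` with `load_in_low_bit="sym_int8"`; the snippet below is a minimal sketch of that call (see the full script for the processor setup and generation loop):

```python
from ipex_llm.transformers import AutoModelForCausalLM

# Load phi-3-vision with IPEX-LLM INT8 optimization; the relevant
# linear layers are converted to INT8 format at load time.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-vision-128k-instruct",
    trust_remote_code=True,              # phi-3-vision ships custom modeling code
    load_in_low_bit="sym_int8",          # symmetric INT8 quantization
    _attn_implementation="eager",        # required for phi-3-vision
    modules_to_not_convert=["vision_embed_tokens"])  # keep vision embeddings unconverted
```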
### 1. Install
We suggest using conda to manage the environment:

On Linux:

```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm

# install ipex-llm with the 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu

pip install pillow torchvision
pip install transformers==4.37.0
```

On Windows:

```cmd
conda create -n llm python=3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all]

pip install pillow torchvision
pip install transformers==4.37.0
```

### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --image-url-or-path IMAGE_URL_OR_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id of the phi-3-vision model (e.g. `microsoft/Phi-3-vision-128k-instruct`) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'microsoft/Phi-3-vision-128k-instruct'`.
- `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the image to run inference on. It defaults to `'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'`.
- `--prompt PROMPT`: argument defining the prompt to run inference with (using the integrated chat prompt format). It defaults to `'What is in the image?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
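For instance, to run against a local image with a longer generation budget (the image path below is only a placeholder):

```bash
python ./generate.py --image-url-or-path ./demo.jpg --prompt 'What is in the image?' --n-predict 64
```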

> **Note**: When loading the model in 8-bit, IPEX-LLM converts the linear layers in the model into INT8 format. In theory, an *X*B model saved in 16-bit will require approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference.
>
> Please select the appropriate size of the phi-3-vision model based on the capabilities of your machine.
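As a quick worked example of that rule of thumb (assuming the roughly 4.2B parameter count reported for Phi-3-vision-128k-instruct; check the model card for the exact figure):

```python
# Back-of-the-envelope memory estimate from the note above.
params_b = 4.2               # model size in billions of parameters (approximate)
load_gb = 2 * params_b       # ~2*X GB to load the 16-bit checkpoint
infer_gb = 0.5 * params_b    # ~0.5*X GB for further inference after conversion
print(f"load ~{load_gb:.1f} GB, inference ~{infer_gb:.1f} GB")  # load ~8.4 GB, inference ~2.1 GB
```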
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:

```cmd
python ./generate.py
```

#### 2.2 Server
For optimal performance on servers, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and run the example with all the physical cores of a single socket.

E.g. on Linux,

```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py
```

#### 2.3 Sample Output
#### [microsoft/Phi-3-vision-128k-instruct](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)

```log
Inference time: xxxx s
-------------------- Prompt --------------------
Message: [{'role': 'user', 'content': '<|image_1|>\nWhat is in the image?'}]
Image link/path: http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg
-------------------- Output --------------------


What is in the image?
The image shows a child holding a white teddy bear dressed in a pink dress.
```

The sample input image (fetched from the [COCO dataset](https://cocodataset.org/#explore?id=264959)) is:

<a href="http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg"><img width=400px src="http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg" ></a>
python/llm/example/CPU/HF-Transformers-AutoModels/Model/phi-3-vision/generate.py
@@ -0,0 +1,88 @@

#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import os
import time
import torch
import argparse
import requests

from PIL import Image
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoProcessor


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phi-3-vision model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="microsoft/Phi-3-vision-128k-instruct",
                        help='The huggingface repo id for the phi-3-vision model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--image-url-or-path', type=str,
                        default="http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg",
                        help='The URL or path to the image to infer')
    parser.add_argument('--prompt', type=str, default="What is in the image?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path
    image_path = args.image_url_or_path

    # Load the model in INT8, which converts the relevant layers into INT8 format.
    # We use INT8 here instead of INT4 for better output quality.
    # `_attn_implementation="eager"` is required for phi-3-vision.
    # `modules_to_not_convert=["vision_embed_tokens"]` is for acceleration and is optional.
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 load_in_low_bit="sym_int8",
                                                 _attn_implementation="eager",
                                                 modules_to_not_convert=["vision_embed_tokens"])

    # Load processor
    processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

    # The message formatting follows
    # https://huggingface.co/microsoft/Phi-3-vision-128k-instruct#sample-inference-code
    messages = [
        {"role": "user", "content": "<|image_1|>\n{prompt}".format(prompt=args.prompt)},
    ]
    prompt = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

    # Load the image from a local path if it exists, otherwise fetch it by URL
    if os.path.exists(image_path):
        image = Image.open(image_path)
    else:
        image = Image.open(requests.get(image_path, stream=True).raw)

    # Generate predicted tokens
    with torch.inference_mode():
        inputs = processor(prompt, [image], return_tensors="pt")
        st = time.time()
        output = model.generate(**inputs,
                                eos_token_id=processor.tokenizer.eos_token_id,
                                num_beams=1,
                                do_sample=False,
                                max_new_tokens=args.n_predict,
                                temperature=0.0)
        end = time.time()
        print(f'Inference time: {end-st} s')
        output_str = processor.decode(output[0],
                                      skip_special_tokens=True,
                                      clean_up_tokenization_spaces=False)
        print('-'*20, 'Prompt', '-'*20)
        print(f'Message: {messages}')
        print(f'Image link/path: {image_path}')
        print('-'*20, 'Output', '-'*20)
        print(output_str)
python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3-vision/README.md
@@ -0,0 +1,136 @@

# phi-3-vision

In this directory, you will find examples of how you can apply IPEX-LLM FP8 optimizations on phi-3-vision models on [Intel GPUs](../../../README.md). For illustration purposes, we use [microsoft/Phi-3-vision-128k-instruct](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) as the reference phi-3-vision model.

## 0. Requirements
To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a phi-3-vision model to predict the next N tokens using the `generate()` API, with IPEX-LLM FP8 optimizations on Intel GPUs.
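Compared with the CPU example, the key differences are the low-bit format and the device placement: the model is loaded with `load_in_low_bit="fp8"` and moved to the XPU device. The snippet below is a minimal sketch of that call (see the full script for the processor setup and generation loop):

```python
from ipex_llm.transformers import AutoModelForCausalLM

# Load phi-3-vision with IPEX-LLM FP8 optimization and run it on an Intel GPU.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-vision-128k-instruct",
    trust_remote_code=True,               # phi-3-vision ships custom modeling code
    load_in_low_bit="fp8",                # FP8 weights; 'sym_int4', 'sym_int8' and 'fp6' also work
    _attn_implementation="eager",         # required for phi-3-vision
    modules_to_not_convert=["vision_embed_tokens"])  # keep vision embeddings unconverted
model = model.half().to('xpu')            # half precision + XPU placement for acceleration
```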
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the environment:

```bash
conda create -n llm python=3.11
conda activate llm

# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install transformers==4.37.0
```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:

```bash
conda create -n llm python=3.11 libuv
conda activate llm

# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

pip install transformers==4.37.0
```

### 2. Configure oneAPI environment variables for Linux

> [!NOTE]
> Skip this step if you are running on Windows.

This is a required step on Linux for APT- or offline-installed oneAPI. Skip this step for pip-installed oneAPI.

```bash
source /opt/intel/oneapi/setvars.sh
```

### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.

#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1
```

> Note: `libtcmalloc.so` can be installed with `conda install -c conda-forge -y gperftools=2.10`.

</details>

<details>

<summary>For Intel iGPU</summary>

```bash
export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1
```

</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A-Series Graphics</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

> [!NOTE]
> The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60, it may take several minutes to compile.

### 4. Running examples

```
python ./generate.py --prompt 'What is in the image?'
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id of the phi-3-vision model (e.g. `microsoft/Phi-3-vision-128k-instruct`) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'microsoft/Phi-3-vision-128k-instruct'`.
- `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the image to run inference on. It defaults to `'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'`.
- `--prompt PROMPT`: argument defining the prompt to run inference with (using the integrated chat prompt format). It defaults to `'What is in the image?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
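For instance, to point at a locally downloaded checkpoint and a local image (both paths below are only placeholders):

```bash
python ./generate.py --repo-id-or-model-path ./Phi-3-vision-128k-instruct --image-url-or-path ./demo.jpg --n-predict 64
```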
#### Sample Output
#### [microsoft/Phi-3-vision-128k-instruct](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)

```log
Inference time: xxxx s
-------------------- Prompt --------------------
Message: [{'role': 'user', 'content': '<|image_1|>\nWhat is in the image?'}]
Image link/path: http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg
-------------------- Output --------------------


What is in the image?
The image shows a young girl holding a white teddy bear. She is wearing a pink dress with a heart on it. The background includes a stone
```

The sample input image (fetched from the [COCO dataset](https://cocodataset.org/#explore?id=264959)) is:

<a href="http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg"><img width=400px src="http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg" ></a>
python/llm/example/GPU/HF-Transformers-AutoModels/Model/phi-3-vision/generate.py
@@ -0,0 +1,105 @@

#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import os
import time
import torch
import argparse
import requests

from PIL import Image
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoProcessor


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phi-3-vision model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="microsoft/Phi-3-vision-128k-instruct",
                        help='The huggingface repo id for the phi-3-vision model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--image-url-or-path', type=str,
                        default="http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg",
                        help='The URL or path to the image to infer')
    parser.add_argument('--prompt', type=str, default="What is in the image?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path
    image_path = args.image_url_or_path

    # Load the model in FP8, which converts the relevant layers into FP8 format.
    # We use FP8 here instead of INT4 for better output quality; you could also try
    # `'sym_int4'` for INT4, `'sym_int8'` for INT8 and `'fp6'` for FP6.
    # `_attn_implementation="eager"` is required for phi-3-vision.
    # `modules_to_not_convert=["vision_embed_tokens"]` and `model = model.half()` are
    # for acceleration and are optional.
    #
    # When running LLMs on Intel iGPUs on Windows, we recommend setting `cpu_embedding=True`
    # in the from_pretrained function; this allows the memory-intensive embedding layer to
    # utilize the CPU instead of the iGPU.
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 load_in_low_bit="fp8",
                                                 _attn_implementation="eager",
                                                 modules_to_not_convert=["vision_embed_tokens"])
    model = model.half().to('xpu')

    # Load processor
    processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

    # The message formatting follows
    # https://huggingface.co/microsoft/Phi-3-vision-128k-instruct#sample-inference-code
    messages = [
        {"role": "user", "content": "<|image_1|>\n{prompt}".format(prompt=args.prompt)},
    ]
    prompt = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

    # Load the image from a local path if it exists, otherwise fetch it by URL
    if os.path.exists(image_path):
        image = Image.open(image_path)
    else:
        image = Image.open(requests.get(image_path, stream=True).raw)

    # Generate predicted tokens
    with torch.inference_mode():
        # The ipex_llm model needs a warmup run, after which inference time can be
        # measured accurately
        inputs = processor(prompt, [image], return_tensors="pt")
        inputs = inputs.to('xpu')
        output = model.generate(**inputs,
                                eos_token_id=processor.tokenizer.eos_token_id,
                                num_beams=1,
                                do_sample=False,
                                max_new_tokens=args.n_predict,
                                temperature=0.0)

        # start the timed inference run
        st = time.time()

        inputs = processor(prompt, [image], return_tensors="pt")
        inputs = inputs.to('xpu')
        output = model.generate(**inputs,
                                eos_token_id=processor.tokenizer.eos_token_id,
                                num_beams=1,
                                do_sample=False,
                                max_new_tokens=args.n_predict,
                                temperature=0.0)
        end = time.time()
        print(f'Inference time: {end-st} s')
        output_str = processor.decode(output[0],
                                      skip_special_tokens=True,
                                      clean_up_tokenization_spaces=False)
        print('-'*20, 'Prompt', '-'*20)
        print(f'Message: {messages}')
        print(f'Image link/path: {image_path}')
        print('-'*20, 'Output', '-'*20)
        print(output_str)