diff --git a/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/README.md b/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/README.md
new file mode 100644
index 00000000..8d88fbb2
--- /dev/null
+++ b/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/README.md
@@ -0,0 +1,135 @@
+# MiniCPM-Llama3-V-2_5
+In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations to MiniCPM-Llama3-V-2_5 models on [Intel GPUs](../../../README.md). For illustration purposes, we use [openbmb/MiniCPM-Llama3-V-2_5](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5) as the reference MiniCPM-Llama3-V-2_5 model.
+
+## 0. Requirements
+To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.
+
+## Example: Predict Tokens using `chat()` API
+In the example [generate.py](./generate.py), we show a basic use case for a MiniCPM-Llama3-V-2_5 model to predict the next N tokens using the `chat()` API, with IPEX-LLM INT4 optimizations on Intel GPUs.
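+
+At its core, the script loads the model with INT4 optimization, moves it to the `xpu` device, and calls `chat()` with an image and a message list. Below is a minimal sketch of that call, assuming `model`, `tokenizer`, and a PIL `image` have been prepared as in [generate.py](./generate.py):
+
+```python
+# Minimal sketch; see generate.py for the complete, runnable script.
+msgs = [{'role': 'user', 'content': 'What is in the image?'}]
+res = model.chat(image=image,      # a PIL.Image in RGB mode
+                 msgs=msgs,
+                 context=None,
+                 tokenizer=tokenizer,
+                 sampling=False)
+print(res)
+```
+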
+### 1. Install
+#### 1.1 Installation on Linux
+We suggest using conda to manage the environment:
+```bash
+conda create -n llm python=3.11
+conda activate llm
+# the command below will install intel_extension_for_pytorch==2.1.10+xpu by default
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+
+pip install transformers==4.41.0 trl
+```
+
+#### 1.2 Installation on Windows
+We suggest using conda to manage the environment:
+```bash
+conda create -n llm python=3.11 libuv
+conda activate llm
+
+# the command below will install intel_extension_for_pytorch==2.1.10+xpu by default
+pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+
+pip install transformers==4.41.0 trl
+```
+
+### 2. Configure OneAPI environment variables for Linux
+
+> [!NOTE]
+> Skip this step if you are running on Windows.
+
+This is a required step on Linux when oneAPI was installed via APT or the offline installer. Skip this step if oneAPI was installed via pip.
+
+```bash
+source /opt/intel/oneapi/setvars.sh
+```
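+
+Afterwards, you can optionally confirm that your GPU is visible to the SYCL runtime (assuming the oneAPI tools are on your `PATH`):
+
+```bash
+sycl-ls
+```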
+
+### 3. Runtime Configurations
+For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
+#### 3.1 Configurations for Linux
+
+**For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series**
+
+```bash
+export USE_XETLA=OFF
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+export SYCL_CACHE_PERSISTENT=1
+```
+
+**For Intel Data Center GPU Max Series**
+
+```bash
+export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+export SYCL_CACHE_PERSISTENT=1
+export ENABLE_SDP_FUSION=1
+```
+> [!NOTE]
+> `libtcmalloc.so` can be installed with `conda install -c conda-forge -y gperftools=2.10`.
+
+**For Intel iGPU**
+
+```bash
+export SYCL_CACHE_PERSISTENT=1
+export BIGDL_LLM_XMX_DISABLED=1
+```
+
+#### 3.2 Configurations for Windows
+
+**For Intel iGPU**
+
+```cmd
+set SYCL_CACHE_PERSISTENT=1
+set BIGDL_LLM_XMX_DISABLED=1
+```
+
+**For Intel Arc™ A-Series Graphics**
+
+```cmd
+set SYCL_CACHE_PERSISTENT=1
+```
+
+> [!NOTE]
+> The first time each model runs on an Intel iGPU, an Intel Arc™ A300-Series GPU, or an Intel Arc™ Pro A60, it may take several minutes to compile.
+
+### 4. Running examples
+
+```bash
+python ./generate.py --prompt 'What is in the image?'
+```
+
+Arguments info:
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id of the MiniCPM-Llama3-V-2_5 model (e.g. `openbmb/MiniCPM-Llama3-V-2_5`) to be downloaded, or the path to a Hugging Face checkpoint folder. It defaults to `'openbmb/MiniCPM-Llama3-V-2_5'`.
+- `--image-url-or-path IMAGE_URL_OR_PATH`: argument defining the image to be inferred. It defaults to `'http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg'`.
+- `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to `'What is in the image?'`.
+- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`. A combined usage example follows this list.
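+
+For example, you can combine these arguments to point the script at a local checkpoint and a local image (the paths below are illustrative placeholders; on Windows, put the command on a single line without the backslashes):
+
+```bash
+python ./generate.py \
+  --repo-id-or-model-path /path/to/MiniCPM-Llama3-V-2_5 \
+  --image-url-or-path ./sample.jpg \
+  --prompt 'Describe the image in detail.' \
+  --n-predict 64
+```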
+
+#### Sample Output
+
+#### [openbmb/MiniCPM-Llama3-V-2_5](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5)
+
+```log
+Inference time: xxxx s
+-------------------- Input --------------------
+http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg
+-------------------- Prompt --------------------
+What is in the image?
+-------------------- Output --------------------
+The image features a young child holding a white teddy bear. The teddy bear is dressed in a pink outfit. The child appears to be outdoors, with a stone wall and some red flowers in the background.
+```
+
+The sample input image (fetched from the [COCO dataset](https://cocodataset.org/#explore?id=264959)) is:
+
+[http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg](http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg)
+
diff --git a/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/generate.py b/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/generate.py
new file mode 100644
index 00000000..e1bde9ee
--- /dev/null
+++ b/python/llm/example/GPU/HuggingFace/Multimodal/MiniCPM-Llama3-V-2_5/generate.py
@@ -0,0 +1,84 @@
+#
+# Copyright 2016 The BigDL Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import os
+import time
+import argparse
+import requests
+from PIL import Image
+from ipex_llm.transformers import AutoModel
+from transformers import AutoTokenizer
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description='Predict Tokens using `chat()` API for openbmb/MiniCPM-Llama3-V-2_5 model')
+    parser.add_argument('--repo-id-or-model-path', type=str, default="openbmb/MiniCPM-Llama3-V-2_5",
+                        help='The huggingface repo id for the openbmb/MiniCPM-Llama3-V-2_5 model to be downloaded'
+                             ', or the path to the huggingface checkpoint folder')
+    parser.add_argument('--image-url-or-path', type=str,
+                        default='http://farm6.staticflickr.com/5268/5602445367_3504763978_z.jpg',
+                        help='The URL or path to the image to infer')
+    parser.add_argument('--prompt', type=str, default="What is in the image?",
+                        help='Prompt to infer')
+    parser.add_argument('--n-predict', type=int, default=32,
+                        help='Max tokens to predict')
+
+    args = parser.parse_args()
+    model_path = args.repo_id_or_model_path
+    image_path = args.image_url_or_path
+
+    # Load the model in 4-bit,
+    # which converts the relevant layers in the model into INT4 format.
+    # For Windows users running LLMs on Intel iGPUs, we recommend setting `cpu_embedding=True`
+    # in the from_pretrained function, which lets the memory-intensive embedding layer
+    # run on the CPU instead of the iGPU.
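+    # An illustrative variant for Windows iGPU (not used by default here):
+    #   model = AutoModel.from_pretrained(model_path, load_in_4bit=True, cpu_embedding=True,
+    #                                     trust_remote_code=True, modules_to_not_convert=["vpm", "resampler"])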
+    model = AutoModel.from_pretrained(model_path, 
+                                      load_in_4bit=True,
+                                      optimize_model=False,
+                                      trust_remote_code=True,
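+                                      # keep the vision module ("vpm") and resampler out of INT4 conversion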
+                                      modules_to_not_convert=["vpm", "resampler"],
+                                      use_cache=True)
+    model = model.float().to(device='xpu')
+    tokenizer = AutoTokenizer.from_pretrained(model_path,
+                                              trust_remote_code=True)
+    model.eval()
+
+    query = args.prompt
+    if os.path.exists(image_path):
+        image = Image.open(image_path).convert('RGB')
+    else:
+        image = Image.open(requests.get(image_path, stream=True).raw).convert('RGB')
+
+    # Generate predicted tokens
+    # the prompt format here follows https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/blob/main/README.md
+    msgs = [{'role': 'user', 'content': query}]
+    st = time.time()
+    res = model.chat(
+        image=image,
+        msgs=msgs,
+        context=None,
+        tokenizer=tokenizer,
+        sampling=False,
+        temperature=0.7
+    )
+    end = time.time()
+    print(f'Inference time: {end-st} s')
+    print('-'*20, 'Input', '-'*20)
+    print(image_path)
+    print('-'*20, 'Prompt', '-'*20)
+    print(args.prompt)
+    print('-'*20, 'Output', '-'*20)
+    print(res)