ipex-llm/python/llm/example/GPU/HuggingFace/Multimodal
| Name | Latest commit message | Commit date |
| --- | --- | --- |
| distil-whisper | Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445) | 2024-11-27 11:16:36 +08:00 |
| glm-4v | Polish Readme for ModelScope-related examples (#12603) | 2024-12-26 10:52:47 +08:00 |
| glm-edge-v | Add GLM4-Edge-V GPU example (#12596) | 2024-12-27 09:40:29 +08:00 |
| internvl2 | Add --modelscope option for glm-v4 MiniCPM-V-2_6 glm-edge and internvl2 (#12583) | 2024-12-20 13:54:17 +08:00 |
| MiniCPM-Llama3-V-2_5 | Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445) | 2024-11-27 11:16:36 +08:00 |
| MiniCPM-o-2_6 | Add GPU example for MiniCPM-o-2_6 (#12735) | 2025-01-23 16:10:19 +08:00 |
| MiniCPM-V | Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445) | 2024-11-27 11:16:36 +08:00 |
| MiniCPM-V-2 | Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445) | 2024-11-27 11:16:36 +08:00 |
| MiniCPM-V-2_6 | Polish Readme for ModelScope-related examples (#12603) | 2024-12-26 10:52:47 +08:00 |
| phi-3-vision | Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445) | 2024-11-27 11:16:36 +08:00 |
| qwen-vl | Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445) | 2024-11-27 11:16:36 +08:00 |
| qwen2-audio | Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445) | 2024-11-27 11:16:36 +08:00 |
| qwen2-vl | Add Qwen2-VL HF GPU example with ModelScope Support (#12606) | 2025-01-13 15:42:04 +08:00 |
| StableDiffusion | Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445) | 2024-11-27 11:16:36 +08:00 |
| voiceassistant | Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445) | 2024-11-27 11:16:36 +08:00 |
| whisper | Remove env variable BIGDL_LLM_XMX_DISABLED in documentation (#12445) | 2024-11-27 11:16:36 +08:00 |
| README.md | Update GPU HF-Transformers example structure (#11526) | 2024-07-08 17:58:06 +08:00 |

Running HuggingFace multimodal models using IPEX-LLM on Intel GPU

This folder contains examples of running multimodal models with IPEX-LLM on Intel GPUs. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
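Most of the examples above share the same loading pattern: the HuggingFace checkpoint is loaded through `ipex_llm.transformers` with low-bit optimization applied at load time, and the model is then moved to the Intel GPU (`xpu`) device. The sketch below illustrates that pattern only; the model path is a placeholder and the actual inference entry point (e.g. a `chat` helper or a processor-based `generate` call) differs per model, so follow the README in each folder for the full, tested script.

```python
# Minimal sketch of the common loading pattern used by these GPU examples.
# The checkpoint below is a placeholder; see each example folder for the
# exact script and the model-specific inference call.
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModel

model_path = "openbmb/MiniCPM-V-2_6"  # placeholder: any supported multimodal checkpoint

# Load with IPEX-LLM low-bit optimization (sym_int4 here)
model = AutoModel.from_pretrained(model_path,
                                  load_in_low_bit="sym_int4",
                                  optimize_model=True,
                                  trust_remote_code=True,
                                  use_cache=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run in half precision on the Intel GPU ('xpu') device
model = model.half().to("xpu")

# From here, generation is model-specific (e.g. model.chat(...) for MiniCPM-V,
# processor-based generate(...) for Qwen2-VL); refer to the per-model README.
```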