
IPEX-LLM Examples on Intel GPU

This folder contains examples of running IPEX-LLM on Intel GPU:

  • Applications: running LLM applications (such as autogen) on IPEX-LLM
  • HuggingFace: running HuggingFace models on IPEX-LLM (using the standard AutoModel APIs), including language models and multimodal models
  • LLM-Finetuning: running finetuning (such as LoRA, QLoRA, QA-LoRA, etc.) using IPEX-LLM on Intel GPUs
  • vLLM-Serving: running the vLLM serving framework on Intel GPUs (with IPEX-LLM low-bit optimized models)
  • Deepspeed-AutoTP: running distributed inference using DeepSpeed AutoTP (with IPEX-LLM low-bit optimized models) on Intel GPUs
  • Deepspeed-AutoTP-FastAPI: running distributed inference using DeepSpeed AutoTP and serving with FastAPI (with IPEX-LLM low-bit optimized models) on Intel GPUs
  • Pipeline-Parallel-Inference: running IPEX-LLM optimized low-bit models vertically partitioned across multiple Intel GPUs
  • Pipeline-Parallel-Serving: running IPEX-LLM serving with FastAPI on multiple Intel GPUs in pipeline-parallel fashion
  • Lightweight-Serving: running IPEX-LLM serving with FastAPI on one Intel GPU in a lightweight way
  • LangChain: running LangChain applications on IPEX-LLM
  • PyTorch-Models: running any PyTorch model on IPEX-LLM (with "one-line code change")
  • Speculative-Decoding: running any Hugging Face Transformers model with self-speculative decoding on Intel GPUs
  • ModelScope-Models: running ModelScope models with IPEX-LLM on Intel GPUs
  • Long-Context: running long-context generation with IPEX-LLM on Intel Arc™ A770 Graphics
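As the HuggingFace and PyTorch-Models entries above suggest, most examples follow the same pattern: load a model through IPEX-LLM's drop-in replacement for the standard AutoModel APIs with low-bit optimization enabled, then move it to the Intel GPU (`xpu`) device. Below is a minimal sketch of that pattern; it assumes `ipex-llm[xpu]` is installed on a machine with a working Intel GPU runtime, and the model id is purely illustrative — see the individual example folders for the exact, tested scripts.

```python
# Sketch only — requires ipex-llm and an Intel GPU; model id is illustrative.
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # any Hugging Face model id or local path

# load_in_4bit=True applies IPEX-LLM low-bit (INT4) optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to('xpu')  # move the optimized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to('xpu')
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The "one-line code change" mentioned for PyTorch-Models refers to swapping the import of the AutoModel class for the IPEX-LLM one; the rest of the Hugging Face workflow stays unchanged.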

System Support

1. Linux:

Hardware:

  • Intel Arc™ A-Series Graphics
  • Intel Data Center GPU Flex Series
  • Intel Data Center GPU Max Series

Operating System:

  • Ubuntu 20.04 or later (Ubuntu 22.04 is preferred)

2. Windows

Hardware:

  • Intel iGPU and dGPU

Operating System:

  • Windows 10/11, with or without WSL

Requirements

To apply Intel GPU acceleration, there are several steps for tools installation and environment preparation. See the GPU installation guide for Linux or Windows for more details.
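For reference, a typical Linux environment setup looks roughly like the following; the exact index URL and package extras can change between releases, so treat this as a sketch and follow the linked installation guide for the authoritative commands.

```shell
# Sketch of a typical setup — verify against the official installation guide.
conda create -n llm python=3.11
conda activate llm

# Install ipex-llm with Intel GPU (xpu) support from Intel's wheel index
pip install --pre --upgrade ipex-llm[xpu] \
    --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```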