IPEX-LLM Examples on Intel GPU

This folder contains examples of running IPEX-LLM on Intel GPU:

  • Applications: running LLM applications (such as AutoGen) on IPEX-LLM
  • HuggingFace: running HuggingFace models on IPEX-LLM (using the standard AutoModel APIs), including language models and multimodal models; see the first sketch after this list
  • LLM-Finetuning: running finetuning (such as LoRA, QLoRA, QA-LoRA, etc.) using IPEX-LLM on Intel GPUs
  • vLLM-Serving: running the vLLM serving framework on Intel GPUs (with IPEX-LLM low-bit optimized models)
  • Deepspeed-AutoTP: running distributed inference using DeepSpeed AutoTP (with IPEX-LLM low-bit optimized models) on Intel GPUs
  • Deepspeed-AutoTP-FastAPI: running distributed inference using DeepSpeed AutoTP and serving it with FastAPI (with IPEX-LLM low-bit optimized models) on Intel GPUs
  • Pipeline-Parallel-Inference: running IPEX-LLM optimized low-bit models vertically partitioned across multiple Intel GPUs
  • Pipeline-Parallel-Serving: running IPEX-LLM serving with FastAPI on multiple Intel GPUs in a pipeline-parallel fashion
  • Lightweight-Serving: running IPEX-LLM serving with FastAPI on one Intel GPU in a lightweight way
  • LangChain: running LangChain applications on IPEX-LLM
  • PyTorch-Models: running any PyTorch model on IPEX-LLM (with a "one-line code change"); see the second sketch after this list
  • Speculative-Decoding: running any Hugging Face Transformers model with self-speculative decoding on Intel GPUs
  • ModelScope-Models: running ModelScope models with IPEX-LLM on Intel GPUs
  • Long-Context: running long-context generation with IPEX-LLM on Intel Arc™ A770 Graphics
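
As a quick illustration of the HuggingFace examples above, here is a minimal sketch (not a tested example; the model id is a placeholder) of loading a model with IPEX-LLM's low-bit AutoModel API and running it on an Intel GPU:

```python
# A minimal sketch: load a HuggingFace model with IPEX-LLM 4-bit optimization
# and run generation on an Intel GPU ('xpu' device).
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id

# load_in_4bit=True applies IPEX-LLM's INT4 optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # move the optimized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```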
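
For the PyTorch-Models flow, the "one-line code change" refers to wrapping an already-loaded PyTorch model with IPEX-LLM's optimize_model; a minimal sketch (again with a placeholder model id, assuming a working Intel GPU setup):

```python
# A minimal sketch of the "one-line code change": optimize an existing
# PyTorch model with IPEX-LLM, then use it on an Intel GPU as usual.
from transformers import AutoModelForCausalLM  # any PyTorch model can be used
from ipex_llm import optimize_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # placeholder model id
    trust_remote_code=True)
model = optimize_model(model)  # the one-line change: apply low-bit optimization
model = model.to("xpu")        # then run inference on the Intel GPU as before
```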

System Support

1. Linux:

Hardware:

  • Intel Arc™ A-Series Graphics
  • Intel Data Center GPU Flex Series
  • Intel Data Center GPU Max Series

Operating System:

  • Ubuntu 20.04 or later (Ubuntu 22.04 is preferred)

2. Windows:

Hardware:

  • Intel iGPU and dGPU

Operating System:

  • Windows 10/11, with or without WSL

Requirements

To apply Intel GPU acceleration, there are several steps for tools installation and environment preparation. See the GPU installation guide for Linux or Windows for more details.
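
After following the guide, a quick sanity check can confirm that PyTorch sees the Intel GPU. The snippet below is a sketch, assuming ipex-llm[xpu] and its bundled Intel Extension for PyTorch are installed:

```python
# A quick sanity check after installation: confirm PyTorch can see and
# compute on the Intel GPU ('xpu' device).
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers 'xpu')

print(torch.xpu.is_available())   # should print True on a working setup
x = torch.randn(2, 3).to("xpu")   # allocate a small tensor on the GPU
print((x @ x.T).cpu())            # run a tiny matmul on the device
```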