
# IPEX-LLM Examples on Intel GPU
This folder contains examples of running IPEX-LLM on Intel GPU:
- [Applications](Applications): running LLM applications (such as AutoGen) on IPEX-LLM
- [HuggingFace](HuggingFace): running ***HuggingFace*** models on IPEX-LLM (using the standard AutoModel APIs), including language models and multimodal models (see the minimal sketch after this list)
- [LLM-Finetuning](LLM-Finetuning): running ***finetuning*** (such as LoRA, QLoRA, QA-LoRA, etc.) using IPEX-LLM on Intel GPUs
- [vLLM-Serving](vLLM-Serving): running ***vLLM*** serving framework on Intel GPUs (with IPEX-LLM low-bit optimized models)
- [Deepspeed-AutoTP](Deepspeed-AutoTP): running distributed inference using ***DeepSpeed AutoTP*** (with IPEX-LLM low-bit optimized models) on Intel GPUs
- [Deepspeed-AutoTP-FastAPI](Deepspeed-AutoTP-FastAPI): running distributed inference using ***DeepSpeed AutoTP*** and serving with ***FastAPI*** (with IPEX-LLM low-bit optimized models) on Intel GPUs
- [Pipeline-Parallel-Inference](Pipeline-Parallel-Inference): running IPEX-LLM optimized low-bit model vertically partitioned on multiple Intel GPUs
- [Pipeline-Parallel-Serving](Pipeline-Parallel-Serving): running IPEX-LLM serving with ***FastAPI*** on multiple Intel GPUs in pipeline parallel fashion
- [Lightweight-Serving](Lightweight-Serving): running IPEX-LLM serving with ***FastAPI*** on one Intel GPU in a lightweight way
- [LangChain](LangChain): running ***LangChain*** applications on IPEX-LLM
- [PyTorch-Models](PyTorch-Models): running any PyTorch model on IPEX-LLM (with "one-line code change")
- [Speculative-Decoding](Speculative-Decoding): running any ***Hugging Face Transformers*** model with ***self-speculative decoding*** on Intel GPUs
- [ModelScope-Models](ModelScope-Models): running ***ModelScope*** models with IPEX-LLM on Intel GPUs
- [Long-Context](Long-Context): running **long-context** generation with IPEX-LLM on Intel Arc™ A770 Graphics.
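
As a quick orientation, the snippet below is a minimal sketch of the pattern most of the HuggingFace examples follow: load a model through IPEX-LLM's `AutoModelForCausalLM` with low-bit (INT4) optimization and run generation on an Intel GPU (`xpu` device). The model id here is only illustrative; see the folder-specific READMEs for tested models and complete, runnable scripts.

```python
import torch
from transformers import AutoTokenizer
# IPEX-LLM drop-in replacement for the standard Hugging Face AutoModel class
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # illustrative model id

# load_in_4bit=True applies IPEX-LLM's INT4 optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.half().to("xpu")  # move the optimized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids.to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For arbitrary PyTorch models (the [PyTorch-Models](PyTorch-Models) folder), the analogous "one-line code change" is wrapping the loaded model with `ipex_llm.optimize_model(model)` before moving it to `xpu`.
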
## System Support

### 1. Linux

**Hardware**:
- Intel Arc™ A-Series Graphics
- Intel Data Center GPU Flex Series
- Intel Data Center GPU Max Series

**Operating System**:
- Ubuntu 20.04 or later (Ubuntu 22.04 is preferred)

### 2. Windows

**Hardware**:
- Intel iGPU and dGPU

**Operating System**:
- Windows 10/11, with or without WSL

## Requirements
To apply Intel GPU acceleration, there are several steps for tools installation and environment preparation. See the [GPU installation guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html) for more details; a sketch of a typical Linux setup follows.
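
The commands below show one possible Linux setup. The exact package extras, wheel index, and oneAPI steps depend on your hardware and the ipex-llm release, so treat this as a sketch and follow the linked guide for the authoritative instructions.

```bash
# Create and activate a clean Python environment (conda shown as an example)
conda create -n llm python=3.11
conda activate llm

# Install IPEX-LLM with Intel GPU (XPU) support; the extra-index-url may
# differ depending on your region and the current release
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

# Configure the oneAPI runtime environment before running the examples
source /opt/intel/oneapi/setvars.sh
```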