Serve IPEX-LLM on Multiple Intel GPUs in a Multi-Stage Pipeline-Parallel Fashion

This example demonstrates how to run IPEX-LLM serving on multiple Intel GPUs with pipeline parallelism.

Requirements

To run this example with IPEX-LLM on Intel GPUs, there are some recommended hardware and software requirements for your machine; please refer to the IPEX-LLM GPU installation guide for more information. For this particular example, you will need at least two GPUs on your machine.

Example

1. Install

conda create -n llm python=3.11
conda activate llm
# the command below installs intel_extension_for_pytorch==2.1.10+xpu by default
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
# configure oneAPI environment variables
source /opt/intel/oneapi/setvars.sh
pip install mpi4py fastapi uvicorn
conda install -c conda-forge -y gperftools=2.10 # to enable tcmalloc

pip install transformers==4.31.0 # required for Llama-2 models
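
Before starting the server, you may want to confirm that the oneAPI runtime can see at least two Intel GPUs. A minimal check, assuming setvars.sh has already been sourced as above:

# List all SYCL devices; at least two Intel GPU entries (e.g. "level_zero:gpu")
# should appear before you move on to the multi-GPU serving step.
sycl-ls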

2. Run pipeline parallel serving on multiple GPUs

# Set MODEL_PATH in run.sh before launching
bash run.sh
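
MODEL_PATH in run.sh should point to a local model checkpoint. A hypothetical example of the line to edit (the path below is only a placeholder):

# Inside run.sh: point MODEL_PATH at a local Llama-2 checkpoint directory
# (the directory should contain config.json and the model weights).
export MODEL_PATH=/path/to/Llama-2-7b-chat-hf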

3. Sample Input and Output

We can use curl to test the serving API.

generate()

# Clear http_proxy and https_proxy so that requests to the local server are not forwarded through a proxy.
export http_proxy=
export https_proxy=

curl -X 'POST' \
  'http://127.0.0.1:8000/generate/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "prompt": "What is AI?",
  "n_predict": 32
}'
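
If you want the JSON response pretty-printed, the same request can be piped through Python's built-in JSON tool (assuming python3 is on your PATH; the exact response fields depend on the server implementation in pipeline_serving.py):

# Same request as above, with the response pretty-printed.
curl -s -X POST 'http://127.0.0.1:8000/generate/' \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "What is AI?", "n_predict": 32}' | python3 -m json.tool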

4. Benchmark with wrk

We use wrk to test end-to-end throughput; see the wrk project (https://github.com/wg/wrk) for details.

You can install it with:

sudo apt install wrk

Please change the test URL below to match your server address and port.

# Set -t (threads) and -c (connections) to the desired concurrency level to measure full throughput.
wrk -t1 -c1 -d5m -s ./wrk_script_1024.lua http://127.0.0.1:8000/generate/ --timeout 1m
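
To find the concurrency level that saturates the pipeline, you can sweep the thread/connection counts; a simple sketch (the numbers are illustrative, adjust them to your hardware):

# Illustrative sweep over concurrency levels; each run lasts one minute.
for c in 1 2 4 8 16; do
  echo "=== ${c} connections ==="
  wrk -t"${c}" -c"${c}" -d1m -s ./wrk_script_1024.lua http://127.0.0.1:8000/generate/ --timeout 1m
done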