
LangChain Examples

The examples in this folder show how to use LangChain with ipex-llm on Intel GPU.

1. Install ipex-llm

Follow the instructions in the GPU Install Guide to install ipex-llm.
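The exact command depends on your OS and driver setup, so follow the guide for your platform; as a rough sketch, a typical Linux install looks like the following (the index URL varies by region):

pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/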

2. Install Required Dependencies for LangChain Examples

pip install langchain==0.0.184
pip install -U chromadb==0.3.25
pip install -U pandas==2.0.3

3. Configure oneAPI Environment Variables for Linux

Note

Skip this step if you are running on Windows.

This step is required on Linux only when oneAPI was installed via APT or an offline installer; skip it if oneAPI was installed via pip.

source /opt/intel/oneapi/setvars.sh

4. Runtime Configurations

For optimal performance, it is recommended to set several environment variables. Choose the settings below that match your device.

4.1 Configurations for Linux

For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series:

export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1

For Intel Data Center GPU Max Series:

export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export SYCL_CACHE_PERSISTENT=1
export ENABLE_SDP_FUSION=1

Note: libtcmalloc.so can be installed with conda install -c conda-forge -y gperftools=2.10.

For Intel iGPU:

export SYCL_CACHE_PERSISTENT=1
export BIGDL_LLM_XMX_DISABLED=1

4.2 Configurations for Windows

For Intel iGPU:

set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1

For Intel Arc™ A-Series Graphics:

set SYCL_CACHE_PERSISTENT=1

Note

The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series, or Pro A60 GPU, it may take several minutes to compile.

5. Run the examples

5.1. Streaming Chat

python chat.py -m MODEL_PATH -q QUESTION

arguments info:

  • -m MODEL_PATH: required, path to the model.
  • -q QUESTION: question to ask. Default is "What is AI?".
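
For example, using a local Llama-2 chat model (the path below is illustrative):

python chat.py -m ./models/Llama-2-7b-chat-hf -q "What is AI?"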

5.2. RAG (Retrieval-Augmented Generation)

python rag.py -m MODEL_PATH [-q QUESTION] [-i INPUT_PATH]

arguments info:

  • -m MODEL_PATH: required, path to the model.
  • -q QUESTION: question to ask. Default is "What is IPEX?".
  • -i INPUT_PATH: path to the input document.
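
For example, to answer a question against a local text file (both paths below are illustrative):

python rag.py -m ./models/Llama-2-7b-chat-hf -q "What is IPEX?" -i ./ipex_overview.txt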

5.3. Low Bit

The low-bit example (low_bit.py) showcases how to use LangChain with a low-bit optimized model. It calls save_low_bit to save the weights of the low-bit model into the target folder.

Note: save_low_bit only saves the weights of the model. You can either copy the tokenizer files into the target folder or specify tokenizer_id during initialization.
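
For example, on Linux you might copy the tokenizer files alongside the saved weights (a rough sketch; the exact file names depend on your model):

# copy tokenizer files from the original model folder into the target folder
cp MODEL_PATH/tokenizer* TARGET_PATH/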

python low_bit.py -m MODEL_PATH -t TARGET_PATH [-q QUESTION]

arguments info:

  • -m MODEL_PATH: required, path to the model.
  • -t TARGET_PATH: required, path to save the low-bit model.
  • -q QUESTION: question to ask.
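
For example, to save a low-bit copy of a local Llama-2 model and ask it a question (paths below are illustrative):

python low_bit.py -m ./models/Llama-2-7b-chat-hf -t ./models/llama-2-7b-low-bit -q "What is AI?"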