diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
index d32f634d..5f160576 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
@@ -3,7 +3,7 @@ In this directory, you will find examples on how to load GGUF model into `bigdl-
 >Note: Only LLaMA2 family models are currently supported
 
 ## Requirements
-To run these examples with BigDL-LLM, we have some recommended requirements for your machine, please refer to [here](../README.md#recommended-requirements) for more information.
+To run these examples with BigDL-LLM, we have some recommended requirements for your machine, please refer to [here](../../../README.md#system-support) for more information.
 
 **Important: Please make sure you have installed `transformers==4.33.0` to run the example.**
 
diff --git a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/README.md b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/README.md
index 8bb8d592..4509079c 100644
--- a/python/llm/example/CPU/HF-Transformers-AutoModels/Model/README.md
+++ b/python/llm/example/CPU/HF-Transformers-AutoModels/Model/README.md
@@ -4,7 +4,7 @@ You can use BigDL-LLM to run any Huggingface Transformer models with INT4 optimi
 
 ## Recommended Requirements
 To run the examples, we recommend using Intel® Xeon® processors (server), or >= 12th Gen Intel® Core™ processor (client).
-For OS, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.
+For OS, BigDL-LLM supports Ubuntu 20.04 or later (glibc>=2.17), CentOS 7 or later (glibc>=2.17), and Windows 10/11.
 
 ## Best Known Configuration on Linux
 For better performance, it is recommended to set environment variables on Linux with the help of BigDL-LLM:
diff --git a/python/llm/example/CPU/README.md b/python/llm/example/CPU/README.md
index 2a72e7db..a1ce8091 100644
--- a/python/llm/example/CPU/README.md
+++ b/python/llm/example/CPU/README.md
@@ -18,6 +18,13 @@ This folder contains examples of running BigDL-LLM on Intel CPU:
 - Intel® Xeon® processors
 
 **Operating System**:
-- Ubuntu 20.04 or later
-- CentOS 7 or later
+- Ubuntu 20.04 or later (glibc>=2.17)
+- CentOS 7 or later (glibc>=2.17)
 - Windows 10/11, with or without WSL
+
+## Best Known Configuration on Linux
+For better performance, it is recommended to set environment variables on Linux with the help of BigDL-LLM:
+```bash
+pip install bigdl-llm
+source bigdl-llm-init
+```
diff --git a/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md b/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
index 9fe7b160..8d1d2782 100644
--- a/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
+++ b/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations/GGUF/README.md
@@ -3,7 +3,7 @@ In this directory, you will find examples on how to load GGUF model into `bigdl-
 >Note: Only LLaMA2 family models are currently supported
 
 ## Requirements
-To run these examples with BigDL-LLM, we have some recommended requirements for your machine, please refer to [here](../README.md#recommended-requirements) for more information.
+To run these examples with BigDL-LLM, we have some recommended requirements for your machine, please refer to [here](../../../README.md#system-support) for more information.
 
 **Important: Please make sure you have installed `transformers==4.33.0` to run the example.**
 
diff --git a/python/llm/example/GPU/README.md b/python/llm/example/GPU/README.md
index e0b1d6f8..2a5d0dbc 100644
--- a/python/llm/example/GPU/README.md
+++ b/python/llm/example/GPU/README.md
@@ -26,3 +26,10 @@ Step 1, please refer to our [driver installation](https://dgpu-docs.intel.com/dr
 
 Step 2, you also need to download and install [Intel® oneAPI Base Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html). OneMKL and DPC++ compiler are needed, others are optional.
 > **Note**: IPEX 2.0.110+xpu requires Intel® oneAPI Base Toolkit's version >= 2023.2.0.
+
+## Best Known Configuration on Linux
+For better performance, it is recommended to set environment variables on Linux:
+```bash
+export USE_XETLA=OFF
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+```
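As a usage note on the CPU "Best Known Configuration" added above: a minimal sketch of how the two commands from the patch might be combined with running one of the CPU examples. The script name `generate.py` and its flag are illustrative assumptions, not part of the patch:

```bash
# Minimal sketch (assumptions noted): install BigDL-LLM, load its tuned
# environment variables, then run an example on CPU.
pip install bigdl-llm

# `bigdl-llm-init` is the helper shown in the patched README; sourcing it
# exports performance-related environment variables into the current shell.
source bigdl-llm-init

# Hypothetical example invocation; the script name and flag are illustrative.
python ./generate.py --repo-id-or-model-path meta-llama/Llama-2-7b-chat-hf
```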
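Similarly for the GPU "Best Known Configuration": a sketch of a shell session that sources oneAPI (as required by Step 2 above) and sets the two variables before launching an example. The oneAPI install path and the example invocation are assumptions:

```bash
# Sketch only: configure the oneAPI environment (default install path shown;
# adjust if the Base Toolkit lives elsewhere).
source /opt/intel/oneapi/setvars.sh

# The two variables recommended in the patched GPU README.
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1

# Hypothetical example invocation; script name and flag are illustrative.
python ./generate.py --repo-id-or-model-path meta-llama/Llama-2-7b-chat-hf
```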