Update GGUF readme (#9611)
parent a7bc89b3a1
commit 51b668f229
5 changed files with 19 additions and 5 deletions
@@ -3,7 +3,7 @@ In this directory, you will find examples on how to load GGUF model into `bigdl-
 >Note: Only LLaMA2 family models are currently supported

 ## Requirements
-To run these examples with BigDL-LLM, we have some recommended requirements for your machine, please refer to [here](../README.md#recommended-requirements) for more information.
+To run these examples with BigDL-LLM, we have some recommended requirements for your machine, please refer to [here](../../../README.md#system-support) for more information.

 **Important: Please make sure you have installed `transformers==4.33.0` to run the example.**
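Not part of the diff: given the pinned `transformers==4.33.0` noted above and the `pip install bigdl-llm` step shown later in this commit, a minimal install for these GGUF examples would presumably be:

```bash
pip install bigdl-llm
pip install transformers==4.33.0
```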
@@ -4,7 +4,7 @@ You can use BigDL-LLM to run any Huggingface Transformer models with INT4 optimi
 ## Recommended Requirements
 To run the examples, we recommend using Intel® Xeon® processors (server), or >= 12th Gen Intel® Core™ processor (client).

-For OS, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.
+For OS, BigDL-LLM supports Ubuntu 20.04 or later (glibc>=2.17), CentOS 7 or later (glibc>=2.17), and Windows 10/11.

 ## Best Known Configuration on Linux
 For better performance, it is recommended to set environment variables on Linux with the help of BigDL-LLM:
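Not part of the diff: this hunk ends just before the command block that follows in the file; judging from the identical section added to the CPU examples README in the next hunk, the commands are presumably:

```bash
pip install bigdl-llm
source bigdl-llm-init
```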
@@ -18,6 +18,13 @@ This folder contains examples of running BigDL-LLM on Intel CPU:
 - Intel® Xeon® processors

 **Operating System**:
-- Ubuntu 20.04 or later
-- CentOS 7 or later
+- Ubuntu 20.04 or later (glibc>=2.17)
+- CentOS 7 or later (glibc>=2.17)
 - Windows 10/11, with or without WSL
+
+## Best Known Configuration on Linux
+For better performance, it is recommended to set environment variables on Linux with the help of BigDL-LLM:
+```bash
+pip install bigdl-llm
+source bigdl-llm-init
+```
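Not part of the diff: the updated OS entries introduce a glibc>=2.17 floor; a standard way to confirm the glibc version on a Linux machine is:

```bash
# prints the system glibc version, e.g. "ldd (GNU libc) 2.31"; 2.17 or newer is required
ldd --version | head -n 1
```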
@@ -3,7 +3,7 @@ In this directory, you will find examples on how to load GGUF model into `bigdl-
 >Note: Only LLaMA2 family models are currently supported

 ## Requirements
-To run these examples with BigDL-LLM, we have some recommended requirements for your machine, please refer to [here](../README.md#recommended-requirements) for more information.
+To run these examples with BigDL-LLM, we have some recommended requirements for your machine, please refer to [here](../../../README.md#system-support) for more information.

 **Important: Please make sure you have installed `transformers==4.33.0` to run the example.**
@@ -26,3 +26,10 @@ Step 1, please refer to our [driver installation](https://dgpu-docs.intel.com/dr

 Step 2, you also need to download and install [Intel® oneAPI Base Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html). OneMKL and DPC++ compiler are needed, others are optional.
 > **Note**: IPEX 2.0.110+xpu requires Intel® oneAPI Base Toolkit's version >= 2023.2.0.
+
+## Best Known Configuration on Linux
+For better performance, it is recommended to set environment variables on Linux:
+```bash
+export USE_XETLA=OFF
+export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
+```
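Not part of the diff: taken together with the oneAPI Base Toolkit requirement above, a typical pre-run shell setup might look as follows; the `setvars.sh` path assumes the default oneAPI install location:

```bash
# activate the oneAPI environment (adjust the path if oneAPI is installed elsewhere)
source /opt/intel/oneapi/setvars.sh
# recommended environment variables from this README
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```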