LLM: Modify CPU Installation Command for documentation (#11042)

* init

* refine

* refine

* refine

* refine comments
Xiangyu Tian 2024-05-17 10:14:00 +08:00 committed by GitHub
parent fff067d240
commit d963e95363
2 changed files with 46 additions and 8 deletions


@@ -4,8 +4,20 @@
Install IPEX-LLM for CPU support using pip through:
```bash
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
```eval_rst
.. tabs::
   .. tab:: Linux

      .. code-block:: bash

         pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu

   .. tab:: Windows

      .. code-block:: cmd

         pip install --pre --upgrade ipex-llm[all]
```
Please refer to [Environment Setup](#environment-setup) for more information.
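The two tabs above differ in a single flag: on Linux, `--extra-index-url https://download.pytorch.org/whl/cpu` pulls CPU-only PyTorch wheels from PyTorch's CPU index instead of the default builds. A minimal sketch of that per-OS selection (the `cpu_install_command` helper is our illustration, not part of ipex-llm):

```python
def cpu_install_command(system: str) -> str:
    """Return the pip command from the tabs above for a given OS (sketch)."""
    cmd = "pip install --pre --upgrade ipex-llm[all]"
    if system == "Linux":
        # Linux additionally points pip at the CPU-only PyTorch wheel index
        cmd += " --extra-index-url https://download.pytorch.org/whl/cpu"
    return cmd

print(cpu_install_command("Linux"))
```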
@@ -41,11 +53,26 @@ For optimal performance with LLM models using IPEX-LLM optimizations on Intel CPU
First we recommend using [Conda](https://docs.conda.io/en/latest/miniconda.html) to create a python 3.11 environment:
```eval_rst
.. tabs::
   .. tab:: Linux

      .. code-block:: bash

         conda create -n llm python=3.11
         conda activate llm
         pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
         pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu

   .. tab:: Windows

      .. code-block:: cmd

         conda create -n llm python=3.11
         conda activate llm
         pip install --pre --upgrade ipex-llm[all]
```
Then for running an LLM model with IPEX-LLM optimizations (taking `example.py` as an example):


@@ -8,14 +8,25 @@ To run these examples with IPEX-LLM, we have some recommended requirements for your machine
In the example [generate.py](./generate.py), we show a basic use case for a Baichuan model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the environment:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
pip install ipex-llm[all] # install ipex-llm with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu # install ipex-llm with 'all' option
pip install transformers_stream_generator # additional package required for Baichuan-13B-Chat to conduct generation
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers_stream_generator
```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -32,7 +43,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```
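The `generate.py` invocation above takes three flags. As a hedged illustration of how such a CLI is commonly defined with `argparse` (the defaults below are placeholders for illustration, not taken from the actual script):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the flags shown in the run command above; defaults are illustrative
    parser = argparse.ArgumentParser(
        description="Predict the next N tokens with a Baichuan model")
    parser.add_argument("--repo-id-or-model-path", type=str,
                        default="baichuan-inc/Baichuan-13B-Chat",
                        help="Hugging Face repo id or local path of the model")
    parser.add_argument("--prompt", type=str, default="What is AI?",
                        help="Prompt to feed the model")
    parser.add_argument("--n-predict", type=int, default=32,
                        help="Number of tokens to generate")
    return parser

args = build_parser().parse_args(["--prompt", "Hello", "--n-predict", "64"])
print(args.prompt, args.n_predict)
```

Note that `argparse` turns `--n-predict` into the attribute `args.n_predict`.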