LLM: Modify CPU Installation Command for most examples (#11049)

* init

* refine

* refine

* refine

* modify hf-agent example

* modify all CPU model example

* remove readthedoc modify

* replace powershell with cmd

* fix repo

* fix repo

* update

* remove comment on windows code block

* update

* update

* update

* update

---------

Co-authored-by: xiangyuT <xiangyu.tian@intel.com>
ZehuaCao 2024-05-17 15:52:20 +08:00 committed by GitHub
parent f1156e6b20
commit 56cb992497
109 changed files with 1621 additions and 225 deletions

View file

@@ -110,7 +110,7 @@ See the demo of running [*Text-Generation-WebUI*](https://ipex-llm.readthedocs.i
- LLM finetuning on Intel [GPU](python/llm/example/GPU/LLM-Finetuning), including [LoRA](python/llm/example/GPU/LLM-Finetuning/LoRA), [QLoRA](python/llm/example/GPU/LLM-Finetuning/QLoRA), [DPO](python/llm/example/GPU/LLM-Finetuning/DPO), [QA-LoRA](python/llm/example/GPU/LLM-Finetuning/QA-LoRA) and [ReLoRA](python/llm/example/GPU/LLM-Finetuning/ReLora)
- QLoRA finetuning on Intel [CPU](python/llm/example/CPU/QLoRA-FineTuning)
- Integration with community libraries
-- [HuggingFace tansformers](python/llm/example/GPU/HF-Transformers-AutoModels)
+- [HuggingFace transformers](python/llm/example/GPU/HF-Transformers-AutoModels)
- [Standard PyTorch model](python/llm/example/GPU/PyTorch-Models)
- [DeepSpeed-AutoTP](python/llm/example/GPU/Deepspeed-AutoTP)
- [HuggingFace PEFT](python/llm/example/GPU/LLM-Finetuning/HF-PEFT)

View file

@@ -97,4 +97,4 @@ Then for running a LLM model with IPEX-LLM optimizations (taking an `example.py`
# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python example.py
-```
+```
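The `OMP_NUM_THREADS` value and the `numactl` core range above should match the machine's topology. As a minimal sketch (standard `lscpu` fields, nothing ipex-llm-specific assumed), the socket and core layout can be read with:

```bash
# show physical cores per socket, socket count, and NUMA layout
lscpu | grep -E 'Core\(s\) per socket|Socket\(s\)|NUMA node'
```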

View file

@@ -162,7 +162,7 @@ Code Examples
* Integration with community libraries
-* `HuggingFace tansformers <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels>`_
+* `HuggingFace transformers <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels>`_
* `Standard PyTorch model <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/PyTorch-Models>`_
* `DeepSpeed-AutoTP <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/Deepspeed-AutoTP>`_
* `HuggingFace PEFT <https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/LLM-Finetuning/HF-PEFT>`_

View file

@@ -9,14 +9,26 @@ To run this example with IPEX-LLM, we have some recommended requirements for you
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+# install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install pillow # additional package required for opening images
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install pillow
+```
### 2. Run
```
python ./run_agent.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --image-path IMAGE_PATH
@@ -32,7 +44,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./run_agent.py --image-path IMAGE_PATH
```
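The `--extra-index-url https://download.pytorch.org/whl/cpu` flag added throughout this commit points pip at PyTorch's CPU-only wheel index. As a quick sanity check (assuming nothing beyond a standard PyTorch install), the CPU build can be confirmed after installation, since CPU wheels typically carry a `+cpu` version suffix:

```bash
# prints e.g. "2.1.2+cpu" for a CPU-only wheel
python -c "import torch; print(torch.__version__)"
```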

View file

@@ -9,10 +9,20 @@ model = AutoModelForCausalLM.from_pretrained(model_name_or_path, load_in_4bit=Tr
## Prepare Environment
We suggest using conda to manage environment:
+On Linux
```bash
conda create -n llm python=3.11
conda activate llm
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+```
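For reference, the hunk header above shows the loading API these CPU examples build on. A minimal end-to-end sketch (the model path and prompt are placeholders; some models additionally need `trust_remote_code=True`):

```python
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id or local path

# load_in_4bit=True applies the INT4 optimization shown in the hunk header above
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```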

View file

@@ -20,4 +20,4 @@ pip install deepspeed==0.11.1
# 4. exclude intel deepspeed extension, which is only for XPU
pip uninstall intel-extension-for-deepspeed
# 5. install ipex-llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu

View file

@@ -33,16 +33,31 @@ In the example [generate.py](./generate.py), we show a basic use case for a AWQ
We suggest using conda to manage environment:
+On Linux
```bash
conda create -n llm python=3.11
conda activate llm
pip install autoawq==0.1.8 --no-deps
-pip install --pre --upgrade ipex-llm[all] # install ipex-llm with 'all' option
+# install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.35.0
pip install accelerate==0.25.0
pip install einops
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install autoawq==0.1.8 --no-deps
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.35.0
+pip install accelerate==0.25.0
+pip install einops
+```
**Note: For Mixtral model, please use transformers 4.36.0:**
```bash
pip install transformers==4.36.0
@@ -68,7 +83,7 @@ Arguments info:
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -24,19 +24,32 @@ In the example [generate.py](./generate.py), we show a basic use case to load a
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.36.0 # upgrade transformers
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.36.0
+```
### 2. Run
After setting up the Python environment, you could run the example by following steps.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --model <path_to_gguf_model> --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -8,16 +8,31 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Llama2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux
```bash
conda create -n llm python=3.11
conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+# install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.34.0
BUILD_CUDA_EXT=0 pip install git+https://github.com/PanQiWei/AutoGPTQ.git@1de9ab6
pip install optimum==0.14.0
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.34.0
+set BUILD_CUDA_EXT=0
+pip install git+https://github.com/PanQiWei/AutoGPTQ.git@1de9ab6
+pip install optimum==0.14.0
+```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -34,7 +49,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -9,6 +9,6 @@ For OS, IPEX-LLM supports Ubuntu 20.04 or later (glibc>=2.17), CentOS 7 or later
## Best Known Configuration on Linux
For better performance, it is recommended to set environment variables on Linux with the help of IPEX-LLM:
```bash
-pip install ipex-llm
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
source ipex-llm-init
```
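`ipex-llm-init` is sourced rather than executed because it exports environment variables into the current shell (OpenMP thread settings and, depending on the install, an alternative memory allocator). A sketch for inspecting what it set, assuming the usual OMP/allocator variable names:

```bash
source ipex-llm-init
# list the exported tuning variables (exact names vary with the script version)
env | grep -iE 'omp|kmp|malloc|ld_preload'
```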

View file

@@ -15,11 +15,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Aqui
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -31,7 +44,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'AI是什么'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -15,11 +15,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Aqui
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -31,7 +44,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'AI是什么'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -9,12 +9,14 @@ In the example [generate.py](./generate.py), we show a basic use case for a Baic
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu # install ipex-llm with 'all' option
+# install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers_stream_generator # additional package required for Baichuan-13B-Chat to conduct generation
```

View file

@@ -8,14 +8,28 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Baichuan model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+# install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers_stream_generator # additional package required for Baichuan-13B-Chat to conduct generation
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers_stream_generator
+```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -32,7 +46,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -8,11 +8,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a BlueLM model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -31,7 +44,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -15,14 +15,28 @@ In the example [generate.py](./generate.py), we show a basic use case for a Chat
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install "transformers<4.34.1" # chatglm cannot work with transformers 4.34.1+
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install "transformers<4.34.1"
+```
### 2. Run
After setting up the Python environment, you could run the example by following steps.
@@ -32,7 +46,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'AI是什么'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -9,11 +9,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -32,7 +45,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```
@@ -79,11 +92,24 @@ Inference time: xxxx s
In the example [streamchat.py](./streamchat.py), we show a basic use case for a ChatGLM2 model to stream chat, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -108,7 +134,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
$env:PYTHONUNBUFFERED=1 # ensure stdout and stderr streams are sent straight to terminal without being first buffered
python ./streamchat.py
```
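Note that `$env:PYTHONUNBUFFERED=1` in the block above is PowerShell syntax; in a plain `cmd` session, which the updated fence advertises, the equivalent would be a sketch like:

```cmd
REM cmd equivalent of the PowerShell-style assignment above
set PYTHONUNBUFFERED=1
python ./streamchat.py
```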

View file

@@ -9,11 +9,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM3 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -32,7 +45,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```
@@ -80,11 +93,24 @@ AI stands for Artificial Intelligence. It refers to the development of computer
In the example [streamchat.py](./streamchat.py), we show a basic use case for a ChatGLM3 model to stream chat, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -109,7 +135,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
$env:PYTHONUNBUFFERED=1 # ensure stdout and stderr streams are sent straight to terminal without being first buffered
python ./streamchat.py
```

View file

@@ -10,17 +10,31 @@ In the example [generate.py](./generate.py), we show a basic use case for a Code
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# install ipex-llm with 'all' option
-pip install ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# According to CodeGemma's requirement, please make sure you are using a stable version of Transformers, 4.38.1 or newer.
pip install transformers==4.38.1
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.38.1
+```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -37,7 +51,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -8,14 +8,28 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a CodeLlama model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.34.1 # CodeLlamaTokenizer is supported in higher version of transformers
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.34.1
+```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -32,7 +46,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -15,11 +15,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Code
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -31,7 +44,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'def print_hello_world():'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -8,12 +8,26 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a cohere model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install ipex-llm with 'all' option
-pip install tansformers==4.40.0
+# install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
+pip install transformers==4.40.0
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.40.0
+```
### 2. Run
@@ -32,7 +46,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -8,14 +8,28 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a DeciLM-7B model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.35.2 # required by DeciLM-7B
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.35.2
+```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -32,7 +46,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -15,14 +15,28 @@ In the example [generate.py](./generate.py), we show a basic use case for a Deep
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # additional package required for DeepSeek-MoE to conduct generation
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install einops
+```
### 2. Run
After setting up the Python environment, you could run the example by following steps.
@@ -35,7 +49,7 @@ You need to disable flash attention to run this model. To do this, simply replac
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -8,11 +8,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Deepseek model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -31,7 +44,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -11,14 +11,28 @@ In the example [recognize.py](./recognize.py), we show a basic use case for a Di
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install datasets soundfile librosa # required by audio processing
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install datasets soundfile librosa
+```
### 2. Run
```
python ./recognize.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --repo-id-or-data-path REPO_ID_OR_DATA_PATH --language LANGUAGE --chunk-length CHUNK_LENGTH --batch-size BATCH_SIZE
@@ -38,7 +52,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./recognize.py
```

View file

@@ -8,11 +8,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Dolly v1 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -31,7 +44,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -8,13 +8,25 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Dolly v2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -31,7 +43,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -9,14 +9,28 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Falcon model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # additional package required for falcon-7b-instruct and falcon-40b-instruct to conduct generation
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install einops
+```
### 2. (Optional) Download Model and Replace File
If you select the Falcon models ([tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) or [tiiuae/falcon-40b-instruct](https://huggingface.co/tiiuae/falcon-40b-instruct)), please note that their code (`modelling_RW.py`) does not support KV cache at the moment. To address this issue, we have provided two updated files ([falcon-7b-instruct/modelling_RW.py](./falcon-7b-instruct/modelling_RW.py) and [falcon-40b-instruct/modelling_RW.py](./falcon-40b-instruct/modelling_RW.py)), which can be used to achieve the best performance using IPEX-LLM INT4 optimizations with KV cache support.
Since transformers 4.36, only models using the transformers library's own modeling code are supported, because the remote code has diverged from the transformers model code; make sure to set `trust_remote_code=False`.
@@ -66,7 +80,7 @@ Arguments info:
#### 3.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -11,11 +11,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Flan
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -27,7 +40,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'Translate to German: My name is Arthur'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -10,15 +10,29 @@ In the example [generate.py](./generate.py), we show a basic use case for an Fuy
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.35 pillow # additional package required for Fuyu to conduct generation
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.35 pillow
+```
### 2. Run
After setting up the Python environment, you could run the example by following steps.
@@ -28,7 +42,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --image-path demo.jpg
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -11,16 +11,31 @@ In the example [generate.py](./generate.py), we show a basic use case for a Gemm
We suggest using conda to manage the Python environment:
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# According to Gemma's requirement, please make sure you are using a stable version of Transformers, 4.38.1 or newer.
pip install transformers==4.38.1
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.38.1
+```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -37,7 +52,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -10,16 +10,32 @@ In the example [chat.py](./chat.py), we show a basic use case for an InternLM_XC
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install accelerate timm==0.4.12 sentencepiece==0.1.99 gradio==3.44.4 markdown2==2.4.10 xlsxwriter==3.1.2 einops # additional package required for InternLM_XComposer to conduct generation
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install accelerate timm==0.4.12 sentencepiece==0.1.99 gradio==3.44.4 markdown2==2.4.10 xlsxwriter==3.1.2 einops
+```
### 2. Download Model and Replace File
If you select the InternLM_XComposer model ([internlm/internlm-xcomposer-vl-7b](https://huggingface.co/internlm/internlm-xcomposer-vl-7b)), please note that its code (`modeling_InternLM_XComposer.py`) does not support inference on CPU. To address this issue, we have provided the updated file ([internlm-xcomposer-vl-7b/modeling_InternLM_XComposer.py](./internlm-xcomposer-vl-7b/modeling_InternLM_XComposer.py)), which can be used to conduct inference on CPU.
@@ -49,7 +65,7 @@ After setting up the Python environment, you could run the example by following
#### 3.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./chat.py --image-path demo.jpg
```
More information about arguments can be found in [Arguments Info](#33-arguments-info) section. The expected output can be found in [Sample Output](#34-sample-output) section.

View file

@@ -9,11 +9,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a InternLM model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -32,7 +45,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -9,11 +9,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a InternLM2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -32,7 +45,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -8,13 +8,25 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Llama2 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -31,7 +43,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -8,13 +8,27 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Llama3 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install ipex-llm with 'all' option
+# install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# transformers>=4.33.0 is required for Llama3 with IPEX-LLM optimizations
pip install transformers==4.37.0
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.37.0
+```
@@ -34,7 +48,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -12,16 +12,31 @@ In the example [generate.py](./generate.py), we show a basic use case for a Mist
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# Refer to https://huggingface.co/mistralai/Mistral-7B-v0.1#troubleshooting, please make sure you are using a stable version of Transformers, 4.34.0 or newer.
pip install transformers==4.34.0
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.34.0
+```
### 2. Run
After setting up the Python environment, you could run the example by following steps.
@@ -31,7 +46,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -12,18 +12,30 @@ In the example [generate.py](./generate.py), we show a basic use case for a Mixt
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-# below command will install PyTorch CPU as default
-pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cpu
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# Please make sure you are using a stable version of Transformers, 4.36.0 or newer.
pip install transformers==4.36.0
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.36.0
+```
### 2. Run
```bash

View file

@@ -9,11 +9,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a MOSS model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install "transformers<4.34"
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install "transformers<4.34"
+```
@@ -33,7 +46,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -8,14 +8,27 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for an MPT model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # additional package required for mpt-7b-chat and mpt-30b-chat to conduct generation
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install einops
+```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -32,7 +45,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -15,14 +15,28 @@ In the example [generate.py](./generate.py), we show a basic use case for a phi-
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # additional package required for phi-1_5 to conduct generation
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install einops
+```
### 2. Run
After setting up the Python environment, you could run the example by following steps.
@@ -32,7 +46,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -15,13 +15,26 @@ In the example [generate.py](./generate.py), we show a basic use case for a phi-
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # additional package required for phi-2 to conduct generation
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install einops
+```
### 2. Run
After setting up the Python environment, you could run the example by following steps.
@@ -32,7 +45,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -15,11 +15,25 @@ In the example [generate.py](./generate.py), we show a basic use case for a phi-
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.37.0
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install transformers==4.37.0
+```
@@ -33,7 +47,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -15,14 +15,28 @@ In the example [generate.py](./generate.py), we show a basic use case for a phix
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # additional package required for phi to conduct generation
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install einops
+```
### 2. Run
After setting up the Python environment, you could run the example by following steps.
@@ -32,7 +46,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.

View file

@@ -9,11 +9,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Phoenix model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
-pip install ipex-llm[all] # install ipex-llm with 'all' option
+pip install --pre --upgrade ipex-llm[all]
+```
### 2. Run
@@ -32,7 +45,7 @@ Arguments info:
#### 2.1 Client
On client Windows machine, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./generate.py
```

View file

@@ -10,22 +10,38 @@ In the example [chat.py](./chat.py), we show a basic use case for a Qwen-VL mode
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
+On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
-pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
+# install the latest ipex-llm nightly build with 'all' option
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib # additional package required for Qwen-VL-Chat to conduct generation
```
+On Windows:
+```cmd
+conda create -n llm python=3.11
+conda activate llm
+pip install --pre --upgrade ipex-llm[all]
+pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib
+```
### 2. Run
After setting up the Python environment, you could run the example by following steps.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
-```powershell
+```cmd
python ./chat.py
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -14,14 +14,27 @@ In the example [generate.py](./generate.py), we show a basic use case for a Qwen
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
pip install ipex-llm[all] # install ipex-llm with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install tiktoken einops transformers_stream_generator # additional package required for Qwen-7B-Chat to conduct generation
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install tiktoken einops transformers_stream_generator
```
### 2. Run
The minimum Qwen model version currently supported by IPEX-LLM is the November 30, 2023 release.
@@ -44,7 +57,7 @@ Arguments info:
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```
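Besides the plain `generate()` flow, Qwen-7B-Chat ships a `chat()` helper in its remote code that manages multi-turn history. A rough sketch with an IPEX-LLM 4-bit load (the repo id is a placeholder; the bundled generate.py may build prompts differently):

```python
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "Qwen/Qwen-7B-Chat"  # placeholder
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# chat() is provided by Qwen's remote code and tracks conversation history
response, history = model.chat(tokenizer, "What is AI?", history=None)
print(response)
```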


@@ -9,11 +9,15 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Qwen model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install ipex-llm with 'all' option
# install ipex-llm with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.37.0 # install the transformers which support Qwen2
# only for Qwen1.5-MoE-A2.7B
@@ -21,6 +25,20 @@ pip install transformers==4.40.0
pip install trl==0.8.1
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.37.0
REM For Qwen1.5-MoE-A2.7B
pip install transformers==4.40.0
pip install trl==0.8.1
```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -37,7 +55,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -9,11 +9,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a RedPajama model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install ipex-llm[all] # install ipex-llm with 'all' option
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -32,7 +45,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -10,11 +10,25 @@ In the example [generate.py](./generate.py), we show a basic use case for an Rep
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install "transformers<4.35"
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install "transformers<4.35"
```
@@ -23,7 +37,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'def print_hello_world():'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -8,13 +8,25 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Skywork model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -31,7 +43,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -8,14 +8,28 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a SOLAR model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.35.2 # required by SOLAR
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.35.2
```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -32,7 +46,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -10,16 +10,31 @@ In the example [generate.py](./generate.py), we show a basic use case for a Stab
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# Refer to https://huggingface.co/stabilityai/stablelm-zephyr-3b/blob/8b471c751c0e78cb46cf9f47738dd0eb45392071/config.json#L21, please make sure you are using a stable version of Transformers, 4.38.0 or newer.
pip install transformers==4.38.0
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.38.0
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
@@ -29,7 +44,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -8,11 +8,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for an StarCoder model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install ipex-llm[all] # install ipex-llm with 'all' option
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -31,7 +44,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -8,11 +8,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a Vicuna model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install ipex-llm[all] # install ipex-llm with 'all' option
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -31,7 +44,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -9,14 +9,28 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [recognize.py](./recognize.py), we show a basic use case for a Whisper model to conduct transcription using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
pip install ipex-llm[all] # install ipex-llm with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install datasets soundfile librosa # required by audio processing
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install datasets soundfile librosa
```
### 2. Run
```
python ./recognize.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --repo-id-or-data-path REPO_ID_OR_DATA_PATH --language LANGUAGE
@@ -34,7 +48,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./recognize.py
```
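To call the model directly instead of through recognize.py, a minimal transcription sketch looks like the following (assuming a local 16 kHz `audio.wav`; `AutoModelForSpeechSeq2Seq` is IPEX-LLM's low-bit counterpart of the HuggingFace class):

```python
import librosa
from transformers import WhisperProcessor
from ipex_llm.transformers import AutoModelForSpeechSeq2Seq

model_path = "openai/whisper-tiny"  # placeholder
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_path, load_in_4bit=True)
processor = WhisperProcessor.from_pretrained(model_path)

speech, sr = librosa.load("audio.wav", sr=16000)  # Whisper expects 16 kHz input
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
forced_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
predicted_ids = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```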
@@ -65,11 +79,25 @@ Inference time: xxxx s
In the example [long-segment-recognize.py](./long-segment-recognize.py), we show a basic use case for a Whisper model to conduct transcription using `pipeline()` API for long audio input, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
pip install ipex-llm[all] # install ipex-llm with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install datasets soundfile librosa # required by audio processing
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install datasets soundfile librosa # required by audio processing
```
@@ -92,7 +120,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
# Long Segment Recognize
python ./long-segment-recognize.py --audio-file /PATH/TO/AUDIO_FILE
```
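The long-segment variant relies on the fact that HuggingFace's `pipeline()` can chunk audio longer than Whisper's 30-second window and stitch the pieces back together. A sketch of the idea (model id and file name are placeholders):

```python
from transformers import WhisperProcessor, pipeline
from ipex_llm.transformers import AutoModelForSpeechSeq2Seq

model_path = "openai/whisper-medium"  # placeholder
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_path, load_in_4bit=True)
processor = WhisperProcessor.from_pretrained(model_path)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    chunk_length_s=30,  # split long audio into 30 s windows
)
print(pipe("long_audio.wav")["text"])
```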


@@ -8,11 +8,24 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a WizardCoder-Python model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -31,7 +44,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -10,14 +10,28 @@ In the example [generate.py](./generate.py), we show a basic use case for an Yi
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # additional package required for Yi-6B to conduct generation
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install einops
```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -34,7 +48,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -12,15 +12,30 @@ In the example [generate.py](./generate.py), we show a basic use case for an Yua
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # additional package required for Yuan2 to conduct generation
pip install pandas # additional package required for Yuan2 to conduct generation
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install einops
pip install pandas
```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -37,7 +52,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -15,14 +15,28 @@ In the example [generate.py](./generate.py), we show a basic use case for a Ziya
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # additional package required for Ziya to conduct generation
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install einops
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
@@ -32,7 +46,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'def quick_sort(arr):\n'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -8,7 +8,7 @@ We suggest using conda to manage environment:
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
## Run Example


@@ -8,7 +8,7 @@ We suggest using conda to manage environment:
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
## Run Example


@@ -9,15 +9,29 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a ChatGLM3 model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install ipex-llm with 'all' option
# install ipex-llm with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# Refer to https://github.com/modelscope/modelscope/issues/765, please make sure you are using 1.11.0 version
pip install modelscope==1.11.0
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install modelscope==1.11.0
```
### 2. Run
```
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
@@ -34,7 +48,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```
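The only real difference from the HuggingFace examples is where the weights come from: ModelScope downloads them to a local folder, and IPEX-LLM then loads that folder as usual. A minimal sketch (using ChatGLM3's public ModelScope id; error handling omitted):

```python
from modelscope import snapshot_download
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModel  # ChatGLM3 loads through AutoModel

model_dir = snapshot_download("ZhipuAI/chatglm3-6b")  # fetch weights from ModelScope
model = AutoModel.from_pretrained(model_dir, load_in_4bit=True, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
```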


@@ -6,7 +6,20 @@ In this example, we show a pipeline to convert a large language model to IPEX-LL
## Prepare Environment
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
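Conceptually the pipeline is a one-time conversion followed by cheap reloads: quantize once with `load_in_4bit=True`, persist the low-bit weights, then load them back later without re-reading the original checkpoint. A minimal sketch (paths are placeholders):

```python
from ipex_llm.transformers import AutoModelForCausalLM

# one-time conversion: load the original checkpoint and quantize to INT4
model = AutoModelForCausalLM.from_pretrained("path/to/model", load_in_4bit=True, trust_remote_code=True)
model.save_low_bit("path/to/low-bit-model")  # persist the converted weights

# later runs: reload the converted weights directly (faster, lower peak memory)
model = AutoModelForCausalLM.load_low_bit("path/to/low-bit-model", trust_remote_code=True)
```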


@@ -10,11 +10,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Aqui
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -22,7 +35,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,11 +10,26 @@ In the example [synthesize_speech.py](./synthesize_speech.py), we show a basic u
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install TTS scipy
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install TTS scipy
```
@@ -34,7 +49,7 @@ After setting up the Python environment and downloading Bark model, you could ru
#### 3.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
# make sure `--model-path` corresponds to the local folder of downloaded model
python ./synthesize_speech.py --model-path 'bark/' --text "This is an example text for synthesize speech."
```


@@ -10,19 +10,31 @@ In the example [extract_feature.py](./extract_feature.py), we show a basic use c
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./extract_feature.py --text 'This is an example text for feature extraction.'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section.
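Feature extraction here means running an encoder forward pass and keeping the hidden states rather than generating text. One way to express that flow with `optimize_model` (a sketch; the checkpoint name is an assumption and the bundled extract_feature.py may differ):

```python
import torch
from transformers import AutoModel, AutoTokenizer
from ipex_llm import optimize_model

model = AutoModel.from_pretrained("bert-base-uncased")  # assumed encoder checkpoint
model = optimize_model(model)  # apply IPEX-LLM low-bit optimizations to a stock PyTorch model
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("This is an example text for feature extraction.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```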


@@ -10,11 +10,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Blue
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -22,7 +35,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'AI是什么'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,12 +10,27 @@ In the example [generate.py](./generate.py), we show a basic use case for a Chat
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
pip install "transformers<4.34.1" # chatglm cannot work with transformers 4.34.1+
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install "transformers<4.34.1"
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install "transformers<4.34.1"
```
### 2. Run
@@ -23,7 +38,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'AI是什么'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,11 +10,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Chat
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -22,7 +35,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'AI是什么'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,23 +10,36 @@ In the example [generate.py](./generate.py), we show a basic use case for a Code
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# install ipex-llm with 'all' option
pip install --pre --upgrade ipex-llm[all]
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# According to CodeGemma's requirement, please make sure you are using a stable version of Transformers, 4.38.1 or newer.
pip install transformers==4.38.1
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.38.1
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,20 +10,35 @@ In the example [generate.py](./generate.py), we show a basic use case for a Code
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.34.1 # CodeLlamaTokenizer is supported in higher version of transformers
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.34.1
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'def print_hello_world():'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,11 +10,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Code
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -22,7 +35,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'def print_hello_world():'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -8,12 +8,27 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [generate.py](./generate.py), we show a basic use case for a cohere model to predict the next N tokens using `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.40.0
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install ipex-llm with 'all' option
pip install tansformers==4.40.0
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.40.0
```
### 2. Run
@@ -32,7 +47,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -10,20 +10,34 @@ In the example [generate.py](./generate.py), we show a basic use case for a Deci
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.35.2 # required by DeciLM-7B
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.35.2
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,11 +10,25 @@ In the example [generate.py](./generate.py), we show a basic use case for a deep
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install einops
```
@@ -23,7 +37,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,11 +10,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Deep
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -22,7 +35,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -11,12 +11,26 @@ In the example [recognize.py](./recognize.py), we show a basic use case for a Di
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install datasets soundfile librosa # required by audio processing
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
pip install datasets soundfile librosa # required by audio processing
pip install --pre --upgrade ipex-llm[all]
pip install datasets soundfile librosa
```
### 2. Run
@@ -28,7 +42,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./recognize.py
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -11,11 +11,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Flan
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -27,7 +40,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'Translate to German: My name is Arthur'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.
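Flan-T5 is an encoder-decoder model, so it goes through the Seq2Seq auto class rather than the causal-LM one. A minimal sketch (assuming IPEX-LLM's `AutoModelForSeq2SeqLM` wrapper; the model id is a placeholder):

```python
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForSeq2SeqLM

model_path = "google/flan-t5-large"  # placeholder
model = AutoModelForSeq2SeqLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

input_ids = tokenizer("Translate to German: My name is Arthur", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```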


@@ -10,15 +10,30 @@ In the example [generate.py](./generate.py), we show a basic use case for an Fuy
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.35 pillow # additional package required for Fuyu to conduct generation
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.35 pillow
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
@@ -28,7 +43,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --image-path demo.jpg
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,14 +10,28 @@ In the example [chat.py](./chat.py), we show a basic use case for an InternLM_XC
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install accelerate timm==0.4.12 sentencepiece==0.1.99 gradio==3.44.4 markdown2==2.4.10 xlsxwriter==3.1.2 einops # additional package required for InternLM_XComposer to conduct generation
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install accelerate timm==0.4.12 sentencepiece==0.1.99 gradio==3.44.4 markdown2==2.4.10 xlsxwriter==3.1.2 einops
```
### 2. Download Model and Replace File
@@ -49,7 +63,7 @@ After setting up the Python environment, you could run the example by following
#### 3.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./chat.py --image-path demo.jpg
```
More information about arguments can be found in [Arguments Info](#33-arguments-info) section. The expected output can be found in [Sample Output](#34-sample-output) section.


@@ -10,11 +10,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Inte
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -22,7 +35,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,11 +10,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Llam
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -22,7 +35,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,22 +10,37 @@ In the example [generate.py](./generate.py), we show a basic use case for a Llam
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# transformers>=4.33.0 is required for Llama3 with IPEX-LLM optimizations
pip install transformers==4.37.0
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.37.0
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.
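Llama 3 defines its own chat markup, which is part of why the newer transformers pin matters. Rather than hand-writing the special tokens, one option is the tokenizer's `apply_chat_template` (a sketch; the bundled generate.py may format the prompt manually, and the repo id is a gated placeholder):

```python
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder; gated on HuggingFace
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

messages = [{"role": "user", "content": "What is AI?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```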


@@ -11,11 +11,15 @@ In the example [generate.py](./generate.py), we show a basic use case for a LLaV
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # install dependencies required by llava
pip install transformers==4.36.2
@@ -25,6 +29,22 @@ cd LLaVA # change the working directory to the LLaVA folder
git checkout tags/v1.2.0 -b 1.2.0 # Get the branch which is compatible with transformers 4.36
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install einops
pip install transformers==4.36.2
git clone https://github.com/haotian-liu/LLaVA.git
copy generate.py .\LLaVA\
cd LLaVA
git checkout tags/v1.2.0 -b 1.2.0
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
@@ -34,7 +54,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --image-path-or-url 'https://llava-vl.github.io/static/images/monalisa.jpg'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -10,20 +10,34 @@ In the example [generate.py](./generate.py), we show a basic use case for a Mamb
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops # package required by Mamba
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install einops
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```


@@ -9,6 +9,9 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [example_chat_completion.py](./example_chat_completion.py), we show a basic use case for a Llama model to engage in a conversation with an AI assistant using `chat_completion` API, with IPEX-LLM INT4 optimizations. The process for [example_text_completion.py](./example_text_completion.py) is similar.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
@@ -19,8 +22,22 @@ cd llama/
git apply < ../cpu.patch # apply cpu version patch
pip install -e .
cd -
pip install ipex-llm[all] # install ipex-llm with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
git clone https://github.com/facebookresearch/llama.git
cd llama/
git apply < ../cpu.patch
pip install -e .
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@@ -46,7 +63,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
torchrun --nproc-per-node 1 example_chat_completion.py --ckpt_dir llama-2-7b-chat/ --tokenizer_path tokenizer.model --max_seq_len 64 --max_batch_size 1 --backend cpu
```


@@ -12,22 +12,35 @@ In the example [generate.py](./generate.py), we show a basic use case for a Mist
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# Refer to https://huggingface.co/mistralai/Mistral-7B-v0.1#troubleshooting, please make sure you are using a stable version of Transformers, 4.34.0 or newer.
pip install transformers==4.34.0
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.34.0
```
### 2. Run
After setting up the Python environment, you can run the example with the following steps.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in [Arguments Info](#23-arguments-info) section. The expected output can be found in [Sample Output](#24-sample-output) section.


@@ -12,18 +12,31 @@ In the example [generate.py](./generate.py), we show a basic use case for a Mixt
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
# below command will install PyTorch CPU as default
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cpu
pip install --pre --upgrade ipex-llm[all]
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# Please make sure you are using a stable version of Transformers, 4.36.0 or newer.
pip install transformers==4.36.0
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.36.0
```
### 2. Run


@@ -9,15 +9,30 @@ To run these examples with IPEX-LLM, we have some recommended requirements for y
In the example [recognize.py](./recognize.py), we show a basic use case for a Whisper model to conduct transcription using `transcribe()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage environment:
On Linux:
```bash
conda create -n llm python=3.11
conda activate llm
pip install ipex-llm[all] # install ipex-llm with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install -U openai-whisper
pip install librosa # required by audio processing
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install -U openai-whisper
pip install librosa
```
### 2. Run
```
python ./recognize.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --repo-id-or-data-path REPO_ID_OR_DATA_PATH --language LANGUAGE
@@ -35,7 +50,7 @@ Arguments info:
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./recognize.py --audio-file /PATH/TO/AUDIO_FILE
```
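Unlike the HuggingFace Whisper examples above, this one optimizes the original openai-whisper package in place, so the familiar `transcribe()` API stays unchanged. A minimal sketch (the audio file name is a placeholder):

```python
import whisper
from ipex_llm import optimize_model

model = whisper.load_model("tiny")  # any openai-whisper size works
model = optimize_model(model)       # apply IPEX-LLM INT4 optimizations in place
result = model.transcribe("audio.wav", language="en")
print(result["text"])
```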


@@ -10,11 +10,25 @@ In the example [generate.py](./generate.py), we show a basic use case for a phi-
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend to use Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install einops
```
@ -23,7 +37,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
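As a side note, once a model has been converted to INT4 it can be saved and reloaded so later runs skip the conversion step. The sketch below assumes the `save_low_bit`/`load_low_bit` helpers of IPEX-LLM's transformers-style API; the directory and model path are placeholders:
```python
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "REPO_ID_OR_MODEL_PATH"  # placeholder
save_dir = "./model-int4"             # placeholder output directory

# first run: convert to INT4, then persist the low-bit weights
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model.save_low_bit(save_dir)
AutoTokenizer.from_pretrained(model_path,
                              trust_remote_code=True).save_pretrained(save_dir)

# later runs: load the already-converted weights directly
model = AutoModelForCausalLM.load_low_bit(save_dir, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(save_dir, trust_remote_code=True)
```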

View file

@ -10,11 +10,25 @@ In the example [generate.py](./generate.py), we show a basic use case for a phi-
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install einops
```
@ -23,7 +37,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

View file

@ -15,11 +15,26 @@ In the example [generate.py](./generate.py), we show a basic use case for a phi-
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.37.0
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.37.0
```
@ -29,7 +44,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

View file

@ -10,11 +10,25 @@ In the example [generate.py](./generate.py), we show a basic use case for a phix
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install einops
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install einops
```
@ -23,7 +37,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

View file

@ -10,14 +10,26 @@ In the example [chat.py](./chat.py), we show a basic use case for a Qwen-VL mode
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib # additional packages required for Qwen-VL-Chat to conduct generation
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install accelerate tiktoken einops transformers_stream_generator==0.0.4 scipy torchvision pillow tensorboard matplotlib
```
### 2. Run
@ -25,7 +37,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./chat.py
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
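For context, Qwen-VL-Chat exposes a model-specific multimodal `chat()` API through `trust_remote_code`. The sketch below is an approximation of what `chat.py` does — `from_list_format` and `chat` come from Qwen-VL's remote code rather than from IPEX-LLM, so their exact signatures are assumptions here, and the image path is a placeholder:
```python
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "Qwen/Qwen-VL-Chat"  # placeholder repo id

model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# from_list_format() and chat() are provided by Qwen-VL's remote code
query = tokenizer.from_list_format([
    {"image": "/PATH/TO/IMAGE"},           # placeholder image path
    {"text": "What is in this picture?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```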

View file

@ -10,11 +10,15 @@ In the example [generate.py](./generate.py), we show a basic use case for a Qwen
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.37.0 # install transformers which supports Qwen2
# only for Qwen1.5-MoE-A2.7B
@ -22,12 +26,26 @@ pip install transformers==4.40.0
pip install trl==0.8.1
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.37.0
REM only for Qwen1.5-MoE-A2.7B
pip install transformers==4.40.0
pip install trl==0.8.1
```
### 2. Run
After setting up the Python environment, you can run the example by following the steps below.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
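Qwen1.5 checkpoints ship a chat template (one reason for the `transformers>=4.37` pin), so prompts are normally built with `apply_chat_template` rather than raw strings. A minimal sketch; the repo id is a placeholder:
```python
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "Qwen/Qwen1.5-7B-Chat"  # placeholder repo id

model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# build the prompt with the model's own chat template
messages = [{"role": "user", "content": "What is AI?"}]
input_ids = tokenizer.apply_chat_template(messages,
                                          add_generation_prompt=True,
                                          return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=32)
# decode only the newly generated tokens, not the prompt
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```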

View file

@ -10,11 +10,24 @@ In the example [generate.py](./generate.py), we show a basic use case for a Skyw
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
```
### 2. Run
@ -22,7 +35,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
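If you would rather watch tokens appear incrementally than wait for the full completion, `transformers`' `TextStreamer` can be passed to `generate()`. A hedged sketch; the model path is a placeholder:
```python
from transformers import AutoTokenizer, TextStreamer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "REPO_ID_OR_MODEL_PATH"  # placeholder

model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# TextStreamer prints decoded tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
model.generate(input_ids, max_new_tokens=64, streamer=streamer)
```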

View file

@ -10,20 +10,34 @@ In the example [generate.py](./generate.py), we show a basic use case for a SOLA
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.35.2 # required by SOLAR
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.35.2
```
### 2. Run
After setting up the Python environment, you can run the example by following the steps below.
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
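The command line above maps onto a small `argparse` block. The following sketch shows how the documented flags typically wire into such a script; the defaults are illustrative, not necessarily the example's actual defaults:
```python
import argparse

parser = argparse.ArgumentParser(description="Generate text with IPEX-LLM INT4 optimizations")
parser.add_argument("--repo-id-or-model-path", type=str,
                    default="upstage/SOLAR-10.7B-Instruct-v1.0",  # illustrative default
                    help="Hugging Face repo id or local model path")
parser.add_argument("--prompt", type=str, default="What is AI?",
                    help="prompt to feed the model")
parser.add_argument("--n-predict", type=int, default=32,
                    help="max number of new tokens to generate")
args = parser.parse_args()

# argparse converts the dashes, so the values arrive as
# args.repo_id_or_model_path, args.prompt and args.n_predict
```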

View file

@ -10,16 +10,31 @@ In the example [generate.py](./generate.py), we show a basic use case for a Stab
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).
After installing conda, create a Python environment for IPEX-LLM:
On Linux:
```bash
conda create -n llm python=3.11 # recommend using Python 3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all] # install the latest ipex-llm nightly build with 'all' option
# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
# Refer to https://huggingface.co/stabilityai/stablelm-zephyr-3b/blob/8b471c751c0e78cb46cf9f47738dd0eb45392071/config.json#L21, please make sure you are using a stable version of Transformers, 4.38.0 or newer.
pip install transformers==4.38.0
```
On Windows:
```cmd
conda create -n llm python=3.11
conda activate llm
pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.38.0
```
### 2. Run
After setting up the Python environment, you can run the example by following the steps below.
@ -29,7 +44,7 @@ After setting up the Python environment, you could run the example by following
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
```cmd
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.
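When comparing runs across machines or settings, it helps to time only the `generate()` call, after a short warm-up, under `torch.inference_mode()`. A minimal sketch; the model path is a placeholder:
```python
import time
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "REPO_ID_OR_MODEL_PATH"  # placeholder

model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
with torch.inference_mode():
    model.generate(input_ids, max_new_tokens=4)   # warm-up pass
    start = time.perf_counter()
    output = model.generate(input_ids, max_new_tokens=32)
    elapsed = time.perf_counter() - start
print(f"generated in {elapsed:.2f} s")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```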

Some files were not shown because too many files have changed in this diff.