From 842d6dfc2d67529547589d799789fbb9cdee1ad1 Mon Sep 17 00:00:00 2001
From: ZehuaCao <47251317+Romanticoseu@users.noreply.github.com>
Date: Tue, 21 May 2024 13:55:47 +0800
Subject: [PATCH] Further Modify CPU example (#11081)

* modify CPU example

* update
---
 python/llm/example/CPU/LangChain/README.md                      | 2 +-
 python/llm/example/CPU/QLoRA-FineTuning/README.md               | 2 +-
 python/llm/example/CPU/README.md                                | 2 +-
 python/llm/example/CPU/Speculative-Decoding/README.md           | 2 +-
 python/llm/example/CPU/Speculative-Decoding/baichuan2/README.md | 2 +-
 python/llm/example/CPU/Speculative-Decoding/chatglm3/README.md  | 2 +-
 python/llm/example/CPU/Speculative-Decoding/llama2/README.md    | 2 +-
 python/llm/example/CPU/Speculative-Decoding/llama3/README.md    | 2 +-
 python/llm/example/CPU/Speculative-Decoding/mistral/README.md   | 2 +-
 python/llm/example/CPU/Speculative-Decoding/mixtral/README.md   | 2 +-
 python/llm/example/CPU/Speculative-Decoding/qwen/README.md      | 2 +-
 python/llm/example/CPU/Speculative-Decoding/starcoder/README.md | 2 +-
 python/llm/example/CPU/Speculative-Decoding/vicuna/README.md    | 2 +-
 python/llm/example/CPU/Speculative-Decoding/ziya/README.md      | 2 +-
 python/llm/example/CPU/vLLM-Serving/README.md                   | 2 +-
 15 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/python/llm/example/CPU/LangChain/README.md b/python/llm/example/CPU/LangChain/README.md
index 775083a2..a31c4e13 100644
--- a/python/llm/example/CPU/LangChain/README.md
+++ b/python/llm/example/CPU/LangChain/README.md
@@ -4,7 +4,7 @@ This folder contains examples showcasing how to use `langchain` with `ipex-llm`.
 ### Install-IPEX LLM

-Ensure `ipex-llm` is installed by following the [IPEX-LLM Installation Guide](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm#install).
+Ensure `ipex-llm` is installed by following the [IPEX-LLM Installation Guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Overview/install_cpu.html).

 ### Install Dependences Required by the Examples

diff --git a/python/llm/example/CPU/QLoRA-FineTuning/README.md b/python/llm/example/CPU/QLoRA-FineTuning/README.md
index e03e5683..5744ebf2 100644
--- a/python/llm/example/CPU/QLoRA-FineTuning/README.md
+++ b/python/llm/example/CPU/QLoRA-FineTuning/README.md
@@ -18,7 +18,7 @@ This example is ported from [bnb-4bit-training](https://colab.research.google.co
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip install transformers==4.36.0
 pip install peft==0.10.0
 pip install datasets

diff --git a/python/llm/example/CPU/README.md b/python/llm/example/CPU/README.md
index 54b78896..a0b59bfc 100644
--- a/python/llm/example/CPU/README.md
+++ b/python/llm/example/CPU/README.md
@@ -27,6 +27,6 @@ This folder contains examples of running IPEX-LLM on Intel CPU:
 ## Best Known Configuration on Linux
 For better performance, it is recommended to set environment variables on Linux with the help of IPEX-LLM:
 ```bash
-pip install ipex-llm
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 source ipex-llm-init
 ```

diff --git a/python/llm/example/CPU/Speculative-Decoding/README.md b/python/llm/example/CPU/Speculative-Decoding/README.md
index de6dcebe..8d603d2a 100644
--- a/python/llm/example/CPU/Speculative-Decoding/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/README.md
@@ -9,7 +9,7 @@ You can use IPEX-LLM to run BF16 inference for any Huggingface Transformer model
 To run these examples with IPEX-LLM, we have some recommended requirements for your machine, please refer to [here](../README.md#system-support) for more information.

 Make sure you have installed `ipex-llm` before:
 ```bash
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 ```
 Moreover, install IPEX 2.1.0, which can be done through `pip install intel_extension_for_pytorch==2.1.0`.

diff --git a/python/llm/example/CPU/Speculative-Decoding/baichuan2/README.md b/python/llm/example/CPU/Speculative-Decoding/baichuan2/README.md
index 9e13d373..95a1320c 100644
--- a/python/llm/example/CPU/Speculative-Decoding/baichuan2/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/baichuan2/README.md
@@ -11,7 +11,7 @@ We suggest using conda to manage environment:
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip install intel_extension_for_pytorch==2.1.0
 pip install transformers==4.31.0
 ```

diff --git a/python/llm/example/CPU/Speculative-Decoding/chatglm3/README.md b/python/llm/example/CPU/Speculative-Decoding/chatglm3/README.md
index 333a6263..9dfe58fd 100644
--- a/python/llm/example/CPU/Speculative-Decoding/chatglm3/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/chatglm3/README.md
@@ -9,7 +9,7 @@ We suggest using conda to manage environment:
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 ```
 ### 2. Configures OneAPI environment variables
 ```bash

diff --git a/python/llm/example/CPU/Speculative-Decoding/llama2/README.md b/python/llm/example/CPU/Speculative-Decoding/llama2/README.md
index 34646bcc..418b59e5 100644
--- a/python/llm/example/CPU/Speculative-Decoding/llama2/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/llama2/README.md
@@ -11,7 +11,7 @@ We suggest using conda to manage environment:
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip install intel_extension_for_pytorch==2.1.0
 ```
 ### 2. Configures high-performing processor environment variables

diff --git a/python/llm/example/CPU/Speculative-Decoding/llama3/README.md b/python/llm/example/CPU/Speculative-Decoding/llama3/README.md
index 35d7eec0..0e0a83bb 100644
--- a/python/llm/example/CPU/Speculative-Decoding/llama3/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/llama3/README.md
@@ -17,7 +17,7 @@ We suggest using conda to manage environment:
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 # transformers>=4.33.0 is required for Llama3 with IPEX-LLM optimizations
 pip install transformers==4.37.0
 ```

diff --git a/python/llm/example/CPU/Speculative-Decoding/mistral/README.md b/python/llm/example/CPU/Speculative-Decoding/mistral/README.md
index 6f824d2b..5cb56942 100644
--- a/python/llm/example/CPU/Speculative-Decoding/mistral/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/mistral/README.md
@@ -11,7 +11,7 @@ We suggest using conda to manage environment:
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip install intel_extension_for_pytorch==2.1.0
 pip install transformers==4.35.2
 ```

diff --git a/python/llm/example/CPU/Speculative-Decoding/mixtral/README.md b/python/llm/example/CPU/Speculative-Decoding/mixtral/README.md
index 446d95a5..fa1ccd3b 100644
--- a/python/llm/example/CPU/Speculative-Decoding/mixtral/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/mixtral/README.md
@@ -11,7 +11,7 @@ We suggest using conda to manage environment:
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip install transformers==4.36.0
 ```
 ### 2. Configures high-performing processor environment variables

diff --git a/python/llm/example/CPU/Speculative-Decoding/qwen/README.md b/python/llm/example/CPU/Speculative-Decoding/qwen/README.md
index ec5866f0..f6582640 100644
--- a/python/llm/example/CPU/Speculative-Decoding/qwen/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/qwen/README.md
@@ -10,7 +10,7 @@ We suggest using conda to manage environment:
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip install tiktoken einops transformers_stream_generator # additional package required for Qwen to conduct generation
 ```
 ### 2. Configures environment variables

diff --git a/python/llm/example/CPU/Speculative-Decoding/starcoder/README.md b/python/llm/example/CPU/Speculative-Decoding/starcoder/README.md
index eab5fd8a..d061bdad 100644
--- a/python/llm/example/CPU/Speculative-Decoding/starcoder/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/starcoder/README.md
@@ -11,7 +11,7 @@ We suggest using conda to manage environment:
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip install intel_extension_for_pytorch==2.1.0
 pip install transformers==4.31.0
 ```

diff --git a/python/llm/example/CPU/Speculative-Decoding/vicuna/README.md b/python/llm/example/CPU/Speculative-Decoding/vicuna/README.md
index bd85910f..c97e6baa 100644
--- a/python/llm/example/CPU/Speculative-Decoding/vicuna/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/vicuna/README.md
@@ -11,7 +11,7 @@ We suggest using conda to manage environment:
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip install intel_extension_for_pytorch==2.1.0
 ```
 ### 2. Configures high-performing processor environment variables

diff --git a/python/llm/example/CPU/Speculative-Decoding/ziya/README.md b/python/llm/example/CPU/Speculative-Decoding/ziya/README.md
index 837aa357..6fabf672 100644
--- a/python/llm/example/CPU/Speculative-Decoding/ziya/README.md
+++ b/python/llm/example/CPU/Speculative-Decoding/ziya/README.md
@@ -11,7 +11,7 @@ We suggest using conda to manage environment:
 ```bash
 conda create -n llm python=3.11
 conda activate llm
-pip install --pre --upgrade ipex-llm[all]
+pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip install intel_extension_for_pytorch==2.1.0
 pip install transformers==4.35.2
 ```

diff --git a/python/llm/example/CPU/vLLM-Serving/README.md b/python/llm/example/CPU/vLLM-Serving/README.md
index b7933112..e2e18b97 100644
--- a/python/llm/example/CPU/vLLM-Serving/README.md
+++ b/python/llm/example/CPU/vLLM-Serving/README.md
@@ -18,7 +18,7 @@ conda create -n ipex-vllm python=3.11
 conda activate ipex-vllm
 # Install dependencies
 pip3 install numpy
-pip3 install --pre --upgrade ipex-llm[all]
+pip3 install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
 pip3 install psutil
 pip3 install sentencepiece # Required for LLaMA tokenizer.
 pip3 install fastapi
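Taken together, the hunks above converge every CPU example on one environment setup. For reference, the common sequence the patch standardizes on looks roughly like this (a sketch assembled from the diffs; the `llm` environment name and any model-specific pins such as `pip install transformers==...` or `intel_extension_for_pytorch==2.1.0` vary per example and are shown in the individual READMEs):

```shell
# Create and activate an isolated conda environment (name is illustrative)
conda create -n llm python=3.11
conda activate llm

# Install ipex-llm. The --extra-index-url this patch adds points pip at the
# CPU-only PyTorch wheel index, so torch is resolved from CPU builds rather
# than the default (CUDA-enabled) wheels on PyPI.
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
```

Note that `--extra-index-url` supplements rather than replaces PyPI, so `ipex-llm` itself and its other dependencies still resolve normally.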