Update llama example information (#12640)
Co-authored-by: ATMxsp01 <shou.xu@intel.com>
parent 81211fd010
commit 62318964fa

4 changed files with 9 additions and 15 deletions
@@ -1,5 +1,5 @@
 # Llama3.1
-In this directory, you will find examples on how you could apply IPEX-LLM INT4 optimizations on Llama3.1 models on [Intel GPUs](../../../README.md). For illustration purposes, we utilize the [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) as a reference Llama3.1 model.
+In this directory, you will find examples on how you could apply IPEX-LLM INT4 optimizations on Llama3.1 models on [Intel GPUs](../../../README.md). For illustration purposes, we utilize the [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as a reference Llama3.1 model.
 
 ## 0. Requirements
 To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../../../README.md#requirements) for more information.
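Note: the repo id rename above can be sanity-checked before any download. A minimal sketch, assuming `huggingface_hub` is installed and that you have accepted the Llama license and configured a Hugging Face token (the meta-llama repos are gated):

```python
# Minimal sketch: confirm the renamed repo id resolves on the Hugging Face Hub.
# Assumes `huggingface_hub` is installed and a valid token is configured
# (e.g. via `huggingface-cli login`), since meta-llama repos are gated.
from huggingface_hub import model_info

info = model_info("meta-llama/Llama-3.1-8B-Instruct")
print(info.id)  # canonical repo id, if the repo resolves
```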
@@ -104,12 +104,12 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
 ```
 
 Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Llama3.1 model (e.g. `meta-llama/Meta-Llama-3.1-8B-Instruct`) to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'meta-llama/Meta-Llama-3.1-8B-Instruct'`.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Llama3.1 model (e.g. `meta-llama/Llama-3.1-8B-Instruct`) to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'meta-llama/Llama-3.1-8B-Instruct'`.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'What is AI?'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `32`.
 
 #### Sample Output
-#### [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
+#### [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
 ```log
 Inference time: xxxx s
 -------------------- Prompt --------------------
@@ -42,8 +42,8 @@ def get_prompt(user_input: str, chat_history: list[tuple[str, str]],
 
 if __name__ == '__main__':
     parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Llama3.1 model')
-    parser.add_argument('--repo-id-or-model-path', type=str, default="meta-llama/Meta-Llama-3.1-8B-Instruct",
-                        help='The huggingface repo id for the Llama3 (e.g. `meta-llama/Meta-Llama-3.1-8B-Instruct`) to be downloaded'
+    parser.add_argument('--repo-id-or-model-path', type=str, default="meta-llama/Llama-3.1-8B-Instruct",
+                        help='The huggingface repo id for the Llama3 (e.g. `meta-llama/Llama-3.1-8B-Instruct`) to be downloaded'
                              ', or the path to the huggingface checkpoint folder')
     parser.add_argument('--prompt', type=str, default="What is AI?",
                         help='Prompt to infer')
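For context, a minimal sketch of how the updated default flows into model loading. This assumes the INT4-on-GPU loading pattern these examples follow (`ipex_llm.transformers.AutoModelForCausalLM` with `load_in_4bit=True`, model moved to `'xpu'`); it is not a verbatim excerpt of `generate.py`, which remains the authoritative code:

```python
# Sketch of consuming the updated default repo id; assumes the IPEX-LLM
# INT4-on-GPU loading pattern these examples use, not a verbatim excerpt.
import argparse

from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

parser = argparse.ArgumentParser()
parser.add_argument('--repo-id-or-model-path', type=str,
                    default="meta-llama/Llama-3.1-8B-Instruct")  # new default
args = parser.parse_args()

# Load with INT4 optimizations and move to the Intel GPU.
model = AutoModelForCausalLM.from_pretrained(args.repo_id_or_model_path,
                                             load_in_4bit=True,
                                             use_cache=True)
model = model.half().to('xpu')
tokenizer = AutoTokenizer.from_pretrained(args.repo_id_or_model_path)
```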
@@ -1,5 +1,5 @@
 # Llama3.2
-In this directory, you will find examples on how you could apply IPEX-LLM INT4 optimizations on Llama3.2 models on [Intel GPUs](../../../README.md). For illustration purposes, we utilize the [meta-llama/Meta-Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.2-3B-Instruct) and [meta-llama/Meta-Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.2-1B-Instruct) as reference Llama3.2 models.
+In this directory, you will find examples on how you could apply IPEX-LLM INT4 optimizations on Llama3.2 models on [Intel GPUs](../../../README.md). For illustration purposes, we utilize the [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) and [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) as reference Llama3.2 models.
 
 ## 0. Requirements
 To run these examples with IPEX-LLM on Intel GPUs, we have some recommended requirements for your machine, please refer to [here](../../../README.md#requirements) for more information.
@@ -104,12 +104,12 @@ python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROM
 ```
 
 Arguments info:
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Llama3.2 model (e.g. `meta-llama/Meta-Llama-3.2-3B-Instruct`) to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'meta-llama/Meta-Llama-3.2-3B-Instruct'`.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Llama3.2 model (e.g. `meta-llama/Llama-3.2-3B-Instruct`) to be downloaded, or the path to the huggingface checkpoint folder. It is default to be `'meta-llama/Llama-3.2-3B-Instruct'`.
 - `--prompt PROMPT`: argument defining the prompt to be infered (with integrated prompt format for chat). It is default to be `'What is AI?'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It is default to be `32`.
 
 #### Sample Output
-#### [meta-llama/Meta-Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.2-3B-Instruct)
+#### [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
 ```log
 Inference time: xxxx s
 -------------------- Prompt --------------------
@@ -126,7 +126,7 @@ What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks that would typically require human intelligence, such as learning, problem-solving, and
 ```
 
-#### [meta-llama/Meta-Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.2-1B-Instruct)
+#### [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
 ```log
 Inference time: xxxx s
 -------------------- Prompt --------------------
@@ -14,9 +14,6 @@ conda create -n llm python=3.11
 conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-
-# transformers>=4.33.0 is required for Llama3 with IPEX-LLM optimizations
-pip install transformers==4.37.0
 ```
 
 #### 1.2 Installation on Windows
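With the pinned `pip install transformers==4.37.0` step dropped, the installed `transformers` version is presumably whatever the `ipex-llm[xpu]` dependencies pull in. A standard-library-only sketch for checking what actually got installed:

```python
# Report the transformers version the environment ended up with after
# installing ipex-llm[xpu]; uses only the Python standard library.
from importlib.metadata import version

print(version("transformers"))
```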
@@ -27,9 +24,6 @@ conda activate llm
 
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-
-# transformers>=4.33.0 is required for Llama3 with IPEX-LLM optimizations
-pip install transformers==4.37.0
 ```
 
 ### 2. Configures OneAPI environment variables for Linux