add official Mixtral model support (#9663)

parent 8931f2eb62
commit bf1bcf4a14

4 changed files with 24 additions and 10 deletions
````diff
@@ -1,5 +1,5 @@
 # Mixtral
-In this directory, you will find examples of how you can apply BigDL-LLM INT4 optimizations on Mixtral models on [Intel GPUs](../README.md). For illustration purposes, we use [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert) as a reference Mixtral model.
+In this directory, you will find examples of how you can apply BigDL-LLM INT4 optimizations on Mixtral models on [Intel GPUs](../README.md). For illustration purposes, we use [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) and [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert) as reference Mixtral models.
 
 ## Requirements
 To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.
````
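The INT4 workflow described in the README hunk above comes down to loading the checkpoint through BigDL-LLM's `transformers`-style API and moving it to the GPU. A minimal sketch, assuming the API as used across these examples (`model_path` is a placeholder):

```python
# Minimal sketch of the INT4 flow, assuming BigDL-LLM's transformers-style
# API; `model_path` is a placeholder.
import intel_extension_for_pytorch as ipex  # noqa: F401 - registers the 'xpu' device
from bigdl.llm.transformers import AutoModelForCausalLM

model_path = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # or a local checkpoint folder

# load_in_4bit=True applies the INT4 optimizations while the model loads
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to('xpu')  # run the optimized model on the Intel GPU
```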
````diff
@@ -24,7 +24,7 @@ pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-w
 pip install transformers==4.36.0
 ```
 
-### 2. Download Model and Replace File
+### (Optional) 2. Download Model and Replace File
 To run the [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert) model on Intel GPUs, we provide an updated version of `modeling_moe_mistral.py` in [DiscoResearch-mixtral-7b-8expert/modeling_moe_mistral.py](./DiscoResearch-mixtral-7b-8expert/modeling_moe_mistral.py).
 
 #### 2.1 Download Model
````
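Here, "replace file" means overwriting the checkpoint's stock `modeling_moe_mistral.py` with the version shipped alongside the example. A hypothetical sketch of that step (the README's exact 2.1/2.2 commands are not part of this hunk; the paths and the use of `huggingface_hub` are assumptions):

```python
# Hypothetical sketch of the download-and-replace step; the README's exact
# commands are not shown in this hunk, and the local paths are placeholders.
import shutil
from huggingface_hub import snapshot_download

# Download the checkpoint into a local folder
local_dir = snapshot_download(repo_id="DiscoResearch/mixtral-7b-8expert",
                              local_dir="./mixtral-7b-8expert")

# Overwrite the stock modeling file with the updated version from this example
shutil.copy("./DiscoResearch-mixtral-7b-8expert/modeling_moe_mistral.py",
            f"{local_dir}/modeling_moe_mistral.py")

# The resulting folder is what --repo-id-or-model-path should point to
print(local_dir)
```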
````diff
@@ -61,11 +61,18 @@ python ./generate.py --prompt 'What is AI?'
 
 In the example, several arguments can be passed to satisfy your requirements:
 
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Mixtral model (e.g. `DiscoResearch/mixtral-7b-8expert`) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'DiscoResearch/mixtral-7b-8expert'`. For the model `DiscoResearch/mixtral-7b-8expert`, you should pass the path to the model folder in which `modeling_moe_mistral.py` has been replaced.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Mixtral model (e.g. `mistralai/Mixtral-8x7B-Instruct-v0.1` or `DiscoResearch/mixtral-7b-8expert`) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'mistralai/Mixtral-8x7B-Instruct-v0.1'`. For the model `DiscoResearch/mixtral-7b-8expert`, you should pass the path to the model folder in which `modeling_moe_mistral.py` has been replaced.
 - `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to `'What is AI?'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
 
-#### Sample Output
+#### [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
+```log
+Inference time: xxxx s
+-------------------- Output --------------------
+[INST] What is AI? [/INST] AI, or Artificial Intelligence, refers to the development of computer systems that can perform tasks that would normally require human intelligence to accomplish. These tasks can include things
+```
+
+#### [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert)
 ```log
 Inference time: xxxx s
 -------------------- Output --------------------
````
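Putting these flags together, a typical invocation of the script documented above is `python ./generate.py --repo-id-or-model-path mistralai/Mixtral-8x7B-Instruct-v0.1 --prompt 'What is AI?' --n-predict 32`, which simply spells out the documented defaults explicitly.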
````diff
@@ -28,8 +28,8 @@ MIXTRAL_PROMPT_FORMAT = """<s>[INST] {prompt} [/INST]"""
 
 if __name__ == '__main__':
     parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Mixtral model')
-    parser.add_argument('--repo-id-or-model-path', type=str, default="DiscoResearch/mixtral-7b-8expert",
-                        help='The huggingface repo id for the Mixtral model (e.g. `DiscoResearch/mixtral-7b-8expert`) to be downloaded'
+    parser.add_argument('--repo-id-or-model-path', type=str, default="mistralai/Mixtral-8x7B-Instruct-v0.1",
+                        help='The huggingface repo id for the Mixtral model (e.g. `mistralai/Mixtral-8x7B-Instruct-v0.1` or `DiscoResearch/mixtral-7b-8expert`) to be downloaded'
                              ', or the path to the huggingface checkpoint folder. For model `DiscoResearch/mixtral-7b-8expert`, '
                              'you should input the path to the model folder in which `modeling_moe_mistral.py` has been replaced.')
     parser.add_argument('--prompt', type=str, default="What is AI?",
````
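For context, a condensed sketch of how the remainder of such a `generate.py` typically consumes these arguments, following the BigDL-LLM GPU example pattern (it continues the snippet above, so `parser` and `MIXTRAL_PROMPT_FORMAT` are assumed from there; this is not the full file):

```python
# Condensed sketch (not the full file); `parser` and MIXTRAL_PROMPT_FORMAT
# come from the snippet above, per the BigDL-LLM GPU example pattern.
import torch
from transformers import AutoTokenizer
from bigdl.llm.transformers import AutoModelForCausalLM

args = parser.parse_args()
model_path = args.repo_id_or_model_path

# Load with INT4 optimizations and move to the Intel GPU
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True).to('xpu')
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = MIXTRAL_PROMPT_FORMAT.format(prompt=args.prompt)
input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=args.n_predict)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```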
````diff
@@ -1,5 +1,5 @@
 # Mixtral
-In this directory, you will find examples of how you can use the BigDL-LLM `optimize_model` API to accelerate Mixtral models. For illustration purposes, we use [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert) as a reference Mixtral model.
+In this directory, you will find examples of how you can use the BigDL-LLM `optimize_model` API to accelerate Mixtral models. For illustration purposes, we use [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) and [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert) as reference Mixtral models.
 
 ## Requirements
 To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.
````
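This second README takes a different route from the INT4 example earlier: a stock Hugging Face model is first loaded normally and then wrapped with `optimize_model`. A minimal sketch, assuming BigDL-LLM's PyTorch API (`model_path` is a placeholder):

```python
# Minimal sketch of the optimize_model flow, assuming BigDL-LLM's PyTorch
# API; `model_path` is a placeholder.
import intel_extension_for_pytorch as ipex  # noqa: F401 - registers the 'xpu' device
from transformers import AutoModelForCausalLM
from bigdl.llm import optimize_model

model_path = "mistralai/Mixtral-8x7B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             trust_remote_code=True,
                                             torch_dtype='auto',
                                             low_cpu_mem_usage=True)
model = optimize_model(model)  # apply BigDL-LLM low-bit optimizations
model = model.to('xpu')        # this README targets Intel GPUs
```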
````diff
@@ -24,7 +24,7 @@ pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-w
 pip install transformers==4.36.0
 ```
 
-### 2. Download Model and Replace File
+### (Optional) 2. Download Model and Replace File
 To run the [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert) model on Intel GPUs, we provide an updated version of `modeling_moe_mistral.py` in [DiscoResearch-mixtral-7b-8expert/modeling_moe_mistral.py](./DiscoResearch-mixtral-7b-8expert/modeling_moe_mistral.py).
 
 
````
````diff
@@ -62,11 +62,18 @@ python ./generate.py --prompt 'What is AI?'
 
 In the example, several arguments can be passed to satisfy your requirements:
 
-- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Mixtral model (e.g. `DiscoResearch/mixtral-7b-8expert`) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'DiscoResearch/mixtral-7b-8expert'`. For the model `DiscoResearch/mixtral-7b-8expert`, you should pass the path to the model folder in which `modeling_moe_mistral.py` has been replaced.
+- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the Mixtral model (e.g. `mistralai/Mixtral-8x7B-Instruct-v0.1` or `DiscoResearch/mixtral-7b-8expert`) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'mistralai/Mixtral-8x7B-Instruct-v0.1'`. For the model `DiscoResearch/mixtral-7b-8expert`, you should pass the path to the model folder in which `modeling_moe_mistral.py` has been replaced.
 - `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to `'What is AI?'`.
 - `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
 
-#### Sample Output
+#### [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
+```log
+Inference time: xxxx s
+-------------------- Output --------------------
+[INST] What is AI? [/INST] AI, or Artificial Intelligence, refers to the development of computer systems that can perform tasks that would normally require human intelligence to accomplish. These tasks can include things
+```
+
+#### [DiscoResearch/mixtral-7b-8expert](https://huggingface.co/DiscoResearch/mixtral-7b-8expert)
 ```log
 Inference time: xxxx s
 -------------------- Output --------------------
````
````diff
@@ -28,8 +28,8 @@ MIXTRAL_PROMPT_FORMAT = """<s>[INST] {prompt} [/INST]"""
 
 if __name__ == '__main__':
     parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Mixtral model')
-    parser.add_argument('--repo-id-or-model-path', type=str, default="DiscoResearch/mixtral-7b-8expert",
-                        help='The huggingface repo id for the Mixtral model (e.g. `DiscoResearch/mixtral-7b-8expert`) to be downloaded'
+    parser.add_argument('--repo-id-or-model-path', type=str, default="mistralai/Mixtral-8x7B-Instruct-v0.1",
+                        help='The huggingface repo id for the Mixtral model (e.g. `mistralai/Mixtral-8x7B-Instruct-v0.1` or `DiscoResearch/mixtral-7b-8expert`) to be downloaded'
                              ', or the path to the huggingface checkpoint folder. For model `DiscoResearch/mixtral-7b-8expert`, '
                              'you should input the path to the model folder in which `modeling_moe_mistral.py` has been replaced.')
     parser.add_argument('--prompt', type=str, default="What is AI?",
````
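The `MIXTRAL_PROMPT_FORMAT` string in the hunk context above is the chat template both `generate.py` scripts wrap around the raw prompt; its effect is easy to see in isolation:

```python
# The chat template from the hunk context above, applied to the default prompt.
MIXTRAL_PROMPT_FORMAT = """<s>[INST] {prompt} [/INST]"""

prompt = MIXTRAL_PROMPT_FORMAT.format(prompt="What is AI?")
print(prompt)  # <s>[INST] What is AI? [/INST]
```

This matches the `[INST] What is AI? [/INST]` prefix visible in the sample outputs above.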