add phixtral and optimize phi-moe (#10052)

parent 676d6923f2
commit 7d2be7994f

12 changed files with 834 additions and 0 deletions
@@ -177,6 +177,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
| Yi | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/yi) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/yi) |
| BlueLM | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/bluelm) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/bluelm) |
| SOLAR | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/solar) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/solar) |
| Phixtral | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/phixtral) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/phixtral) |
| InternLM2 | [link](python/llm/example/CPU/HF-Transformers-AutoModels/Model/internlm2) | [link](python/llm/example/GPU/HF-Transformers-AutoModels/Model/internlm2) |

***For more details, please refer to the `bigdl-llm` [Document](https://test-bigdl-llm.readthedocs.io/en/main/doc/LLM/index.html), [Readme](python/llm), [Tutorial](https://github.com/intel-analytics/bigdl-llm-tutorial) and [API Doc](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/LLM/index.html).***
@@ -75,6 +75,7 @@ Over 20 models have been optimized/verified on `bigdl-llm`, including *LLaMA/LLa
| Yi | [link](example/CPU/HF-Transformers-AutoModels/Model/yi) | [link](example/GPU/HF-Transformers-AutoModels/Model/yi) |
| BlueLM | [link](example/CPU/HF-Transformers-AutoModels/Model/bluelm) | [link](example/GPU/HF-Transformers-AutoModels/Model/bluelm) |
| SOLAR | [link](example/CPU/HF-Transformers-AutoModels/Model/solar) | [link](example/GPU/HF-Transformers-AutoModels/Model/solar) |
| Phixtral | [link](example/CPU/HF-Transformers-AutoModels/Model/phixtral) | [link](example/GPU/HF-Transformers-AutoModels/Model/phixtral) |
| InternLM2 | [link](example/CPU/HF-Transformers-AutoModels/Model/internlm2) | [link](example/GPU/HF-Transformers-AutoModels/Model/internlm2) |

### Working with `bigdl-llm`
@@ -0,0 +1,73 @@
# Phixtral-4x2_8

In this directory, you will find examples on how you could apply BigDL-LLM INT4 optimizations on phixtral models. For illustration purposes, we utilize [mlabonne/phixtral-4x2_8](https://huggingface.co/mlabonne/phixtral-4x2_8) as a reference phixtral model.

> **Note**: If you want to download the Hugging Face *Transformers* model, please refer to [here](https://huggingface.co/docs/hub/models-downloading#using-git).
>
> BigDL-LLM optimizes the *Transformers* model in INT4 precision at runtime, and thus no explicit conversion is needed.

## Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a phixtral model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.
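In essence, the example boils down to two calls: loading the checkpoint with `load_in_4bit=True` and then calling `generate()`. A minimal sketch of that flow (the model path and prompt are placeholders you would replace):

```python
import torch
from transformers import AutoTokenizer, GenerationConfig
from bigdl.llm.transformers import AutoModelForCausalLM  # BigDL-LLM drop-in for transformers

# loading in 4-bit converts the linear layers to INT4 at load time
model = AutoModelForCausalLM.from_pretrained("mlabonne/phixtral-4x2_8",
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("mlabonne/phixtral-4x2_8", trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode(" Question:What is AI?\n\n Answer:", return_tensors="pt")
    # phixtral reads `use_cache` from a GenerationConfig
    output = model.generate(input_ids, do_sample=False, max_new_tokens=32,
                            generation_config=GenerationConfig(use_cache=True))
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```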
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # recommend to use Python 3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with 'all' option
pip install einops # additional package required for phixtral to conduct generation
```

### 2. Run
After setting up the Python environment, you could run the example by the following steps.

> **Note**: When loading the model in 4-bit, BigDL-LLM converts the linear layers in the model into INT4 format. In theory, a *X*B model saved in 16-bit will require approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference.
>
> Please select the appropriate size of the phixtral model based on the capabilities of your machine.
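For instance, by this estimate a 7B model stored in 16-bit would need roughly 14 GB of memory to load and around 3.5 GB during INT4 inference.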
#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set BigDL-LLM env variables
source bigdl-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path`: str, argument defining the huggingface repo id for the phixtral model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'mlabonne/phixtral-4x2_8'`.
- `--prompt`: str, argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict`: int, argument defining the max number of tokens to predict. It defaults to `32`.

#### 2.4 Sample Output
#### [mlabonne/phixtral-4x2_8](https://huggingface.co/mlabonne/phixtral-4x2_8)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
Question:What is AI?

Answer:
-------------------- Output --------------------
Question:What is AI?

Answer: AI, or artificial intelligence, is the simulation of human intelligence in machines that are programmed to think and learn like humans.
```
@@ -0,0 +1,72 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse
import numpy as np

from transformers import AutoTokenizer, GenerationConfig
from bigdl.llm import optimize_model

# you could tune the prompt based on your own model;
# here the prompt format follows https://huggingface.co/microsoft/phi-1_5/blob/main/modeling_mixformer_sequential.py
PHI1_5_PROMPT_FORMAT = " Question:{prompt}\n\n Answer:"
generation_config = GenerationConfig(use_cache=True)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phixtral model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="mlabonne/phixtral-4x2_8",
                        help='The huggingface repo id for the phixtral model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model in 4 bit,
    # which converts the relevant layers in the model into INT4 format
    from bigdl.llm.transformers import AutoModelForCausalLM
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 trust_remote_code=True)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = PHI1_5_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        # if your selected model is capable of utilizing previous key/value attentions
        # to enhance decoding speed, but has `"use_cache": false` in its model config,
        # it is important to set `use_cache=True` explicitly in the `generate` function
        # to obtain optimal performance with BigDL-LLM INT4 optimizations

        # Note that phixtral uses GenerationConfig to enable 'use_cache'
        output = model.generate(input_ids, do_sample=False, max_new_tokens=args.n_predict,
                                generation_config=generation_config)

        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)
@@ -0,0 +1,64 @@
# Phixtral
In this directory, you will find examples on how you could use the BigDL-LLM `optimize_model` API to accelerate phixtral models. For illustration purposes, we utilize [mlabonne/phixtral-4x2_8](https://huggingface.co/mlabonne/phixtral-4x2_8) as a reference Phixtral model.

## Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a phixtral model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.
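The key difference from the Hugging Face `AutoModel` example is that the model is first loaded with plain *Transformers* and then passed through `optimize_model`. A minimal sketch of that flow (model path and prompt are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from bigdl.llm import optimize_model

# load with vanilla transformers, then let BigDL-LLM optimize the loaded model
model = AutoModelForCausalLM.from_pretrained("mlabonne/phixtral-4x2_8", trust_remote_code=True)
model = optimize_model(model)  # applies low-bit optimizations in place

tokenizer = AutoTokenizer.from_pretrained("mlabonne/phixtral-4x2_8", trust_remote_code=True)
with torch.inference_mode():
    input_ids = tokenizer.encode(" Question:What is AI?\n\n Answer:", return_tensors="pt")
    output = model.generate(input_ids, do_sample=False, max_new_tokens=32,
                            generation_config=GenerationConfig(use_cache=True))
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```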
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # recommend to use Python 3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with 'all' option
pip install einops
```

### 2. Run
After setting up the Python environment, you could run the example by the following steps.

#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information), and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set BigDL-LLM env variables
source bigdl-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path`: str, argument defining the huggingface repo id for the phixtral model to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'mlabonne/phixtral-4x2_8'`.
- `--prompt`: str, argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict`: int, argument defining the max number of tokens to predict. It defaults to `32`.

#### 2.4 Sample Output
#### [mlabonne/phixtral-4x2_8](https://huggingface.co/mlabonne/phixtral-4x2_8)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
Question:What is AI?

Answer:
-------------------- Output --------------------
Question:What is AI?

Answer: AI, or artificial intelligence, is the simulation of human intelligence in machines that are programmed to think and learn like humans.
```
@@ -0,0 +1,66 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse
import numpy as np

from transformers import AutoTokenizer, GenerationConfig
from bigdl.llm import optimize_model

# you could tune the prompt based on your own model;
# here the prompt format follows https://huggingface.co/microsoft/phi-1_5/blob/main/modeling_mixformer_sequential.py
PHI1_5_PROMPT_FORMAT = " Question:{prompt}\n\n Answer:"
generation_config = GenerationConfig(use_cache=True)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phixtral model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="mlabonne/phixtral-4x2_8",
                        help='The huggingface repo id for the phixtral model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the huggingface model and optimize it with BigDL-LLM `optimize_model`
    from transformers import AutoModelForCausalLM
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True)
    model = optimize_model(model)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = PHI1_5_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()

        # Note that phixtral uses GenerationConfig to enable 'use_cache'
        output = model.generate(input_ids, do_sample=False, max_new_tokens=args.n_predict,
                                generation_config=generation_config)

        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)
@@ -0,0 +1,119 @@
# Phixtral
In this directory, you will find examples on how you could apply BigDL-LLM INT4 optimizations on phixtral models on [Intel GPUs](../../../README.md). For illustration purposes, we utilize [mlabonne/phixtral-4x2_8](https://huggingface.co/mlabonne/phixtral-4x2_8) as a reference phixtral model.

## 0. Requirements
To run these examples with BigDL-LLM on Intel GPUs, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a phixtral model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations on Intel GPUs.
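Compared with the CPU example, the model and inputs are moved to the `xpu` device, and a warmup `generate()` call is issued before timing. A minimal sketch of that flow (model path and prompt are placeholders):

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the xpu backend)
from transformers import AutoTokenizer, GenerationConfig
from bigdl.llm.transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mlabonne/phixtral-4x2_8",
                                             load_in_4bit=True,
                                             trust_remote_code=True).to('xpu')
tokenizer = AutoTokenizer.from_pretrained("mlabonne/phixtral-4x2_8", trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode(" Question:What is AI?\n\n Answer:", return_tensors="pt").to('xpu')
    gen_cfg = GenerationConfig(use_cache=True)
    # warmup run so a timed run reflects steady-state performance
    model.generate(input_ids, max_new_tokens=32, generation_config=gen_cfg)
    output = model.generate(input_ids, do_sample=False, max_new_tokens=32, generation_config=gen_cfg)
    torch.xpu.synchronize()
    print(tokenizer.decode(output[0].cpu(), skip_special_tokens=True))
```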
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.9
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.9 libuv
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```

### 2. Configure OneAPI environment variables
#### 2.1 Configurations for Linux
```bash
source /opt/intel/oneapi/setvars.sh
```

#### 2.2 Configurations for Windows
```cmd
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
```
> Note: Please make sure you are using **CMD** (**Anaconda Prompt** if using conda) to run the command, as PowerShell is not supported.
### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export ENABLE_SDP_FUSION=1
```
> Note: Please note that `libtcmalloc.so` can be installed by `conda install -c conda-forge -y gperftools=2.10`.
</details>

#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A300-Series or Pro A60</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For other Intel dGPU Series</summary>

There is no need to set further environment variables.

</details>

> Note: The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.
### 4. Running examples

```bash
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the phixtral model (e.g. `mlabonne/phixtral-4x2_8`) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'mlabonne/phixtral-4x2_8'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

#### Sample Output
#### [mlabonne/phixtral-4x2_8](https://huggingface.co/mlabonne/phixtral-4x2_8)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
Question:What is AI?

Answer:
-------------------- Output --------------------
Question:What is AI?

Answer: AI, or artificial intelligence, is the simulation of human intelligence in machines that are programmed to think and learn like humans. It involves the development of computer systems that
```
@@ -0,0 +1,80 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse
import numpy as np

from transformers import AutoTokenizer, GenerationConfig
import intel_extension_for_pytorch as ipex


# you could tune the prompt based on your own model;
# here the prompt format follows https://huggingface.co/microsoft/phi-1_5/blob/main/modeling_mixformer_sequential.py
PHI1_5_PROMPT_FORMAT = " Question:{prompt}\n\n Answer:"
generation_config = GenerationConfig(use_cache=True)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phixtral model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="mlabonne/phixtral-4x2_8",
                        help='The huggingface repo id for the phixtral model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model in 4 bit,
    # which converts the relevant layers in the model into INT4 format.
    # When running LLMs on Intel iGPUs on Windows, we recommend setting `cpu_embedding=True`
    # in the from_pretrained function.
    # This will allow the memory-intensive embedding layer to utilize the CPU instead of the iGPU.
    from bigdl.llm.transformers import AutoModel, AutoModelForCausalLM
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 trust_remote_code=True)
    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    # for phi-moe
    with torch.inference_mode():
        prompt = PHI1_5_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')

        # ipex model needs a warmup, then inference time can be accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict,
                                generation_config=generation_config)

        # start inference without profiling
        st = time.time()
        output = model.generate(input_ids, do_sample=False, max_new_tokens=args.n_predict,
                                generation_config=generation_config)
        torch.xpu.synchronize()
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)
python/llm/example/GPU/PyTorch-Models/Model/phixtral/README.md (new file, 123 lines)
@@ -0,0 +1,123 @@
# Phixtral
In this directory, you will find examples on how you could use the BigDL-LLM `optimize_model` API to accelerate phixtral models on [Intel GPUs](../../../README.md). For illustration purposes, we utilize [mlabonne/phixtral-4x2_8](https://huggingface.co/mlabonne/phixtral-4x2_8) as a reference phixtral model.

## Requirements
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to [here](../../../README.md#requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a phixtral model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations on Intel GPUs.
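Compared with the CPU `optimize_model` example, the only additions are moving the model and inputs to the `xpu` device before generation. A minimal sketch (model path and prompt are placeholders):

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the xpu backend)
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from bigdl.llm import optimize_model

model = AutoModelForCausalLM.from_pretrained("mlabonne/phixtral-4x2_8", trust_remote_code=True)
model = optimize_model(model).to('xpu')
tokenizer = AutoTokenizer.from_pretrained("mlabonne/phixtral-4x2_8", trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode(" Question:What is AI?\n\n Answer:", return_tensors="pt").to('xpu')
    output = model.generate(input_ids, do_sample=False, max_new_tokens=32,
                            generation_config=GenerationConfig(use_cache=True))
    print(tokenizer.decode(output[0].cpu(), skip_special_tokens=True))
```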
### 1. Install
#### 1.1 Installation on Linux
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # recommend to use Python 3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
pip install einops # additional package required for phixtral to conduct generation
```

#### 1.2 Installation on Windows
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.9 libuv
conda activate llm
# below command will install intel_extension_for_pytorch==2.1.10+xpu as default
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
pip install einops # additional package required for phixtral to conduct generation
```

### 2. Configure OneAPI environment variables
#### 2.1 Configurations for Linux
```bash
source /opt/intel/oneapi/setvars.sh
```

#### 2.2 Configurations for Windows
```cmd
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
```
> Note: Please make sure you are using **CMD** (**Anaconda Prompt** if using conda) to run the command, as PowerShell is not supported.
### 3. Runtime Configurations
For optimal performance, it is recommended to set several environment variables. Please check out the suggestions based on your device.
#### 3.1 Configurations for Linux
<details>

<summary>For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series</summary>

```bash
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```

</details>

<details>

<summary>For Intel Data Center GPU Max Series</summary>

```bash
export LD_PRELOAD=${LD_PRELOAD}:${CONDA_PREFIX}/lib/libtcmalloc.so
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export ENABLE_SDP_FUSION=1
```
> Note: Please note that `libtcmalloc.so` can be installed by `conda install -c conda-forge -y gperftools=2.10`.
</details>
#### 3.2 Configurations for Windows
<details>

<summary>For Intel iGPU</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1
```

</details>

<details>

<summary>For Intel Arc™ A300-Series or Pro A60</summary>

```cmd
set SYCL_CACHE_PERSISTENT=1
```

</details>

<details>

<summary>For other Intel dGPU Series</summary>

There is no need to set further environment variables.

</details>

> Note: The first time each model runs on an Intel iGPU, Intel Arc™ A300-Series or Pro A60, it may take several minutes to compile.
### 4. Running examples

```bash
python ./generate.py --prompt 'What is AI?'
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the huggingface repo id for the phixtral model (e.g. `mlabonne/phixtral-4x2_8`) to be downloaded, or the path to the huggingface checkpoint folder. It defaults to `'mlabonne/phixtral-4x2_8'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

#### Sample Output
#### [mlabonne/phixtral-4x2_8](https://huggingface.co/mlabonne/phixtral-4x2_8)

```log
Inference time: xxxx s
-------------------- Prompt --------------------
Question:What is AI?

Answer:
-------------------- Output --------------------
Question:What is AI?

Answer: AI, or artificial intelligence, is the simulation of human intelligence in machines that are programmed to think and learn like humans. It involves the development of computer systems that
```
@@ -0,0 +1,80 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse
import numpy as np

from transformers import AutoTokenizer, GenerationConfig
import intel_extension_for_pytorch as ipex
from bigdl.llm import optimize_model


# you could tune the prompt based on your own model;
# here the prompt format follows https://huggingface.co/microsoft/phi-1_5/blob/main/modeling_mixformer_sequential.py
PHI1_5_PROMPT_FORMAT = " Question:{prompt}\n\n Answer:"
generation_config = GenerationConfig(use_cache=True)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for phixtral model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="mlabonne/phixtral-4x2_8",
                        help='The huggingface repo id for the phixtral model to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load huggingface model with optimize_model in BigDL
    from transformers import AutoModelForCausalLM
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True)
    model = optimize_model(model)

    model = model.to('xpu')

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)

    # Generate predicted tokens
    # for phi-moe
    with torch.inference_mode():
        prompt = PHI1_5_PROMPT_FORMAT.format(prompt=args.prompt)
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to('xpu')

        # ipex model needs a warmup, then inference time can be accurate
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict,
                                generation_config=generation_config)

        # start inference without profiling
        st = time.time()
        output = model.generate(input_ids, do_sample=False, max_new_tokens=args.n_predict,
                                generation_config=generation_config)
        torch.xpu.synchronize()
        end = time.time()
        output = output.cpu()
        output_str = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output', '-'*20)
        print(output_str)
@@ -912,6 +912,17 @@ def _optimize_post(model, lightweight_bmm=False):
        convert_forward(model,
                        module.MixtralBLockSparseTop2MLP,
                        mixtral_mlp_forward)
    elif model.config.model_type == "phi-msft":
        modeling_module_name = model.__class__.__module__
        module = importlib.import_module(modeling_module_name)
        from bigdl.llm.transformers.models.phixtral import phixtral_moeblock_forward, \
            phixtral_mlp_forward
        convert_forward(model,
                        module.MoE,
                        phixtral_moeblock_forward)
        convert_forward(model,
                        module.MLP,
                        phixtral_mlp_forward)
    elif model.config.model_type == "mistral":
        if model.config.architectures is not None and \
                model.config.architectures[0] == "MixtralForCausalLM":
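For context, `convert_forward` is BigDL-LLM's internal helper that walks the model's module tree and rebinds the `forward` method on every instance of the target class; the hunk above registers the new phixtral MoE and MLP forwards for models whose `model_type` is `phi-msft`. A rough, simplified sketch of the idea (not the actual BigDL-LLM implementation):

```python
import types
import torch.nn as nn

def convert_forward_sketch(model: nn.Module, target_cls: type, new_forward) -> None:
    """Rebind `forward` on every submodule that is an instance of `target_cls`."""
    for module in model.modules():
        if isinstance(module, target_cls):
            module.forward = types.MethodType(new_forward, module)
```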
python/llm/src/bigdl/llm/transformers/models/phixtral.py (new file, 144 lines)
@@ -0,0 +1,144 @@
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Some parts of this file are adapted from
# https://github.com/huggingface/transformers/blob/main/src/transformers/models/mixtral/modeling_mixtral.py

# coding=utf-8
# Copyright 2023 Mistral AI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

""" PyTorch Phixtral model."""
import math
from typing import Optional, Tuple

import torch
from torch import nn
import torch.nn.functional as F
from bigdl.llm.ggml.quantize import ggml_tensor_qtype
from bigdl.llm.utils.common import invalidInputError
from bigdl.llm.transformers.models.utils import init_kv_cache, extend_kv_cache, append_kv_cache
from bigdl.llm.transformers.models.utils import apply_rotary_pos_emb,\
    apply_rotary_pos_emb_no_cache_xpu, is_enough_kv_cache_room_4_36
from bigdl.llm.transformers.models.mistral import should_use_fuse_rope, use_decoding_fast_path
from bigdl.llm.transformers.models.utils import use_flash_attention
from bigdl.llm.transformers.models.utils import mlp_fusion_check


KV_CACHE_ALLOC_BLOCK_LENGTH = 256


def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    """
    This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep).
    The hidden states go from (batch, num_key_value_heads, seqlen, head_dim)
    to (batch, num_attention_heads, seqlen, head_dim)
    """
    batch, num_key_value_heads, slen, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads,
                                                           n_rep, slen, head_dim)
    return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)


def phixtral_moeblock_forward(self, hidden_states: torch.Tensor):
    batch_size, sequence_length, hidden_dim = hidden_states.shape
    hidden_states = hidden_states.view(-1, hidden_dim)
    bs = hidden_states.shape[0]
    # router_logits: (batch * sequence_length, n_experts)
    router_logits = self.gate(hidden_states)

    num_local_experts = len(self.mlp)

    routing_weights = F.softmax(router_logits, dim=1, dtype=torch.float)
    top_k = self.num_experts_per_tok
    routing_weights, selected_experts = torch.topk(routing_weights, top_k, dim=-1)
    routing_weights /= routing_weights.sum(dim=-1, keepdim=True)
    # we cast back to the input dtype
    routing_weights = routing_weights.to(hidden_states.dtype)

    if bs > 1:
        final_hidden_states = torch.zeros(
            (batch_size * sequence_length, hidden_dim),
            dtype=hidden_states.dtype,
            device=hidden_states.device
        )
        # One hot encode the selected experts to create an expert mask;
        # this will be used to easily index which expert is going to be solicited
        expert_mask = torch.nn.functional.one_hot(selected_experts,
                                                  num_classes=num_local_experts).permute(2, 1, 0)

        # Loop over all available experts in the model and perform the computation on each expert
        for expert_idx in range(num_local_experts):
            expert_layer = self.mlp[expert_idx]
            idx, top_x = torch.where(expert_mask[expert_idx])

            if top_x.shape[0] == 0:
                continue

            # in torch it is faster to index using lists than torch tensors
            top_x_list = top_x.tolist()
            idx_list = idx.tolist()

            # Index the correct hidden states and compute the expert hidden state for
            # the current expert. We need to make sure to multiply the output hidden
            # states by `routing_weights` on the corresponding tokens (top-1 and top-2)
            current_state = hidden_states[None, top_x_list].reshape(-1, hidden_dim)
            current_hidden_states = expert_layer(current_state)

            # However `index_add_` only support torch tensors for indexing so we'll use
            # the `top_x` tensor here.
            final_hidden_states.index_add_(0, top_x, current_hidden_states.to(hidden_states.dtype))
    else:
        # decoding fast path: with a single flattened token, call only the selected
        # experts directly and accumulate their outputs
        selected_experts = selected_experts[0].cpu().tolist()
        for idx in range(top_k):
            exp_id = selected_experts[idx]
            expert_layer = self.mlp[exp_id]
            weight = routing_weights[:, idx]
            if idx == 0:
                final_hidden_states = expert_layer(hidden_states)
            else:
                final_hidden_states = final_hidden_states + expert_layer(hidden_states)

    final_hidden_states = final_hidden_states.reshape(batch_size, sequence_length, hidden_dim)
    return final_hidden_states


def phixtral_mlp_forward(
    self,
    x: torch.Tensor,
) -> torch.Tensor:
    # standard phi MLP applied per expert: fc1 -> activation -> fc2
    hidden_states = self.fc1(x)
    hidden_states = self.act(hidden_states)
    hidden_states = self.fc2(hidden_states)

    return hidden_states