[LLM] Enable BIGDL_OPT_IPEX in speculative baichuan2 13b example (#10028)
Enable BIGDL_OPT_IPEX in speculative baichuan2 13b example
parent 226f398c2a
commit 9978089796

3 changed files with 45 additions and 1 deletion
@@ -60,3 +60,40 @@ Tokens generated 128
 E2E Generation time x.xxxxs
 First token latency x.xxxxs
 ```
+
+### 4. Accelerate with BIGDL_OPT_IPEX
+
+To accelerate speculative decoding on CPU, install our validated version of [IPEX 2.3.0+git0c63936](https://github.com/intel/intel-extension-for-pytorch/tree/0c63936d7a6740679987920367ae2e0cdb375b2e) by following the steps below. (Other versions of IPEX may have conflicts and cannot accelerate speculative decoding correctly.)
+
+#### 4.1 Download the IPEX installation script
+
+```bash
+# Depends on Conda and GCC 12.3
+wget https://raw.githubusercontent.com/intel/intel-extension-for-pytorch/0c63936d7a6740679987920367ae2e0cdb375b2e/scripts/compile_bundle.sh
+```
+
+#### 4.2 Activate your conda environment
+
+```bash
+conda activate <your_conda_env>
+```
+#### 4.3 Set VER_IPEX in compile_bundle.sh to 0c63936d7a6740679987920367ae2e0cdb375b2e
+
+```bash
+sed -i 's/VER_IPEX=main/VER_IPEX=0c63936d7a6740679987920367ae2e0cdb375b2e/g' "compile_bundle.sh"
+```
+
+#### 4.4 Install IPEX and other dependencies
+
+```bash
+# Install IPEX 2.3.0+git0c63936
+bash compile_bundle.sh
+
+# Install other dependencies
+pip install -r requirements.txt
+```
+
+After installing IPEX, set `BIGDL_OPT_IPEX=true` to enable target model acceleration. Currently only `Baichuan2 13b` is supported.
+
+```bash
+source bigdl-llm-init -t
+export BIGDL_OPT_IPEX=true
+export OMP_NUM_THREADS=48 # change 48 to the number of cores of one processor socket
+numactl -C 0-47 -m 0 python ./speculative.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
+```
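Once `compile_bundle.sh` finishes, it is worth confirming that the build actually matches the validated commit before running the example. A minimal sanity-check sketch (the expected version string comes from the steps above; the exact `__version__` format can vary between IPEX builds):

```python
# Confirm the installed IPEX build matches the validated 2.3.0+git0c63936.
import torch
import intel_extension_for_pytorch as ipex

print(f"torch: {torch.__version__}")
print(f"ipex:  {ipex.__version__}")

if "0c63936" not in ipex.__version__:
    print("Warning: not the validated IPEX build; "
          "speculative decoding may not be accelerated correctly.")
```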
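Likewise, if the run does not speed up as expected, it can help to verify from inside Python that the `OMP_NUM_THREADS`/`numactl` pinning took effect. A small Linux-only sketch (`os.sched_getaffinity` is not available on every platform):

```python
# Check that CPU pinning from numactl and OMP_NUM_THREADS took effect (Linux only).
import os
import torch

cores = os.sched_getaffinity(0)  # set of CPU ids this process may run on
print(f"usable cores ({len(cores)}): {sorted(cores)}")
print(f"torch intra-op threads: {torch.get_num_threads()}")
# With OMP_NUM_THREADS=48 and `numactl -C 0-47`, both counts should be 48.
```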
@@ -0,0 +1,2 @@
+transformers==4.36.2
+transformers-stream-generator
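The `transformers==4.36.2` pin matters for this example; a one-line sketch to verify the environment picked it up:

```python
# Verify the pinned transformers version from requirements.txt is installed.
import transformers
assert transformers.__version__ == "4.36.2", transformers.__version__
```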
@@ -59,6 +59,7 @@ if __name__ == '__main__':
                                                  optimize_model=True,
                                                  torch_dtype=torch.bfloat16,
                                                  load_in_low_bit="bf16",
+                                                 torchscript=True,
                                                  speculative=True,
                                                  trust_remote_code=True,
                                                  use_cache=True)
@@ -67,11 +68,14 @@ if __name__ == '__main__':
 
     with torch.inference_mode():
         prompt = BAICHUAN_PROMPT_FORMAT.format(prompt=args.prompt)
-        input_ids = tokenizer(prompt, return_tensors='pt').input_ids
+        inputs = tokenizer(prompt, return_tensors='pt', padding=True)
+        input_ids = inputs.input_ids.to(model.device)
+        attention_mask = inputs.attention_mask.to(model.device)
 
         # warmup
         output = model.generate(input_ids,
                                 max_new_tokens=args.n_predict,
+                                attention_mask=attention_mask,
                                 do_sample=False)
         output_str = tokenizer.decode(output[0])
 
@@ -79,6 +83,7 @@ if __name__ == '__main__':
         st = time.perf_counter()
         output = model.generate(input_ids,
                                 max_new_tokens=args.n_predict,
+                                attention_mask=attention_mask,
                                 do_sample=False)
         output_str = tokenizer.decode(output[0], skip_special_tokens=True)
         end = time.perf_counter()
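For readers applying these hunks by hand, the patched region of `speculative.py` ends up looking roughly like the sketch below. The model path, prompt format, and argument values are illustrative placeholders, and `AutoModelForCausalLM` is assumed to come from `bigdl.llm.transformers` as in the rest of these examples:

```python
# Rough shape of the patched region of speculative.py after this commit.
import time
import torch
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "baichuan-inc/Baichuan2-13B-Chat"    # placeholder repo id / local path
BAICHUAN_PROMPT_FORMAT = "<human>{prompt} <bot>"  # placeholder; use the example's real format
n_predict = 128

model = AutoModelForCausalLM.from_pretrained(model_path,
                                             optimize_model=True,
                                             torch_dtype=torch.bfloat16,
                                             load_in_low_bit="bf16",
                                             torchscript=True,  # added in this commit
                                             speculative=True,
                                             trust_remote_code=True,
                                             use_cache=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    prompt = BAICHUAN_PROMPT_FORMAT.format(prompt="What is AI?")
    # Added in this commit: tokenize with padding and pass the attention mask explicitly.
    inputs = tokenizer(prompt, return_tensors='pt', padding=True)
    input_ids = inputs.input_ids.to(model.device)
    attention_mask = inputs.attention_mask.to(model.device)

    # warmup
    output = model.generate(input_ids,
                            max_new_tokens=n_predict,
                            attention_mask=attention_mask,
                            do_sample=False)
    output_str = tokenizer.decode(output[0])

    # timed run
    st = time.perf_counter()
    output = model.generate(input_ids,
                            max_new_tokens=n_predict,
                            attention_mask=attention_mask,
                            do_sample=False)
    output_str = tokenizer.decode(output[0], skip_special_tokens=True)
    end = time.perf_counter()
    print(output_str)
    print(f"E2E Generation time {end - st:.4f}s")
```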