ipex-llm/python/llm/example/CPU/HF-Transformers-AutoModels/Model
Model example folders in this directory: aquila, aquila2, baichuan, baichuan2, chatglm, chatglm2, chatglm3, codellama, codeshell, distil-whisper, dolly_v1, dolly_v2, falcon, flan-t5, fuyu, internlm, internlm-xcomposer, llama2, mistral, moss, mpt, phi-1_5, phoenix, qwen, qwen-vl, redpajama, replit, skywork, starcoder, vicuna, whisper, wizardcoder-python.

BigDL-LLM Transformers INT4 Optimization for Large Language Models

You can use BigDL-LLM to run any Hugging Face Transformers model with INT4 optimizations on either servers or laptops. This directory contains example scripts to help you quickly get started running some popular open-source models with BigDL-LLM. Each model has its own dedicated folder, where you can find detailed instructions on how to install the dependencies and run the example.
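
As an illustration of what the per-model examples do, below is a minimal sketch of loading a model in INT4 through BigDL-LLM's transformers-style API. The checkpoint id and prompt are placeholders; the model folders contain the complete, tested scripts.

```python
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder: any Hugging Face repo id or local path

# load_in_4bit=True applies BigDL-LLM's INT4 optimization while the weights are loaded
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Generation works the same way as with a regular transformers model
inputs = tokenizer("What is AI?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```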

To run the examples, we recommend using an Intel® Xeon® processor (server) or a 12th Gen or later Intel® Core™ processor (client).

For OS, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.

Best Known Configuration on Linux

For better performance, it is recommended to set environment variables on Linux with the help of BigDL-Nano:

```bash
pip install bigdl-nano
source bigdl-nano-init
```
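
Note that bigdl-nano-init should be sourced in the same shell you will use to launch the examples; it exports performance-related environment variables (such as OpenMP threading and memory-allocator settings) tuned for Intel CPUs.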