# BigDL-LLM Transformers INT4 Optimization for Large Language Models

You can use BigDL-LLM to run any Hugging Face *Transformers* model with INT4 optimizations on either servers or laptops. This directory contains example scripts to help you quickly get started with BigDL-LLM on popular open-source models from the community. Each model has its own dedicated folder with detailed instructions on how to install dependencies and run the example.
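As a minimal sketch of the workflow the examples follow (assuming `bigdl-llm` is installed via `pip install bigdl-llm[all]`, and using `meta-llama/Llama-2-7b-chat-hf` as a placeholder model path): BigDL-LLM mirrors the Hugging Face `from_pretrained` API, and passing `load_in_4bit=True` applies the INT4 optimization at load time.

```python
# Sketch: load and run a Hugging Face model with BigDL-LLM INT4 optimizations.
# The model path below is a placeholder; substitute any verified model's repo id.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"

# load_in_4bit=True converts the model's weights to INT4 as it is loaded
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is AI?"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Each model folder in this directory contains a ready-to-run script along this pattern, adapted to that model's tokenizer and prompt format.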
## Verified Models
| Model | Example | 
|---|---|
| LLaMA | link | 
| LLaMA 2 | link | 
| MPT | link | 
| Falcon | link | 
| ChatGLM | link | 
| ChatGLM2 | link | 
| MOSS | link | 
| Baichuan | link | 
| Baichuan2 | link | 
| Dolly-v1 | link | 
| Dolly-v2 | link | 
| RedPajama | link | 
| Phoenix | link | 
| StarCoder | link | 
| InternLM | link | 
| Whisper | link | 
| Qwen | link | 
| Aquila | link | 
| Replit | link | 
| Mistral | link | 
| Flan-t5 | link | 
| Phi-1_5 | link | 
| Qwen-VL | link | 
## Recommended Requirements
To run these examples, we recommend using Intel® Xeon® processors (server) or 12th Gen (or later) Intel® Core™ processors (client).
For OS, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.
## Best Known Configuration on Linux
For better performance, it is recommended to set environment variables on Linux with the help of BigDL-Nano:
```bash
pip install bigdl-nano
source bigdl-nano-init
```