# BigDL-LLM Transformers INT4 Optimization for Large Language Models

You can use BigDL-LLM to run any Hugging Face Transformers model with INT4 optimizations on either servers or laptops. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.

## Verified models

| Model     | Example |
|-----------|---------|
| LLaMA     | link    |
| LLaMA 2   | link    |
| MPT       | link    |
| Falcon    | link    |
| ChatGLM   | link    |
| ChatGLM2  | link    |
| MOSS      | link    |
| Baichuan  | link    |
| Baichuan2 | link    |
| Dolly-v1  | link    |
| Dolly-v2  | link    |
| RedPajama | link    |
| Phoenix   | link    |
| StarCoder | link    |
| InternLM  | link    |
| Whisper   | link    |
| Qwen      | link    |

To run the examples, we recommend using Intel® Xeon® processors (server) or 12th Gen Intel® Core™ processors or later (client).

For OS, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.

## Best Known Configuration on Linux

For better performance, it is recommended to set environment variables on Linux with the help of BigDL-Nano:

```bash
pip install bigdl-nano
source bigdl-nano-init
```