ipex-llm/python/llm/example/CPU/HF-Transformers-AutoModels/Model

Each subdirectory contains the example for the corresponding model: aquila, aquila2, baichuan, baichuan2, bluelm, chatglm, chatglm2, chatglm3, codellama, codeshell, distil-whisper, dolly_v1, dolly_v2, falcon, flan-t5, fuyu, internlm, internlm-xcomposer, llama2, mistral, moss, mpt, phi-1_5, phoenix, qwen, qwen-vl, redpajama, replit, skywork, starcoder, vicuna, whisper, wizardcoder-python, yi.

BigDL-LLM Transformers INT4 Optimization for Large Language Models

You can use BigDL-LLM to run any Hugging Face Transformers model with INT4 optimizations on either servers or laptops. This directory contains example scripts to help you quickly get started using BigDL-LLM with some popular open-source models from the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
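
For illustration, here is a minimal Python sketch of the INT4 workflow these examples follow. The model ID (meta-llama/Llama-2-7b-chat-hf) and the prompt are placeholders; see each model's folder for the exact script and arguments.

from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# Load a Hugging Face checkpoint with BigDL-LLM INT4 optimizations
# applied on the fly (the model ID below is a placeholder)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                          trust_remote_code=True)

# Generate with the regular transformers API
input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))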

To run the examples, we recommend using Intel® Xeon® processors (server) or a 12th Gen or later Intel® Core™ processor (client).

Regarding operating systems, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.

Best Known Configuration on Linux

For better performance, it is recommended to set environment variables on Linux using the script that BigDL-LLM provides:

pip install bigdl-llm
source bigdl-llm-init
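
The bigdl-llm-init script exports performance-related environment variables (such as OpenMP thread settings) into the current shell, so run the example in that same shell afterwards. A sketch of a typical invocation follows; the script name generate.py and its arguments reflect the pattern most of these examples use, but check the target model folder's README for the exact command:

python ./generate.py --repo-id-or-model-path meta-llama/Llama-2-7b-chat-hf --prompt "What is AI?" --n-predict 32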