Model folders in this directory:

aquila, aquila2, baichuan, baichuan2, bluelm, chatglm, chatglm2, chatglm3, codellama, codeshell, deciLM-7b, deepseek, deepseek-moe, distil-whisper, dolly_v1, dolly_v2, falcon, flan-t5, fuyu, gemma, internlm, internlm-xcomposer, internlm2, llama2, mistral, mixtral, moss, mpt, phi-1_5, phi-2, phixtral, phoenix, qwen, qwen-vl, qwen1.5, redpajama, replit, skywork, solar, starcoder, vicuna, whisper, wizardcoder-python, yi, yuan2, ziya

BigDL-LLM Transformers INT4 Optimization for Large Language Model

You can use BigDL-LLM to run any Hugging Face Transformers model with INT4 optimizations on either servers or laptops. This directory contains example scripts to help you quickly get started using BigDL-LLM to run some popular open-source models in the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
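The per-model folders each give an exact recipe, but they all share the same basic INT4 workflow. The sketch below illustrates it, assuming the `bigdl-llm` package is installed; the model id `meta-llama/Llama-2-7b-chat-hf` and the Llama-2-chat prompt wrapper are placeholders for illustration, not a fixed requirement:

```python
def build_prompt(user_message: str) -> str:
    # Llama-2-chat style prompt wrapper; other models use different templates,
    # documented in each model's own folder.
    return f"[INST] {user_message} [/INST]"

def main():
    # Imported lazily so the helper above stays usable without bigdl-llm installed.
    # BigDL-LLM's AutoModelForCausalLM is a drop-in replacement for the
    # Hugging Face class; load_in_4bit=True applies INT4 optimization at load time.
    from bigdl.llm.transformers import AutoModelForCausalLM
    from transformers import AutoTokenizer

    model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; any supported model
    model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    inputs = tokenizer(build_prompt("What is AI?"), return_tensors="pt")
    output = model.generate(inputs.input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

# Call main() to run the full pipeline (downloads the model on first use).
```

Note that the only change from a plain Hugging Face script is the import path and the `load_in_4bit=True` argument; tokenization and generation are unchanged.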

To run the examples, we recommend using Intel® Xeon® processors (server) or 12th Gen and later Intel® Core™ processors (client).

In terms of operating system, BigDL-LLM supports Ubuntu 20.04 or later (glibc >= 2.17), CentOS 7 or later (glibc >= 2.17), and Windows 10/11.
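On Linux, you can verify that your glibc meets the >= 2.17 requirement before installing; one common way is to check the version reported by `ldd`, which is built against the system C library:

```shell
# Print the system glibc version; the first line should show 2.17 or newer.
ldd --version | head -n 1
```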

Best Known Configuration on Linux

For better performance, it is recommended to set the environment variables provided by BigDL-LLM on Linux:

pip install bigdl-llm
source bigdl-llm-init