
BigDL-LLM INT4 Optimization for Large Language Models

You can use the optimize_model API to accelerate general PyTorch models on Intel servers and PCs. This directory contains example scripts to help you quickly get started with BigDL-LLM for running popular open-source models from the community. Each model has its own dedicated folder with detailed instructions on how to install and run it.
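As a rough sketch of the pattern these example scripts share: load a model with Hugging Face transformers, pass it through optimize_model, then generate as usual. The model ID and prompt below are placeholders, and bigdl-llm plus transformers are assumed to be installed.

```python
# Illustrative sketch only: the model ID and prompt are placeholders,
# and this assumes `bigdl-llm[all]` (which provides transformers) is installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from bigdl.llm import optimize_model

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model ID
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             torch_dtype=torch.float32,
                                             trust_remote_code=True)

# Apply BigDL-LLM low-bit (INT4 by default) optimization to the PyTorch model
model = optimize_model(model)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The per-model folders follow this same shape, differing mainly in the model class (e.g. causal LM vs. speech or vision-language models) and the prompt format each model expects.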

To run the examples, we recommend using Intel® Xeon® processors (server) or a 12th Gen or later Intel® Core™ processor (client).

For the operating system, BigDL-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.

Best Known Configuration on Linux

For better performance, it is recommended to set several environment variables on Linux, using the script provided by BigDL-LLM:

pip install bigdl-llm
source bigdl-llm-init