ipex-llm/python/llm/example/CPU/HF-Transformers-AutoModels/Model
Model folders in this directory:

aquila, aquila2, baichuan, baichuan2, bluelm, chatglm, chatglm2, chatglm3, codegeex2, codegemma, codellama, codeshell, cohere, deciLM-7b, deepseek, deepseek-moe, distil-whisper, dolly_v1, dolly_v2, falcon, flan-t5, fuyu, gemma, glm-4v, glm4, internlm, internlm-xcomposer, internlm2, llama2, llama3, llama3.1, minicpm, minicpm-v-2, minicpm-v-2_6, mistral, mixtral, moss, mpt, phi-1_5, phi-2, phi-3, phi-3-vision, phixtral, phoenix, qwen, qwen-vl, qwen1.5, qwen2, redpajama, replit, skywork, solar, stablelm, starcoder, vicuna, whisper, wizardcoder-python, yi, yuan2, ziya

IPEX-LLM Transformers INT4 Optimization for Large Language Models

You can use IPEX-LLM to run any Hugging Face Transformers model with INT4 optimizations on either servers or laptops. This directory contains example scripts to help you quickly get started running some popular open-source models from the community. Each model has its own dedicated folder, with detailed instructions on how to install and run it.
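As a rough illustration of what INT4 weight quantization means, the sketch below maps floating-point weights to 4-bit integers plus a shared scale, which is what cuts memory roughly 8x versus FP32 at a small accuracy cost. This is a simplified, self-contained toy, not IPEX-LLM's actual kernel (the real implementation uses optimized grouped low-bit formats):

```python
# Simplified sketch of symmetric INT4 weight quantization.
# Illustrative only -- IPEX-LLM's real kernels use optimized grouped
# quantization formats, not this toy per-tensor scheme.

def quantize_int4(weights):
    """Map floats to integers in [-8, 7] with a shared scale."""
    scale = max(abs(w) for w in weights) / 7.0
    quantized = [max(-8, min(7, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the INT4 codes."""
    return [q * scale for q in quantized]

weights = [0.12, -0.5, 0.33, 0.07]
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)
# Each code fits in 4 bits; the round-trip error is bounded by
# half a quantization step (scale / 2) when no clamping occurs.
```

Each 4-bit code replaces a 32-bit float, so a group of weights sharing one scale stores at roughly one-eighth the size.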

To run the examples, we recommend using Intel® Xeon® processors (server) or a 12th Gen or later Intel® Core™ processor (client).

For the operating system, IPEX-LLM supports Ubuntu 20.04 or later (glibc >= 2.17), CentOS 7 or later (glibc >= 2.17), and Windows 10/11.

Best Known Configuration on Linux

For better performance on Linux, it is recommended to set the environment variables that IPEX-LLM's init script provides:

pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
source ipex-llm-init
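For context, `source ipex-llm-init` exports CPU tuning variables into the current shell before you launch Python. The fragment below is an illustrative assumption of the kind of settings such an init script applies (thread count, thread affinity, an optimized allocator); the variable values shown are placeholders for this sketch, so run `source ipex-llm-init` rather than setting them by hand:

```shell
# Illustrative only: the kind of environment tuning ipex-llm-init applies.
# Exact variables and values depend on your machine and the script version.
export OMP_NUM_THREADS=16   # assumed value: pin OpenMP to the physical core count
export KMP_AFFINITY=granularity=fine,compact,1,0   # assumed: keep threads on their cores
# The script may also preload a high-performance memory allocator via LD_PRELOAD.
```

Because these are exported into the shell session, they apply to every Python process you start afterward in that terminal.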