ipex-llm/python/llm/example/CPU/PyTorch-Models/Model
Model folders: aquila2, bark, bert, bluelm, chatglm, chatglm3, codellama, codeshell, deciLM-7b, deepseek, deepseek-moe, distil-whisper, flan-t5, fuyu, internlm-xcomposer, internlm2, llama2, llava, mamba, meta-llama, mistral, mixtral, openai-whisper, phi-1_5, phi-2, phixtral, qwen-vl, qwen1.5, skywork, solar, stablelm, wizardcoder-python, yi, yuan2, ziya (plus README.md)

IPEX-LLM INT4 Optimization for Large Language Models

You can use the optimize_model API to accelerate general PyTorch models on Intel servers and PCs. This directory contains example scripts to help you quickly get started running some popular open-source models with IPEX-LLM. Each model has its own dedicated folder with detailed instructions on how to install and run it.
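As a minimal sketch of the one-line optimization flow described above: load a Hugging Face model as usual, then pass it through optimize_model, which applies INT4 optimization by default. The model path and the helper function name below are illustrative, not part of IPEX-LLM itself.

```python
def load_optimized(model_path: str):
    """Load a Hugging Face causal LM and apply IPEX-LLM low-bit optimization.

    Imports are done lazily so the function can be defined without
    torch/transformers/ipex_llm installed.
    """
    import torch
    from transformers import AutoModelForCausalLM
    from ipex_llm import optimize_model

    # Load the original FP16 model from the local path or HF Hub.
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        torch_dtype=torch.float16,
        trust_remote_code=True,
    )
    # One-line acceleration: optimize_model applies INT4 optimization
    # by default and returns a model usable with the normal generate() API.
    return optimize_model(model)


# Example usage (downloads weights, so only run with the model available):
#   model = load_optimized("meta-llama/Llama-2-7b-chat-hf")
#   output_ids = model.generate(input_ids, max_new_tokens=32)
```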

To run these examples, we recommend using Intel® Xeon® processors (server), or 12th Gen Intel® Core™ processors or later (client).

For OS, IPEX-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.

Best Known Configuration on Linux

For better performance, it is recommended to configure environment variables on Linux using the ipex-llm-init script that ships with IPEX-LLM:

pip install ipex-llm
source ipex-llm-init