Model folders in this directory:

aquila2, bark, bert, bluelm, chatglm, chatglm3, codellama, codeshell, deciLM-7b, deepseek, deepseek-moe, distil-whisper, flan-t5, fuyu, internlm-xcomposer, internlm2, llama2, llama3, llava, mamba, meta-llama, mistral, mixtral, openai-whisper, phi-1_5, phi-2, phi-3, phixtral, qwen-vl, qwen1.5, skywork, solar, stablelm, wizardcoder-python, yi, yuan2, ziya

IPEX-LLM INT4 Optimization for Large Language Models

You can use the optimize_model API to accelerate general PyTorch models on Intel servers and PCs. This directory contains example scripts to help you quickly get started with IPEX-LLM and run popular open-source models from the community. Each model has its own dedicated folder with detailed instructions on how to install and run it.

To run the examples, we recommend using Intel® Xeon® processors (server) or a 12th Gen or later Intel® Core™ processor (client).

For the operating system, IPEX-LLM supports Ubuntu 20.04 or later, CentOS 7 or later, and Windows 10/11.

Best Known Configuration on Linux

For better performance, it is recommended to set environment variables on Linux with the help of the ipex-llm-init script:

pip install ipex-llm
source ipex-llm-init