IPEX-LLM Transformers INT4 Optimization for Large Language Model on Intel GPUs

You can use IPEX-LLM to run almost any Hugging Face Transformers model with INT4 optimizations on a laptop with an Intel GPU. This directory contains example scripts to help you quickly get started using IPEX-LLM to run some popular open-source models from the community. Each model has its own dedicated folder, where you can find detailed instructions on how to install and run it.
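The per-model examples all follow the same basic pattern, sketched below. This is an illustrative outline, not a substitute for the instructions in each folder: it assumes the `ipex-llm` package and Intel GPU (XPU) driver stack are installed, and the model path, prompt, and generation arguments are placeholders you would replace for your model of choice.

```python
# Minimal sketch of the INT4 GPU inference pattern used by these examples.
# Assumptions: `ipex-llm` is installed and an Intel GPU is available as "xpu";
# the model path and generation settings below are illustrative only.
import torch
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in for HF AutoModel
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # replace with your chosen model

# load_in_4bit=True applies IPEX-LLM's INT4 optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # move the optimized model onto the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The first `generate` call on XPU typically includes one-time warm-up compilation, so the examples in each folder usually time a second call when reporting latency.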