ipex-llm/python/llm/example/GPU/HF-Transformers-AutoModels/Model
Model example folders in this directory: aquila, aquila2, baichuan, baichuan2, bluelm, chatglm2, chatglm3, chinese-llama2, codellama, codeshell, deciLM-7b, deepseek, distil-whisper, dolly-v1, dolly-v2, falcon, flan-t5, gemma, gpt-j, internlm, internlm2, llama2, mistral, mixtral, mpt, phi-1_5, phi-2, phixtral, qwen, qwen-vl, qwen1.5, redpajama, replit, rwkv4, rwkv5, solar, stablelm, starcoder, vicuna, voiceassistant, whisper, yi, yuan2.

IPEX-LLM Transformers INT4 Optimization for Large Language Model on Intel GPUs

You can use IPEX-LLM to run almost any Hugging Face Transformers model with INT4 optimizations on laptops with Intel GPUs. This directory contains example scripts to help you quickly get started using IPEX-LLM to run some popular open-source models from the community. Each model has its own dedicated folder with detailed instructions on how to install and run it.
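The per-model examples generally follow the same pattern: load the checkpoint through IPEX-LLM's drop-in replacement for the Transformers `AutoModel` classes with `load_in_4bit=True`, move the model to the `xpu` device, and generate as usual. A minimal sketch of that flow is below (requires an Intel GPU with the `ipex-llm[xpu]` package installed; the model path and prompt are placeholders, and individual examples may add model-specific options):

```python
import torch
from transformers import AutoTokenizer
# IPEX-LLM's drop-in replacement for the Hugging Face AutoModel class
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; substitute your model

# load_in_4bit=True applies INT4 optimizations during loading
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.half().to("xpu")  # run on the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is AI?"  # placeholder prompt
with torch.inference_mode():
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

See each model's folder for the exact install command (including the `--extra-index-url` for the Intel XPU package index) and any model-specific loading arguments.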