# IPEX-LLM Transformers INT4 Optimization for Large Language Models on Intel GPUs

You can use IPEX-LLM to run almost every Hugging Face Transformers model with INT4 optimizations on a laptop with an Intel GPU. This directory contains example scripts to help you quickly get started using IPEX-LLM with some popular open-source models from the community. Each model has its own dedicated folder with detailed instructions on how to install and run it:

- aquila
- aquila2
- baichuan
- baichuan2
- bluelm
- chatglm2
- chatglm3
- chinese-llama2
- codegeex2
- codegemma
- codellama
- codeshell
- cohere
- deciLM-7b
- deepseek
- dolly-v1
- dolly-v2
- falcon
- flan-t5
- gemma
- gemma2
- glm-edge
- glm4
- gpt-j
- internlm
- internlm2
- llama2
- llama3
- llama3.1
- llama3.2
- minicpm
- minicpm3
- mistral
- mixtral
- mpt
- phi-1_5
- phi-2
- phi-3
- phixtral
- qwen
- qwen1.5
- qwen2
- qwen2.5
- redpajama
- replit
- rwkv4
- rwkv5
- solar
- stablelm
- starcoder
- vicuna
- yi
- yuan2
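Most of the per-model examples follow the same basic pattern: load the model through IPEX-LLM's drop-in `transformers`-style API with INT4 optimization enabled, then move it to the `xpu` device. The sketch below illustrates that pattern; it assumes `ipex-llm[xpu]` is installed, an Intel GPU is available, and uses `meta-llama/Llama-2-7b-chat-hf` purely as an illustrative model id — substitute any model from the list above, and consult that model's folder for its exact instructions.

```python
# Hedged sketch of the common IPEX-LLM GPU example pattern.
# Assumes: `pip install --pre ipex-llm[xpu]` has been run on a machine
# with an Intel GPU, and the model weights are downloadable/local.
import torch
from transformers import AutoTokenizer
# IPEX-LLM mirrors the Hugging Face AutoModel classes:
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # illustrative; any supported model id or local path

# load_in_4bit=True applies the INT4 optimization at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.half().to("xpu")  # run the optimized model on the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the loader returns an ordinary `transformers` model object, the rest of the script (tokenization, `generate`, decoding) is standard Hugging Face code; only the import and the `load_in_4bit`/`to("xpu")` steps differ from a CPU-only Transformers script.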