| Model |
|---|
| aquila2 |
| baichuan |
| baichuan2 |
| bark |
| bluelm |
| chatglm2 |
| chatglm3 |
| codegeex2 |
| codegemma |
| codellama |
| cohere |
| deciLM-7b |
| deepseek |
| distil-whisper |
| dolly-v1 |
| dolly-v2 |
| flan-t5 |
| glm4 |
| internlm2 |
| llama2 |
| llama3 |
| llava |
| mamba |
| minicpm |
| mistral |
| mixtral |
| openai-whisper |
| phi-1_5 |
| phi-2 |
| phi-3 |
| phixtral |
| qwen-vl |
| qwen1.5 |
| qwen2 |
| replit |
| solar |
| speech-t5 |
| stablelm |
| starcoder |
| yi |
| yuan2 |
# IPEX-LLM INT4 Optimization for Large Language Models on Intel GPUs
You can use the `optimize_model` API to accelerate general PyTorch models on Intel GPUs. This directory contains example scripts to help you quickly get started running some popular open-source models in the community with IPEX-LLM. Each model has its own dedicated folder with detailed instructions on how to install its dependencies and run it.
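As a rough illustration of the pattern the example folders follow, the sketch below loads a Hugging Face model and applies IPEX-LLM's `optimize_model` before moving it to the Intel GPU. The helper name `load_int4_model` and the deferred imports are choices made here for illustration; it assumes `ipex_llm`, `transformers`, and a PyTorch build with the XPU backend are installed.

```python
def load_int4_model(model_path: str):
    """Load a Hugging Face model and apply IPEX-LLM low-bit optimization.

    A minimal sketch, not a drop-in script: assumes `ipex_llm` and
    `transformers` are installed and an Intel GPU (XPU device) is available.
    `model_path` can be a Hugging Face model id or a local checkpoint path.
    """
    # Imports are deferred into the function so the sketch can be read
    # (and the helper defined) without the GPU stack installed.
    from transformers import AutoModelForCausalLM
    from ipex_llm import optimize_model

    model = AutoModelForCausalLM.from_pretrained(
        model_path, trust_remote_code=True
    )
    model = optimize_model(model)  # applies low-bit (INT4 by default) optimization
    return model.to("xpu")         # move the optimized model to the Intel GPU
```

A typical example script then builds a tokenizer for the same `model_path`, tokenizes a prompt, and calls `model.generate(...)` on the XPU device.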