# Running HuggingFace models using IPEX-LLM on Intel GPU
This folder contains examples of running any HuggingFace model on IPEX-LLM (a minimal usage sketch follows the list):
- LLM: examples of running large language models (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) using IPEX-LLM optimizations
- Multimodal: examples of running large multimodal models (StableDiffusion models, Qwen-VL-Chat, glm-4v, etc.) using IPEX-LLM optimizations
- More-Data-Types: examples of applying other low-bit optimizations (FP8/INT8/FP4, etc.)
- Save-Load: examples of saving and loading low-bit models
- Advanced-Quantizations: examples of loading GGUF/AWQ/GPTQ models
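
The examples above share the same basic pattern: load a HuggingFace model through IPEX-LLM's `transformers`-style API with a low-bit optimization, move it to the Intel GPU (`xpu`) device, and run generation. The sketch below illustrates that pattern; it is not copied from any specific example in this folder, and the model id `meta-llama/Llama-2-7b-chat-hf` is only an illustrative choice.

```python
# Minimal sketch of the IPEX-LLM GPU workflow (assumed model id, prompt, and
# generation settings; adapt to the example you are running).
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # any HuggingFace model path

# load_in_4bit=True applies IPEX-LLM's INT4 optimization at load time;
# see More-Data-Types for other low-bit formats.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # run on the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer("What is AI?", return_tensors="pt").input_ids.to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```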