Running HuggingFace models using IPEX-LLM on Intel GPU

This folder contains examples of running any HuggingFace model on IPEX-LLM (minimal usage sketches follow the list):

  • LLM: examples of running large language models (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) using IPEX-LLM optimizations
  • Multimodal: examples of running large multimodal models (StableDiffusion models, Qwen-VL-Chat, glm-4v, etc.) using IPEX-LLM optimizations
  • More-Data-Types: examples of applying other low-bit optimizations (FP8/INT8/FP4, etc.)
  • Save-Load: examples of saving and loading low-bit models
  • Advanced-Quantizations: examples of loading GGUF/AWQ/GPTQ models
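
As a quick orientation, below is a minimal sketch of the pattern the LLM examples follow: load a HuggingFace checkpoint with IPEX-LLM's 4-bit (INT4) optimization and run generation on an Intel GPU. It assumes ipex-llm with XPU support is installed and the oneAPI environment has been sourced; the model id and prompt are placeholders.

```python
# Minimal sketch, assuming ipex-llm with Intel GPU (XPU) support is installed
# and the oneAPI environment variables have been sourced.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id

# Load the checkpoint with IPEX-LLM's 4-bit optimization, then move it to the Intel GPU.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is AI?"
with torch.inference_mode():
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Other precisions (such as those covered in the More-Data-Types folder) are requested in the same way via the low-bit loading arguments instead of `load_in_4bit`; see the individual examples for the exact options each model supports.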
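In the same spirit, the Save-Load examples persist an already-converted low-bit model so it can be reloaded later without re-quantizing the original checkpoint. A brief sketch follows; the save path is a placeholder and `model`/`tokenizer` are the objects from the snippet above.

```python
# Sketch of saving and reloading a low-bit model; the path is a placeholder.
from ipex_llm.transformers import AutoModelForCausalLM

save_path = "./llama-2-7b-chat-int4"  # placeholder directory

# Persist the converted low-bit weights and the tokenizer.
model.save_low_bit(save_path)
tokenizer.save_pretrained(save_path)

# Later, reload the low-bit model directly onto the Intel GPU.
model = AutoModelForCausalLM.load_low_bit(save_path, trust_remote_code=True).to("xpu")
```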