Running HuggingFace models using IPEX-LLM on Intel GPU

This folder contains examples of running HuggingFace models on Intel GPU with IPEX-LLM:

  • LLM: examples of running large language models (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) with IPEX-LLM optimizations (see the loading sketch after this list)
  • Multimodal: examples of running large multimodal models (Stable Diffusion models, Qwen-VL-Chat, glm-4v, etc.) with IPEX-LLM optimizations
  • More-Data-Types: examples of applying other low-bit optimizations (FP8/INT8/FP4, etc.)
  • Save-Load: examples of saving and loading low-bit models (see the save/load sketch after this list)
  • Advanced-Quantizations: examples of loading GGUF/AWQ/GPTQ models
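
The examples under LLM (and most of the other folders here) share one loading-and-generation pattern. Below is a minimal sketch of that pattern, not a verbatim copy of any example: the model ID, prompt, and generation settings are placeholders, and it assumes ipex-llm is installed with its Intel GPU ("xpu") dependencies.

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # IPEX-LLM's drop-in replacement

# Placeholder model ID: substitute any HuggingFace causal LM covered by the LLM examples.
model_path = "meta-llama/Llama-2-7b-chat-hf"

# load_in_4bit=True converts the weights to IPEX-LLM's INT4 format while loading.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True,
                                             use_cache=True)
model = model.half().to("xpu")  # run in fp16 on the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The More-Data-Types examples vary only the quantization argument, e.g. replacing load_in_4bit=True with load_in_low_bit="fp8" (or another supported dtype string).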
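
For Save-Load, here is a sketch of persisting the quantized weights so later runs skip the conversion step; save_low_bit/load_low_bit are the IPEX-LLM methods these examples use, and the directory path is a placeholder.

```python
from ipex_llm.transformers import AutoModelForCausalLM

save_path = "./llama-2-7b-chat-sym-int4"  # placeholder output directory

# Save the already low-bit model and its tokenizer (assumes `model` and
# `tokenizer` from the sketch above).
model.save_low_bit(save_path)
tokenizer.save_pretrained(save_path)

# Reload directly in low-bit form; no full-precision checkpoint is needed.
model = AutoModelForCausalLM.load_low_bit(save_path, trust_remote_code=True)
model = model.half().to("xpu")
```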