Running HuggingFace models using IPEX-LLM on Intel GPU

This folder contains examples of running HuggingFace models with IPEX-LLM:

  • LLM: examples of running large language models (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) using IPEX-LLM optimizations; the common loading pattern is sketched after this list
  • Multimodal: examples of running large multimodal models (StableDiffusion models, Qwen-VL-Chat, glm-4v, etc.) using IPEX-LLM optimizations
  • More-Data-Types: examples of applying other low-bit optimizations (FP8/INT8/FP4, etc.)
  • Save-Load: examples of saving and loading low-bit models; see the second sketch below
  • Advanced-Quantizations: examples of loading GGUF/AWQ/GPTQ models
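
As a minimal sketch of the pattern these examples share: IPEX-LLM provides drop-in replacements for the HuggingFace `transformers` Auto classes that quantize the model at load time, after which the model is moved to the Intel GPU (`"xpu"` device) for generation. The model path and prompt below are placeholders, and individual examples may pass slightly different keyword arguments:

```python
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder: any supported HF model

# load_in_4bit=True quantizes the weights to 4-bit on load; the
# More-Data-Types examples use load_in_low_bit="fp8" / "sym_int8" / "fp4"
# etc. in the same call instead.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             optimize_model=True,
                                             trust_remote_code=True,
                                             use_cache=True)
model = model.half().to("xpu")  # move the quantized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```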
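
The Save-Load examples follow roughly the pattern below, using IPEX-LLM's `save_low_bit`/`load_low_bit` methods so that later runs can reload the already-quantized weights instead of repeating the full-precision download and conversion; the save path is a placeholder:

```python
from ipex_llm.transformers import AutoModelForCausalLM

save_path = "./llama-2-7b-4bit"  # placeholder directory

# After the first from_pretrained() load (which performs the quantization),
# persist the low-bit weights to disk.
model.save_low_bit(save_path)

# Reload directly in low-bit form, then move to the Intel GPU as before.
model = AutoModelForCausalLM.load_low_bit(save_path, trust_remote_code=True)
model = model.half().to("xpu")
```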