Running HuggingFace transformers models using IPEX-LLM on Intel GPU

This folder contains examples of running any HuggingFace transformers model on IPEX-LLM (using the standard AutoModel APIs):

  • Model: examples of running HuggingFace transformers models (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) using INT4 optimizations (see the first sketch after this list)
  • More-Data-Types: examples of applying other low-bit optimizations (FP8/INT8/FP4, etc.; see the second sketch after this list)
  • Save-Load: examples of saving and loading low-bit models (see the third sketch after this list)
  • Advanced-Quantizations: examples of loading GGUF/AWQ/GPTQ models (see the fourth sketch after this list)
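
As a quick orientation, here is a minimal sketch of the pattern the Model examples follow: load a model through IPEX-LLM's drop-in AutoModel API with INT4 optimization and run generation on an Intel GPU. The model path and prompt are placeholders; any supported HuggingFace transformers model can be substituted.

```python
# Minimal sketch: INT4 inference on an Intel GPU with IPEX-LLM.
# The model path and prompt below are placeholders.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement for transformers' AutoModel

model_path = "meta-llama/Llama-2-7b-chat-hf"

# load_in_4bit=True applies IPEX-LLM's INT4 optimization while loading
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # move the optimized model to the Intel GPU

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is AI?", return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```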
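
The More-Data-Types examples swap the INT4 default for another precision via the `load_in_low_bit` argument. A sketch, reusing the placeholder model path from above:

```python
from ipex_llm.transformers import AutoModelForCausalLM

# load_in_low_bit selects the precision, e.g. "sym_int4", "sym_int8", "fp4", "fp8"
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                             load_in_low_bit="fp8",
                                             trust_remote_code=True).to("xpu")
```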
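
The Save-Load examples persist the already-converted low-bit weights so that later runs skip the conversion step entirely. A sketch, assuming the `model` and `tokenizer` objects from the first sketch and a placeholder save directory:

```python
from ipex_llm.transformers import AutoModelForCausalLM

save_path = "./llama-2-7b-chat-int4"  # placeholder directory

# Save the converted low-bit weights alongside the tokenizer files
model.save_low_bit(save_path)
tokenizer.save_pretrained(save_path)

# Reload directly in low-bit form; the original FP16 checkpoint is no longer needed
model = AutoModelForCausalLM.load_low_bit(save_path, trust_remote_code=True).to("xpu")
```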
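
For Advanced-Quantizations, pre-quantized checkpoints such as GPTQ models go through the same `from_pretrained` call, which converts the quantized weights while loading. A hedged sketch based on my reading of the GPTQ example, using a placeholder Qwen GPTQ checkpoint:

```python
import torch
from ipex_llm.transformers import AutoModelForCausalLM

# Placeholder GPTQ checkpoint; IPEX-LLM converts the GPTQ weights during load
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B-Chat-GPTQ-Int4",
                                             load_in_4bit=True,
                                             torch_dtype=torch.float,
                                             trust_remote_code=True).to("xpu")
```

See the Advanced-Quantizations folder for the full GGUF, AWQ, and GPTQ variants of this pattern.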