Directory: ipex-llm/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations

Latest commit: 89069d6173 by dingbaorong, 2023-12-06 15:17:54 +08:00
Add gpu gguf example (#9603)
* add gpu gguf example
* some fixes
* address kai's comments
* address json's comments
Subdirectories and their latest commits:
* AWQ: LLM: support Mistral AWQ models (#9520), 2023-11-24 16:20:22 +08:00
* GGUF: Add gpu gguf example (#9603), 2023-12-06 15:17:54 +08:00
* GPTQ: Support directly loading gptq models from huggingface (#9391), 2023-11-13 20:48:12 -08:00