ipex-llm/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations

Latest commit: a7bc89b3a1 by dingbaorong — remove q4_1 in gguf example (#9610), 2023-12-06 16:00:05 +08:00
  * remove q4_1
  * fixes
Directory   Last commit                                                     Date
AWQ         LLM: support Mistral AWQ models (#9520)                         2023-11-24 16:20:22 +08:00
GGUF        remove q4_1 in gguf example (#9610)                             2023-12-06 16:00:05 +08:00
GPTQ        Support directly loading gptq models from huggingface (#9391)   2023-11-13 20:48:12 -08:00