ipex-llm/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations
Yina Chen d5263e6681 Add awq load support (#9453)
* Support directly loading GPTQ models from huggingface
* fix style
* fix tests
* change example structure
* address comments
* init
* add examples
* update
* remove
* meet comments

---------

Co-authored-by: Yang Wang <yang3.wang@intel.com>
2023-11-16 14:06:25 +08:00
AWQ Add awq load support (#9453) 2023-11-16 14:06:25 +08:00
GPTQ Support directly loading gptq models from huggingface (#9391) 2023-11-13 20:48:12 -08:00