Directory: ipex-llm/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations
Latest commit: a54cd767b1 by Wang, Jian4 (2024-01-03 14:49:02 +08:00)
LLM: Add gguf falcon (#9801)

* init falcon
* update convert.py
* update style
Subdirectories:

AWQ    Support for Mixtral AWQ (#9775)                                                2023-12-25 16:08:09 +08:00
GGUF   LLM: Add gguf falcon (#9801)                                                   2024-01-03 14:49:02 +08:00
GPTQ   Revert "[LLM] IPEX auto importer turn on by default for XPU (#9730)" (#9759)   2023-12-22 16:38:24 +08:00