ipex-llm/python/llm/example/CPU/HF-Transformers-AutoModels/Advanced-Quantizations
Last commit: a54cd767b1 "LLM: Add gguf falcon (#9801)" by Wang, Jian4, 2024-01-03 14:49:02 +08:00
  * init falcon
  * update convert.py
  * update style
AWQ    Support for Mixtral AWQ (#9775)                         2023-12-25 16:08:09 +08:00
GGUF   LLM: Add gguf falcon (#9801)                            2024-01-03 14:49:02 +08:00
GPTQ   Uing bigdl-llm-init instead of bigdl-nano-init (#9558)  2023-11-30 10:10:29 +08:00