ipex-llm/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations
Last commit: e76d984164 — [LLM] Support llm-awq vicuna-7b-1.5 on arc (#9874), ZehuaCao, 2024-01-10 14:28:39 +08:00
AWQ     [LLM] Support llm-awq vicuna-7b-1.5 on arc (#9874)              2024-01-10 14:28:39 +08:00
GGUF    Update llm gpu xpu default related info to PyTorch 2.1 (#9866)  2024-01-09 15:38:47 +08:00
GPTQ    Update llm gpu xpu default related info to PyTorch 2.1 (#9866)  2024-01-09 15:38:47 +08:00