ipex-llm/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations
AWQ       [LLM] Support llm-awq vicuna-7b-1.5 on arc (#9874)              2024-01-10 14:28:39 +08:00
GGUF      Fix Mixtral GGUF Wrong Output Issue (#9930)                     2024-01-18 14:11:27 +08:00
GGUF-IQ2  Update README.md (#10213)                                       2024-02-22 17:22:59 +08:00
GPTQ      Update llm gpu xpu default related info to PyTorch 2.1 (#9866)  2024-01-09 15:38:47 +08:00