ipex-llm/python/llm/example

Latest commit: Ruonan Wang — fix gptq of llama (#11749), 2024-08-09 16:39:25 +08:00
* fix gptq of llama
* small fix
Directory                        Latest commit                                          Date
CPU                              upgrade glm-4v example transformers version (#11719)   2024-08-06 14:55:09 +08:00
GPU                              fix gptq of llama (#11749)                             2024-08-09 16:39:25 +08:00
NPU/HF-Transformers-AutoModels   Switch to conhost when running on NPU (#11687)         2024-07-30 17:08:06 +08:00