ipex-llm/python/llm/example
| Directory | Latest commit | Date |
|---|---|---|
| CPU | upgrade glm-4v example transformers version (#11719) | 2024-08-06 14:55:09 +08:00 |
| GPU | enable inference mode for deepspeed tp serving (#11742) | 2024-08-08 14:38:30 +08:00 |
| NPU/HF-Transformers-AutoModels | Switch to conhost when running on NPU (#11687) | 2024-07-30 17:08:06 +08:00 |