ipex-llm/python/llm/example/NPU
HF-Transformers-AutoModels/  (last commit: Support qwen2-1.5b with fused decoderlayer optimization on NPU (#11888), 2024-08-22 11:09:12 +08:00)
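
The HF-Transformers-AutoModels examples show how a Hugging Face model such as qwen2-1.5b can be loaded through ipex-llm and run on an Intel NPU. The snippet below is a minimal sketch of that flow; the ipex_llm.transformers.npu_model.AutoModelForCausalLM import path, the load_in_low_bit value, and the model id are assumptions based on the typical ipex-llm AutoModel interface, so the READMEs inside this directory remain the authoritative reference.

    # Minimal sketch (assumed API): load a HF model on an Intel NPU via ipex-llm.
    # The npu_model import and load_in_low_bit value are assumptions; see the
    # READMEs under HF-Transformers-AutoModels for the exact, supported usage.
    import torch
    from transformers import AutoTokenizer
    from ipex_llm.transformers.npu_model import AutoModelForCausalLM  # assumed NPU entry point

    model_path = "Qwen/Qwen2-1.5B-Instruct"  # hypothetical model id for illustration

    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        torch_dtype=torch.float16,
        trust_remote_code=True,
        load_in_low_bit="sym_int4",  # assumed low-bit setting for NPU execution
    )
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Simple generation to verify the model runs end to end.
    inputs = tokenizer("What is AI?", return_tensors="pt")
    with torch.inference_mode():
        output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))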