ipex-llm/python/llm/src/ipex_llm/transformers/npu_models
Last commit: 2024-10-30 17:21:47 +08:00
__init__.py optimize llama npu perf (#11426) 2024-06-25 17:43:20 +08:00
baichuan.py fix baichuan (#11606) 2024-07-18 09:43:36 +08:00
baichuan_mp.py Support baichuan2 for level0 pipeline (#12289) 2024-10-29 19:24:16 +08:00
chatglm.py fix chatglm3 npu output (#11590) 2024-07-16 18:16:30 +08:00
chatglm4.py support npu glm4 (#11539) 2024-07-09 15:46:49 +08:00
common.py [NPU pipeline] Support save & load and update examples (#12293) 2024-10-30 10:02:00 +08:00
convert.py Initial support for quantized forward on CPU when quantization_group_size=0 (#12282) 2024-10-29 19:40:17 +08:00
convert_mp.py Support minicpm-1B in level0 pipeline (#12297) 2024-10-30 17:21:47 +08:00
kv.py Initial NPU support for MiniCPM-V-2_6 (#11966) 2024-08-30 16:34:35 +08:00
linear.py Initial support for quantized forward on CPU when quantization_group_size=0 (#12282) 2024-10-29 19:40:17 +08:00
llama.py remove obsolete npu code (#11967) 2024-08-29 14:16:44 -07:00
llama_mp.py Groupwise prefill optimization (#12291) 2024-10-30 14:59:45 +08:00
lm_head.py [NPU] Groupwise (#12241) 2024-10-23 14:10:58 +08:00
minicpm.py add minicpm 1B/2B npu support (#11507) 2024-07-04 16:31:04 +08:00
minicpm_mp.py Support minicpm-1B in level0 pipeline (#12297) 2024-10-30 17:21:47 +08:00
mistral.py add mistral npu support (#11523) 2024-07-08 13:17:15 +08:00
mp_models_base.py Groupwise prefill optimization (#12291) 2024-10-30 14:59:45 +08:00
paraformer_mp.py [NPU] Add support for loading a FunASR model (#12073) 2024-10-25 17:22:01 +08:00
phi3.py add npu sdp (#11562) 2024-07-11 16:57:35 +08:00
phi3_v.py optimize phi3-v encoder npu performance and add multimodal example (#11553) 2024-07-11 13:59:14 +08:00
qwen2.py add qwen2 npu support (#11504) 2024-07-04 11:01:25 +08:00
qwen2_mp.py Groupwise prefill optimization (#12291) 2024-10-30 14:59:45 +08:00
stablelm.py Optimize stablelm on NPU (#11512) 2024-07-05 14:21:57 +08:00