ipex-llm/python/llm/src/ipex_llm/transformers

Latest commit: ac384e0f45 by Ruonan Wang, 2024-05-15 17:42:50 +08:00
"add fp6 mlp fusion (#11032)"

* add fp6 fusion
* add qkv fusion for fp6
* remove qkv first
| Name | Last commit | Date |
|------|-------------|------|
| awq | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| gguf | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| layers | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| models | add fp6 mlp fusion (#11032) | 2024-05-15 17:42:50 +08:00 |
| __init__.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| bmm.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| convert.py | fix phi3 (#11022) | 2024-05-15 09:32:12 +08:00 |
| convert_ipex.py | LLM: Fix bigdl_ipex_int8 warning (#10890) | 2024-04-26 11:18:44 +08:00 |
| embedding.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| kv.py | remove new_layout parameter (#10906) | 2024-04-29 10:31:50 +08:00 |
| lisa.py | LISA Finetuning Example (#10743) | 2024-04-18 13:48:10 +08:00 |
| load_config.yaml | Adding load_low_bit interface for ipex_llm_worker (#11000) | 2024-05-13 15:30:19 +08:00 |
| loader.py | Adding load_low_bit interface for ipex_llm_worker (#11000) | 2024-05-13 15:30:19 +08:00 |
| lookup.py | Update lookahead strategy (#11021) | 2024-05-15 14:48:05 +08:00 |
| low_bit_linear.py | Add fp6 support on gpu (#11008) | 2024-05-14 16:31:44 +08:00 |
| model.py | Add fp6 support on gpu (#11008) | 2024-05-14 16:31:44 +08:00 |
| modelling_bigdl.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| qlora.py | Upgrade Peft version to 0.10.0 for LLM finetune (#10886) | 2024-05-07 15:09:14 +08:00 |
| relora.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| speculative.py | Fix spculative llama3 no stop error (#10963) | 2024-05-08 17:09:47 +08:00 |
| training_patch.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| utils.py | Disable fast fused rope on UHD (#10780) | 2024-04-18 10:03:53 +08:00 |
| xpu_customize_fwd.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |