ipex-llm/python/llm/src/ipex_llm/transformers

Latest commit: c8679ad592 by Kai Huang, 2024-11-04 09:51:15 +08:00
Qwen layernorm as input (#12309)
* qwen layernorm as input
* add group size
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| awq | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| gguf | IPEX Duplicate importer V2 (#11310) | 2024-06-19 16:29:19 +08:00 |
| layers | Divide core-xe packages (#11131) | 2024-05-28 12:00:18 +08:00 |
| models | fix qwen2 attention_mask slice (#12307) | 2024-10-31 17:00:05 +08:00 |
| npu_models | add npu_group_size for transformers_int4_npu_win in all-in-one benchmark api (#12316) | 2024-11-01 18:44:27 +08:00 |
| npu_pipeline_model | Qwen layernorm as input (#12309) | 2024-11-04 09:51:15 +08:00 |
| __init__.py | Refactor fastapi-serving and add one card serving(#11581) | 2024-07-17 11:12:43 +08:00 |
| bmm.py | Divide core-xe packages (#11131) | 2024-05-28 12:00:18 +08:00 |
| convert.py | Codegeex support (#12303) | 2024-10-31 15:28:56 +08:00 |
| convert_ipex.py | LLM: Fix bigdl_ipex_int8 warning (#10890) | 2024-04-26 11:18:44 +08:00 |
| embedding.py | add save_low_bit support for DiskEmbedding (#11621) | 2024-07-19 10:34:53 +08:00 |
| kv.py | add basic support for llama3.2 (#12125) | 2024-09-26 15:46:19 +08:00 |
| lisa.py | LISA Finetuning Example (#10743) | 2024-04-18 13:48:10 +08:00 |
| load_config.yaml | Adding load_low_bit interface for ipex_llm_worker (#11000) | 2024-05-13 15:30:19 +08:00 |
| loader.py | Add half precision for fastchat models (#11130) | 2024-05-24 15:41:14 +08:00 |
| lookup.py | Performance mode strategy update for input_embeds input (#11997) | 2024-09-03 17:46:16 +08:00 |
| low_bit_linear.py | bugfix for qlora finetuning on GPU (#12298) | 2024-10-30 16:54:10 +08:00 |
| model.py | Patch sdpa check function in specific module attributes table (#12285) | 2024-10-29 18:41:09 +08:00 |
| modelling_bigdl.py | Remove chatglm_C Module to Eliminate LGPL Dependency (#11178) | 2024-05-31 17:03:11 +08:00 |
| npu_model.py | [NPU pipeline] Support save & load and update examples (#12293) | 2024-10-30 10:02:00 +08:00 |
| patches.py | Patch sdpa check function in specific module attributes table (#12285) | 2024-10-29 18:41:09 +08:00 |
| pipeline_parallel.py | Add lightweight-serving whisper asr example (#11847) | 2024-08-22 15:46:28 +08:00 |
| qlora.py | Upgrade Peft version to 0.10.0 for LLM finetune (#10886) | 2024-05-07 15:09:14 +08:00 |
| relora.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| speculative.py | support chatglm4 in lookup (#11855) | 2024-08-21 15:53:17 +08:00 |
| streamer.py | [LLM]Reopen autotp generate_stream (#11120) | 2024-05-24 17:16:14 +08:00 |
| training_patch.py | Fix error during merging adapter (#11145) | 2024-05-27 19:41:42 +08:00 |
| utils.py | deepspeed zero3 QLoRA finetuning (#11625) | 2024-08-13 16:15:29 +08:00 |
| xpu_customize_fwd.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
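For context, files such as model.py, convert.py and low_bit_linear.py back the package's HuggingFace-style loading path. Below is a minimal usage sketch following ipex-llm's documented API; the model id, prompt, and XPU device placement are placeholder assumptions, not part of this listing.

```python
# Minimal sketch of consuming ipex_llm.transformers with low-bit weights.
# Assumes an ipex-llm[xpu] install; the model path and prompt are placeholders.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id

# load_in_4bit=True converts Linear layers to ipex-llm's low-bit kernels
# during loading (handled by convert.py / low_bit_linear.py in this directory).
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")  # move to Intel GPU; omit for CPU-only runs

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer("What is IPEX-LLM?", return_tensors="pt").input_ids.to("xpu")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```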