| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| awq/ | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| gguf/ | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| layers/ | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| models/ | add phi-2 optimization (#10843) | 2024-04-22 18:56:47 +08:00 |
| __init__.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| bmm.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| convert.py | add phi-2 optimization (#10843) | 2024-04-22 18:56:47 +08:00 |
| convert_ipex.py | LLM: Fix ipex torchscript=True error (#10832) | 2024-04-22 15:53:09 +08:00 |
| embedding.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| kv.py | optimize starcoder normal kv cache (#10642) | 2024-04-03 15:27:02 +08:00 |
| lisa.py | LISA Finetuning Example (#10743) | 2024-04-18 13:48:10 +08:00 |
| load_config.yaml | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| loader.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| lookup.py | Fix No module named 'transformers.cache_utils' with transformers < 4.36 (#10835) | 2024-04-22 14:05:50 +08:00 |
| low_bit_linear.py | Support q4k in ipex-llm (#10796) | 2024-04-18 18:55:28 +08:00 |
| model.py | LLM: add mixed precision for lm_head (#10795) | 2024-04-18 19:11:31 +08:00 |
| modelling_bigdl.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| qlora.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| relora.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| speculative.py | Fix No module named 'transformers.cache_utils' with transformers < 4.36 (#10835) | 2024-04-22 14:05:50 +08:00 |
| training_patch.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| utils.py | Disable fast fused rope on UHD (#10780) | 2024-04-18 10:03:53 +08:00 |
| xpu_customize_fwd.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
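Most entries above trace back to the bigdl.llm to ipex_llm rename (#24). A minimal sketch of what that rename means for callers, assuming the standard transformers-style entry point that ipex-llm documents; the model id here is a hypothetical placeholder:

```python
# Before the refactor (#24), the entry point lived under the bigdl.llm namespace:
#   from bigdl.llm.transformers import AutoModelForCausalLM
# After the refactor, the same import moves to ipex_llm:
from ipex_llm.transformers import AutoModelForCausalLM

# load_in_4bit=True requests low-bit weight loading (handled by code such as
# low_bit_linear.py in this directory); the model id is illustrative only.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    load_in_4bit=True,
)
```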