| Name | Last commit message | Last commit date |
| --- | --- | --- |
| awq | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| gguf | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| layers | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| models | Use new sdp again (#11025) | 2024-05-16 09:33:34 +08:00 |
| __init__.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| bmm.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| convert.py | [WIP] Support llama2 with transformers==4.38.0 (#11024) | 2024-05-15 18:07:00 +08:00 |
| convert_ipex.py | LLM: Fix bigdl_ipex_int8 warning (#10890) | 2024-04-26 11:18:44 +08:00 |
| embedding.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| kv.py | remove new_layout parameter (#10906) | 2024-04-29 10:31:50 +08:00 |
| lisa.py | LISA Finetuning Example (#10743) | 2024-04-18 13:48:10 +08:00 |
| load_config.yaml | Adding load_low_bit interface for ipex_llm_worker (#11000) | 2024-05-13 15:30:19 +08:00 |
| loader.py | Adding load_low_bit interface for ipex_llm_worker (#11000) | 2024-05-13 15:30:19 +08:00 |
| lookup.py | Update lookahead strategy (#11021) | 2024-05-15 14:48:05 +08:00 |
| low_bit_linear.py | Support fp6 save & load (#11034) | 2024-05-15 17:52:02 +08:00 |
| model.py | Add fp6 support on gpu (#11008) | 2024-05-14 16:31:44 +08:00 |
| modelling_bigdl.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| qlora.py | Upgrade Peft version to 0.10.0 for LLM finetune (#10886) | 2024-05-07 15:09:14 +08:00 |
| relora.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| speculative.py | Fix spculative llama3 no stop error (#10963) | 2024-05-08 17:09:47 +08:00 |
| training_patch.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| utils.py | Disable fast fused rope on UHD (#10780) | 2024-04-18 10:03:53 +08:00 |
| xpu_customize_fwd.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |