| Name | Last commit | Last updated |
| --- | --- | --- |
| awq | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| gguf | IPEX Duplicate importer V2 (#11310) | 2024-06-19 16:29:19 +08:00 |
| layers | Divide core-xe packages (#11131) | 2024-05-28 12:00:18 +08:00 |
| models | Support MiniCPM-V-2_6 multi-modal benchmarking with latency text streamer (#11963) | 2024-08-29 19:22:09 +08:00 |
| npu_models | Revert prefill logic of qwen2-7b (#11992) | 2024-09-03 14:45:01 +08:00 |
| __init__.py | Refactor fastapi-serving and add one card serving (#11581) | 2024-07-17 11:12:43 +08:00 |
| bmm.py | Divide core-xe packages (#11131) | 2024-05-28 12:00:18 +08:00 |
| convert.py | Support MiniCPM-V-2_6 multi-modal benchmarking with latency text streamer (#11963) | 2024-08-29 19:22:09 +08:00 |
| convert_ipex.py | LLM: Fix bigdl_ipex_int8 warning (#10890) | 2024-04-26 11:18:44 +08:00 |
| embedding.py | add save_low_bit support for DiskEmbedding (#11621) | 2024-07-19 10:34:53 +08:00 |
| kv.py | optimize phi3 memory usage (#11867) | 2024-08-20 17:32:51 +08:00 |
| lisa.py | LISA Finetuning Example (#10743) | 2024-04-18 13:48:10 +08:00 |
| load_config.yaml | Adding load_low_bit interface for ipex_llm_worker (#11000) | 2024-05-13 15:30:19 +08:00 |
| loader.py | Add half precision for fastchat models (#11130) | 2024-05-24 15:41:14 +08:00 |
| lookup.py | Fix wrong attention mask and garbage output for inputs_embeds inputs during lookup generation (#11989) | 2024-09-02 19:09:12 +08:00 |
| low_bit_linear.py | Fix vLLM not convert issues (#11817) | 2024-08-15 19:04:05 +08:00 |
| model.py | Update all-in-one benchmark prompts for continuation task & lookup update for minicpmv (#11827) | 2024-08-16 17:16:35 +08:00 |
| modelling_bigdl.py | Remove chatglm_C Module to Eliminate LGPL Dependency (#11178) | 2024-05-31 17:03:11 +08:00 |
| npu_model.py | Initial NPU support for MiniCPM-V-2_6 (#11966) | 2024-08-30 16:34:35 +08:00 |
| pipeline_parallel.py | Add lightweight-serving whisper asr example (#11847) | 2024-08-22 15:46:28 +08:00 |
| qlora.py | Upgrade Peft version to 0.10.0 for LLM finetune (#10886) | 2024-05-07 15:09:14 +08:00 |
| relora.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| speculative.py | support chatglm4 in lookup (#11855) | 2024-08-21 15:53:17 +08:00 |
| streamer.py | [LLM]Reopen autotp generate_stream (#11120) | 2024-05-24 17:16:14 +08:00 |
| training_patch.py | Fix error during merging adapter (#11145) | 2024-05-27 19:41:42 +08:00 |
| utils.py | deepspeed zero3 QLoRA finetuning (#11625) | 2024-08-13 16:15:29 +08:00 |
| xpu_customize_fwd.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |