| Name | Latest commit | Date |
|------|---------------|------|
| awq | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| gguf | IPEX Duplicate importer V2 (#11310) | 2024-06-19 16:29:19 +08:00 |
| layers | Divide core-xe packages (#11131) | 2024-05-28 12:00:18 +08:00 |
| models | fix qwen2 vl again (#12174) | 2024-10-10 13:50:01 +08:00 |
| npu_models | optimize npu qwen2 (#12107) | 2024-09-20 19:46:16 +08:00 |
| npu_pipeline_model | update NPU pipeline generate (#12182) | 2024-10-11 17:39:20 +08:00 |
| __init__.py | Refactor fastapi-serving and add one card serving(#11581) | 2024-07-17 11:12:43 +08:00 |
| bmm.py | Divide core-xe packages (#11131) | 2024-05-28 12:00:18 +08:00 |
| convert.py | fix qwen2 vl again (#12174) | 2024-10-10 13:50:01 +08:00 |
| convert_ipex.py | LLM: Fix bigdl_ipex_int8 warning (#10890) | 2024-04-26 11:18:44 +08:00 |
| embedding.py | add save_low_bit support for DiskEmbedding (#11621) | 2024-07-19 10:34:53 +08:00 |
| kv.py | add basic support for llama3.2 (#12125) | 2024-09-26 15:46:19 +08:00 |
| lisa.py | LISA Finetuning Example (#10743) | 2024-04-18 13:48:10 +08:00 |
| load_config.yaml | Adding load_low_bit interface for ipex_llm_worker (#11000) | 2024-05-13 15:30:19 +08:00 |
| loader.py | Add half precision for fastchat models (#11130) | 2024-05-24 15:41:14 +08:00 |
| lookup.py | Performance mode strategy update for input_embeds input (#11997) | 2024-09-03 17:46:16 +08:00 |
| low_bit_linear.py | Switching from vLLM v0.3.3 to vLLM 0.5.4 (#12042) | 2024-09-10 15:37:43 +08:00 |
| model.py | Update all-in-one benchmark prompts for continuation task & lookup update for minicpmv (#11827) | 2024-08-16 17:16:35 +08:00 |
| modelling_bigdl.py | Remove chatglm_C Module to Eliminate LGPL Dependency (#11178) | 2024-05-31 17:03:11 +08:00 |
| npu_model.py | [NPU] Add mixed_precision for Qwen2 7B (#12098) | 2024-09-20 16:36:21 +08:00 |
| pipeline_parallel.py | Add lightweight-serving whisper asr example (#11847) | 2024-08-22 15:46:28 +08:00 |
| qlora.py | Upgrade Peft version to 0.10.0 for LLM finetune (#10886) | 2024-05-07 15:09:14 +08:00 |
| relora.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| speculative.py | support chatglm4 in lookup (#11855) | 2024-08-21 15:53:17 +08:00 |
| streamer.py | [LLM]Reopen autotp generate_stream (#11120) | 2024-05-24 17:16:14 +08:00 |
| training_patch.py | Fix error during merging adapter (#11145) | 2024-05-27 19:41:42 +08:00 |
| utils.py | deepspeed zero3 QLoRA finetuning (#11625) | 2024-08-13 16:15:29 +08:00 |
| xpu_customize_fwd.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
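
The files above appear to belong to the `ipex_llm.transformers` package (model loading, low-bit linear layers, NPU variants, and finetuning helpers). As a minimal sketch of how this package is typically used, assuming the standard `AutoModelForCausalLM` entry point with `load_in_4bit`; the model id and generation settings below are illustrative only:

```python
# Minimal sketch: load a Hugging Face model through ipex_llm.transformers with
# 4-bit (INT4) low-bit optimization applied at load time. The model id used
# here is a hypothetical example, not something taken from this listing.
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # illustrative model id

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,        # quantize weights to INT4 while loading
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run a short generation to verify the optimized model works end to end.
inputs = tokenizer("What is IPEX-LLM?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```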