| Name | Latest commit | Date |
| --- | --- | --- |
| cli | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| ggml | Support q4k in ipex-llm (#10796) | 2024-04-18 18:55:28 +08:00 |
| gptq | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| langchain | Add tokenizer_id in Langchain (#10588) | 2024-04-03 14:25:35 +08:00 |
| llamaindex | Llamaindex: add tokenizer_id and support chat (#10590) | 2024-04-07 13:51:34 +08:00 |
| serving | Replace ipex with ipex-llm (#10554) | 2024-03-28 13:54:40 +08:00 |
| transformers | Disable sdpa (#10814) | 2024-04-19 17:33:18 +08:00 |
| utils | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| vllm | Remove not-imported MistralConfig (#10670) | 2024-04-07 10:32:05 +08:00 |
| vllm2 | Add vLLM[xpu] related code (#10779) | 2024-04-18 15:29:20 +08:00 |
| __init__.py | Update setup.py and add new actions and add compatible mode (#25) | 2024-03-22 15:44:59 +08:00 |
| convert_model.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| format.sh | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| llm_patching.py | Axolotl v0.4.0 support (#10773) | 2024-04-17 09:49:11 +08:00 |
| models.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| optimize.py | Add vLLM[xpu] related code (#10779) | 2024-04-18 15:29:20 +08:00 |