ipex-llm/python/llm/src/ipex_llm
Last updated: 2025-02-10 13:25:53 +08:00
| Name | Last commit | Date |
|------|-------------|------|
| `cli` | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| `ggml` | LLM: add new qtype woq_int4 to support gemm int4 temporary. (#12706) | 2025-01-15 14:41:33 +08:00 |
| `gptq` | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| `langchain` | Remove chatglm_C Module to Eliminate LGPL Dependency (#11178) | 2024-05-31 17:03:11 +08:00 |
| `llamaindex` | Llamaindex: add tokenizer_id and support chat (#10590) | 2024-04-07 13:51:34 +08:00 |
| `serving` | Upgrade to vllm 0.6.2 (#12338) | 2024-11-12 20:35:34 +08:00 |
| `transformers` | fix qwen2 vl (#12798) | 2025-02-10 13:25:53 +08:00 |
| `utils` | Add benchmark_util for transformers >= 4.47.0 (#12644) | 2025-01-03 10:48:29 +08:00 |
| `vllm` | vLLM: Update vLLM-cpu to v0.6.6-post1 (#12728) | 2025-01-22 15:03:01 +08:00 |
| `__init__.py` | IPEX Duplicate importer V2 (#11310) | 2024-06-19 16:29:19 +08:00 |
| `convert_model.py` | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| `format.sh` | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| `llm_patching.py` | Upgrade Peft version to 0.10.0 for LLM finetune (#10886) | 2024-05-07 15:09:14 +08:00 |
| `models.py` | Remove chatglm_C Module to Eliminate LGPL Dependency (#11178) | 2024-05-31 17:03:11 +08:00 |
| `optimize.py` | add disable opts for awq (#12641) | 2025-01-02 15:45:22 +08:00 |