ipex-llm/python/llm/src/ipex_llm

Latest commit: 310f18c8af by Ruonan Wang, "update NPU pipeline generate (#12182)", 2024-10-11 17:39:20 +08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| cli | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| ggml | Init NPU quantize method and support q8_0_rtn (#11452) | 2024-07-01 13:45:07 +08:00 |
| gptq | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| langchain | Remove chatglm_C Module to Eliminate LGPL Dependency (#11178) | 2024-05-31 17:03:11 +08:00 |
| llamaindex | Llamaindex: add tokenizer_id and support chat (#10590) | 2024-04-07 13:51:34 +08:00 |
| serving | Support lightweight-serving glm-4v-9b (#11994) | 2024-09-05 09:25:08 +08:00 |
| transformers | update NPU pipeline generate (#12182) | 2024-10-11 17:39:20 +08:00 |
| utils | Fix auto importer for LNL release (#12175) | 2024-10-10 15:17:43 +08:00 |
| vllm | Enable vllm multimodal minicpm-v-2-6 (#12074) | 2024-09-13 13:28:35 +08:00 |
| __init__.py | IPEX Duplicate importer V2 (#11310) | 2024-06-19 16:29:19 +08:00 |
| convert_model.py | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| format.sh | Refactor bigdl.llm to ipex_llm (#24) | 2024-03-22 15:41:21 +08:00 |
| llm_patching.py | Upgrade Peft version to 0.10.0 for LLM finetune (#10886) | 2024-05-07 15:09:14 +08:00 |
| models.py | Remove chatglm_C Module to Eliminate LGPL Dependency (#11178) | 2024-05-31 17:03:11 +08:00 |
| optimize.py | support passing None to low_bit in optimize_model (#12121) | 2024-09-26 11:09:35 +08:00 |