ipex-llm/python/llm
Yang Wang 99b05ba1dc
separate prefill into a process (#11787)
* separate prefill into a process

* using model.share_memory()

* might work

* worked

* use long prompt

* refactor

* cleanup

* fix bug

* clean up

* changeable inter- and intra-process stages

* refactor

* add max output len

* fix npu_model changes that may cause generate to fail

* fix npu_model generate import error

* fix generate forward error

---------

Co-authored-by: sgwhat <ge.song@intel.com>
2024-08-19 17:53:36 +08:00
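The headline change (#11787) moves the compute-heavy prefill phase of LLM inference into its own process, with decoding consuming its results (the commit notes the real code shares weights across processes via `model.share_memory()`). A minimal sketch of that producer/consumer pattern, using only the Python standard library and a toy stand-in for the model — this is illustrative of the general technique, not the ipex-llm implementation:

```python
import multiprocessing as mp

def toy_prefill(prompt_ids):
    # Stand-in for the compute-heavy prefill pass: returns a fake
    # "KV cache" summary plus the last prompt token to seed decoding.
    kv_cache = sum(prompt_ids)          # placeholder for real KV tensors
    return kv_cache, prompt_ids[-1]

def prefill_worker(task_q, result_q):
    # Runs in a separate process. In a real design the model weights
    # would be shared (e.g. via model.share_memory()) rather than
    # reloaded per process.
    for prompt_ids in iter(task_q.get, None):   # None is the stop signal
        result_q.put(toy_prefill(prompt_ids))

def decode(kv_cache, last_token, max_new_tokens=4):
    # Toy decode loop driven by the prefill result; the arithmetic is a
    # fake next-token rule, standing in for the real sampling step.
    out = []
    for _ in range(max_new_tokens):
        last_token = (last_token + kv_cache) % 100
        out.append(last_token)
    return out

if __name__ == "__main__":
    task_q, result_q = mp.Queue(), mp.Queue()
    worker = mp.Process(target=prefill_worker, args=(task_q, result_q))
    worker.start()
    task_q.put([1, 2, 3])        # submit a prompt for prefill
    kv, last = result_q.get()    # main process decodes from the result
    tokens = decode(kv, last)
    task_q.put(None)             # tell the worker to exit
    worker.join()
    print(tokens)                # → [9, 15, 21, 27]
```

Splitting prefill out this way lets the worker start prefilling the next prompt while the main process is still decoding the current one, which is the overlap the PR is after.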
dev add MiniCPM-Llama3-V-2_5 into all-in-one benchmark (#11849) 2024-08-19 17:51:16 +08:00
example separate prefill into a process (#11787) 2024-08-19 17:53:36 +08:00
portable-zip Fix null pointer dereferences error. (#11125) 2024-05-30 16:16:10 +08:00
scripts fix typo in python/llm/scripts/README.md (#11536) 2024-07-09 09:53:14 +08:00
src/ipex_llm separate prefill into a process (#11787) 2024-08-19 17:53:36 +08:00
test Remove gemma-2-9b-it 3k input from igpu-perf (#11834) 2024-08-17 13:10:05 +08:00
tpp OSPDT: add tpp licenses (#11165) 2024-06-06 10:59:06 +08:00
.gitignore [LLM] add chatglm pybinding binary file release (#8677) 2023-08-04 11:45:27 +08:00
setup.py update doc/setup to use onednn gemm for cpp (#11598) 2024-07-18 13:04:38 +08:00
version.txt Update setup.py and add new actions and add compatible mode (#25) 2024-03-22 15:44:59 +08:00