ipex-llm/python/llm
Latest commit: 328b1a1de9 by hxsz1997 (2024-04-23 19:10:09 +08:00)
Fix the not stop issue of llama3 examples (#10860)
* fix not stop issue in GPU/HF-Transformers-AutoModels
* fix not stop issue in GPU/PyTorch-Models/Model/llama3
* fix not stop issue in CPU/HF-Transformers-AutoModels/Model/llama3
* fix not stop issue in CPU/PyTorch-Models/Model/llama3
* update the output in readme
* update format
* add reference
* update prompt format
* update output format in readme
* update example output in readme
Name           Last commit                                                                        Last updated
dev            Update 8192.txt (#10824)                                                           2024-04-23 14:02:09 +08:00
example        Fix the not stop issue of llama3 examples (#10860)                                 2024-04-23 19:10:09 +08:00
portable-zip   Fix baichuan-13b issue on portable zip under transformers 4.36 (#10746)            2024-04-12 16:27:01 -07:00
scripts        Update Env check Script (#10709)                                                   2024-04-10 15:06:00 +08:00
src/ipex_llm   LLM: support llama split tensor for long context in transformers>=4.36. (#10844)   2024-04-23 16:13:25 +08:00
test           Add phi-2 to igpu performance test (#10865)                                        2024-04-23 18:13:14 +08:00
.gitignore     [LLM] add chatglm pybinding binary file release (#8677)                            2023-08-04 11:45:27 +08:00
setup.py       Support llama-index install option for upstreaming purposes (#10866)               2024-04-23 19:08:29 +08:00
version.txt    Update setup.py and add new actions and add compatible mode (#25)                  2024-03-22 15:44:59 +08:00