ipex-llm / python / llm

Latest commit c801c37bc6 by Yishuo Wang, 2024-05-07 17:26:19 +08:00:
optimize phi3 again: use quantize kv if possible (#10953)
| Name | Last commit | Date |
|------|-------------|------|
| dev | LLM: add min_new_tokens to all in one benchmark. (#10911) | 2024-05-06 09:32:59 +08:00 |
| example | Upgrade Peft to 0.10.0 in finetune examples and docker (#10930) | 2024-05-07 15:12:26 +08:00 |
| portable-zip | Fix baichuan-13b issue on portable zip under transformers 4.36 (#10746) | 2024-04-12 16:27:01 -07:00 |
| scripts | improve ipex-llm-init for Linux (#10928) | 2024-05-07 12:55:14 +08:00 |
| src/ipex_llm | optimize phi3 again: use quantize kv if possible (#10953) | 2024-05-07 17:26:19 +08:00 |
| test | Change order of chatglm2-6b and chatglm3-6b in iGPU perf test for more stable performance (#10948) | 2024-05-07 13:48:39 +08:00 |
| .gitignore | [LLM] add chatglm pybinding binary file release (#8677) | 2023-08-04 11:45:27 +08:00 |
| setup.py | Support llama-index install option for upstreaming purposes (#10866) | 2024-04-23 19:08:29 +08:00 |
| version.txt | Update setup.py and add new actions and add compatible mode (#25) | 2024-03-22 15:44:59 +08:00 |
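
The `src/ipex_llm` sources listed above hold the package's low-bit optimizations (e.g. the quantized KV cache applied to phi3 in the latest commit). A minimal usage sketch, assuming the `ipex_llm.transformers` wrapper API described in the project's documentation and a hypothetical placeholder model path:

```python
# Minimal sketch of loading a model with ipex-llm's low-bit optimization.
# Assumptions: ipex-llm is installed (pip install ipex-llm) and exposes the
# ipex_llm.transformers wrapper; the checkpoint name is a placeholder example.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "microsoft/Phi-3-mini-4k-instruct"  # hypothetical example checkpoint

# load_in_4bit=True asks ipex-llm to quantize the weights to INT4 at load time.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

inputs = tokenizer("What is IPEX-LLM?", return_tensors="pt")
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```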