ipex-llm / python / llm (at commit ff83fad400)

Latest commit: 15a6205790 by Qiyuan Gong — Fix LoRA tokenizer for Llama and chatglm (#11186)
* Set pad_token to eos_token if it's None. Otherwise, use model config.
2024-06-03 15:35:38 +08:00
Name           Last commit                                                                   Date
dev            LLM: fix input length condition in deepspeed all-in-one benchmark. (#11185)   2024-06-03 10:05:43 +08:00
example        Fix LoRA tokenizer for Llama and chatglm (#11186)                             2024-06-03 15:35:38 +08:00
portable-zip   Fix null pointer dereferences error. (#11125)                                 2024-05-30 16:16:10 +08:00
scripts        add ipex-llm-init.bat for Windows (#11082)                                    2024-05-24 14:26:25 +08:00
src/ipex_llm   Remove chatglm_C Module to Eliminate LGPL Dependency (#11178)                 2024-05-31 17:03:11 +08:00
test           Add Windows GPU unit test (#11050)                                            2024-05-28 13:29:47 +08:00
.gitignore     [LLM] add chatglm pybinding binary file release (#8677)                       2023-08-04 11:45:27 +08:00
setup.py       Remove chatglm_C Module to Eliminate LGPL Dependency (#11178)                 2024-05-31 17:03:11 +08:00
version.txt    Update setup.py and add new actions and add compatible mode (#25)             2024-03-22 15:44:59 +08:00