ayo / ipex-llm
ipex-llm / python / llm / example / CPU / applications / streaming-llm / streaming_llm (at commit aa319de5e8)
Guoqiong Song, commit aa319de5e8: Add streaming-llm using llama2 on CPU (#9265)
Enable streaming-llm to let model take infinite inputs, tested on desktop and SPR10
2023-10-27 01:30:39 -07:00
All files below were last modified by commit aa319de5e8, "Add streaming-llm using llama2 on CPU" (#9265), on 2023-10-27 01:30:39 -07:00:

__init__.py
enable_streaming_llm.py
kv_cache.py
modify_falcon.py
modify_gpt_neox.py
modify_llama.py
utils.py
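The kv_cache.py module in this directory implements the rolling KV-cache policy that lets the model take effectively infinite input: keep a few initial "attention sink" tokens plus a recent window, and evict everything in between. A minimal sketch of that eviction rule, using a plain Python list as a stand-in for the per-layer key/value tensors (the class name and sizes here are illustrative, not the exact API of kv_cache.py):

```python
class StartRecentKVCache:
    """Simplified StreamingLLM eviction policy: retain the first
    `start_size` positions (attention sinks) and the most recent
    `recent_size` positions, dropping everything in between.

    The real kv_cache.py slices (key, value) tensor pairs along the
    sequence dimension; a list of token positions suffices to show
    the policy itself.
    """

    def __init__(self, start_size=4, recent_size=512):
        self.start_size = start_size
        self.recent_size = recent_size

    def evict(self, seq):
        """Return the entries of `seq` that survive eviction."""
        # Nothing to evict while the sequence fits in the budget.
        if len(seq) <= self.start_size + self.recent_size:
            return seq
        # Keep the sinks at the front and the recent window at the end.
        return seq[: self.start_size] + seq[-self.recent_size :]


cache = StartRecentKVCache(start_size=2, recent_size=3)
print(cache.evict(list(range(10))))  # [0, 1, 7, 8, 9]
```

Because the cache size is bounded regardless of how many tokens have been generated, memory stays constant during long streaming sessions, which is what makes the infinite-input setting practical on CPU.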