ayo/ipex-llm
ipex-llm/python (history at commit 30d009bca7)
Commit 30d009bca7 by Cengguang Zhang, 2024-03-05 16:23:50 +08:00
LLM: support quantized kv cache for Mistral in transformers >=4.36.0 (#10326)
* support quantize kv for mistral in transformers 4.36
* update mistral support.
* fix style.
llm/
Latest commit: LLM: support quantized kv cache for Mistral in transformers >=4.36.0 (#10326), 2024-03-05 16:23:50 +08:00