ipex-llm/python/llm/example/GPU/HF-Transformers-AutoModels/Advanced-Quantizations (at commit 1de13ea578)
History

Latest commit fe8976a00f by Wang, Jian4 (2024-03-15 09:34:18 +08:00):
LLM: Support gguf models use low_bit and fix no json (#10408)
* support others model use low_bit
* update readme
* update to add *.json
AWQ       [LLM] Support llm-awq vicuna-7b-1.5 on arc (#9874)               2024-01-10 14:28:39 +08:00
GGUF      LLM: Support gguf models use low_bit and fix no json (#10408)    2024-03-15 09:34:18 +08:00
GGUF-IQ2  LLM: Update qkv fusion for GGUF-IQ2 (#10271)                     2024-02-29 12:49:53 +08:00
GPTQ      Update llm gpu xpu default related info to PyTorch 2.1 (#9866)   2024-01-09 15:38:47 +08:00