ayo/ipex-llm
intel/ipex-llm - Accelerate local LLM inference and finetuning on Intel XPUs
224 commits · 1 branch · 0 tags · 37 MiB
Languages: Python 97.1%, Shell 1.8%, Dockerfile 0.4%, Lua 0.3%, C++ 0.2%
Latest commit: 1ddcaa63fc (Yanzhang Wang, 2017-10-11 03:16:48 -04:00)
feat: quantize a whole graph/modules (#1618)
* feat: quantize a whole graph/modules
* feat: python supports
* fix: delete unusage
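The commit above concerns quantizing an entire graph or set of modules in one step. As a point of reference, the sketch below shows whole-model low-bit quantization using the current ipex-llm transformers-style interface; this is an illustration of the technique, not the API introduced by that (BigDL-era) commit, and the model path is a hypothetical placeholder.

# Minimal sketch: quantize a whole model to low-bit weights with ipex-llm.
# Assumes ipex-llm and transformers are installed; the model path is hypothetical.
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical model id/path

# load_in_4bit=True applies low-bit quantization to every supported
# module in the model as it is loaded, i.e. the whole graph at once.
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("Low-bit quantization on Intel XPUs", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))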