intel/ipex-llm - Accelerate local LLM inference and finetuning on Intel XPUs
Yanzhang Wang 8bf29dc119 feat: serialization for quantized modules (#1613)
* feat: serialization for quantized modules

All quantized modules extend QuantModule, which holds an empty Tensor
for the gradient, and each module mixes in QuantSerializer for protobuf
support.

* refactor: serialization api changes
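
The class layout described above can be sketched as follows. This is a minimal illustration, not the actual BigDL code: the names QuantModule and QuantSerializer and the empty-gradient detail come from the commit message, while all method signatures and the QuantLinear class are assumptions for demonstration.

```scala
// Hypothetical mixin layout (sketch only; signatures are assumptions).
trait QuantSerializer {
  // Stand-in for the protobuf-backed serialization the commit adds.
  def serializeWeights: Array[Byte]
}

abstract class QuantModule {
  // Quantized weights are not trained, so the gradient tensor stays empty.
  val gradWeight: Array[Float] = Array.empty[Float]
}

// Example quantized layer: extends the base class and mixes in the serializer.
class QuantLinear(weights: Array[Byte]) extends QuantModule with QuantSerializer {
  def serializeWeights: Array[Byte] = weights
}
```

Keeping serialization in a separate mixin trait lets every quantized module share one protobuf code path without touching the module class hierarchy itself.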
2017-10-10 04:21:36 -04:00