intel/ipex-llm - Accelerate local LLM inference and finetuning on Intel XPUs https://github.com/intel/ipex-llm/
Yanzhang Wang e8244efb6c fix: make the length of weight and gradWeight in quantized conv same length (#1692)
* fix: the differing lengths of weight and gradWeight in quantized conv are
confusing, so make them the same length.

* fix: return grad
2017-10-20 04:33:06 -04:00
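The invariant behind this fix can be sketched in a few lines: if `gradWeight` is allocated with a different length than `weight`, element-wise gradient accumulation no longer lines up. The stub below is a hypothetical illustration, not the repository's actual quantized conv implementation; the class name, constructor, and method are assumptions made for the example.

```python
import numpy as np


class QuantizedConvStub:
    """Hypothetical stand-in for a quantized conv layer (not the real API).

    Invariant from the fix: gradWeight is allocated with the same
    shape (and hence length) as weight.
    """

    def __init__(self, n_output, n_input, kh, kw):
        self.weight = np.zeros((n_output, n_input, kh, kw), dtype=np.float32)
        # Allocate gradWeight with the SAME shape as weight (the fix).
        self.gradWeight = np.zeros_like(self.weight)

    def acc_grad_parameters(self, grad):
        # Element-wise accumulation only works when the shapes match.
        assert self.gradWeight.shape == self.weight.shape
        self.gradWeight += grad
        # Mirrors the second bullet of the commit: return the gradient.
        return self.gradWeight
```

With matching shapes, repeated calls to `acc_grad_parameters` accumulate correctly into `gradWeight`.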