gguf-py : add Numpy MXFP4 de/quantization support (llama/15111)

* gguf-py : add MXFP4 de/quantization support

* ggml-quants : handle zero amax for MXFP4
compilade 2025-08-08 17:48:26 -04:00 committed by Georgi Gerganov
parent 573bf9d128
commit 62566a5436
1 changed file with 1 addition and 1 deletion


@@ -288,7 +288,7 @@ void quantize_row_mxfp4_ref(const float * GGML_RESTRICT x, block_mxfp4 * GGML_RESTRICT y, int64_t k) {
         }
     }
-    const uint8_t e = (uint8_t) (floorf(log2f(amax)) - 2 + 127);
+    const uint8_t e = amax > 0.0f ? (uint8_t) (floorf(log2f(amax)) - 2 + 127) : 0;
     const float d = GGML_E8M0_TO_FP32_HALF(e);