Abstract
Audio and image compression algorithms are often based on an orthogonal transform that produces many small coefficients, which can be approximated by zero. The resulting coefficients are quantized and then represented in binary form with an entropy code. We study the error introduced by these quantizers under the high-resolution quantization assumption.
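As a minimal numerical sketch (not part of the paper; the coefficient distribution and all names are assumptions), a uniform quantizer with a small step delta applied to transform coefficients has a mean squared error close to delta^2 / 12 in the high-resolution regime:

    import numpy as np

    rng = np.random.default_rng(0)
    coeffs = rng.laplace(scale=1.0, size=100_000)  # hypothetical transform coefficients
    delta = 0.05                                   # small step = high-resolution regime

    quantized = delta * np.round(coeffs / delta)   # uniform (mid-tread) quantizer
    mse = np.mean((coeffs - quantized) ** 2)
    print(mse, delta**2 / 12)                      # the two values are close
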
We show that, under an entropy constraint, the quantizer that minimizes the error is a uniform quantizer. For optimal coding in an orthogonal basis, bits must also be allocated among the different coefficients to be coded. For a Euclidean metric, the optimal allocation is obtained with the simplest quantizer: the one that quantizes all coefficients with the same uniform quantization step. To adapt to auditory or visual sensitivity, the quantization steps are chosen proportional to weights that approximate the perceptual metric by a weighted Euclidean distance.
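The following hedged sketch (coefficient values and weights are illustrative assumptions, not taken from the paper) contrasts the two allocations mentioned above: a single uniform step for every coefficient under a Euclidean metric, and steps proportional to perceptual weights under a weighted Euclidean metric:

    import numpy as np

    def quantize(coeffs, steps):
        # Uniform quantization of each coefficient with its own step size.
        return steps * np.round(coeffs / steps)

    coeffs = np.array([2.31, -0.04, 0.87, -1.52])  # hypothetical coefficients in an orthogonal basis
    delta = 0.1

    uniform_steps = np.full_like(coeffs, delta)    # same step for all coefficients (Euclidean metric)
    weights = np.array([1.0, 4.0, 2.0, 1.0])       # hypothetical perceptual weights
    weighted_steps = weights * delta               # steps proportional to the weights

    print(quantize(coeffs, uniform_steps))
    print(quantize(coeffs, weighted_steps))
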