Abstract
Images are matrices of pixels whose values are proportional to a photon count. This count is a stochastic process, due to the quantum nature of light, so all images are noisy. Digital algorithms have been proposed to improve the signal-to-noise ratio. These denoising algorithms require a model of both the noise and the images. Establishing a noise model is relatively easy; obtaining a good statistical model of the image is much more difficult, because images reflect the world and are therefore just as complex.
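As a minimal illustration (not taken from the seminar) of the standard photon-count noise model, the sketch below simulates a pixel value as a Poisson draw whose mean is proportional to the true intensity; the `exposure` parameter and the flat test patch are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_observation(ideal_image, exposure=100.0):
    """Simulate a photon-count image: each pixel is a Poisson draw
    whose mean is proportional to the ideal (noise-free) intensity."""
    photon_counts = rng.poisson(ideal_image * exposure)
    return photon_counts / exposure  # rescale back to the intensity range

# Example: a flat gray 8x8 patch; the observation fluctuates around 0.5.
ideal = np.full((8, 8), 0.5)
observed = noisy_observation(ideal)
print("true value 0.5, sample mean %.3f, sample std %.3f"
      % (observed.mean(), observed.std()))
```

For large counts the Poisson fluctuations are well approximated by Gaussian noise, which is why a Gaussian noise model is often assumed in practice.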
The seminar shows that, at the level of pixel patches, image statistics are probably close to being understood. Recent algorithms, based on non-parametric models of 8 × 8 pixel patches, achieve very good results, and mathematical and experimental arguments indicate that these results are probably close to the best achievable. This hypothesis is supported by the convergence of the results obtained by all recent techniques. The three main approaches are presented: the Bayesian approach, thresholding of linear operators, and self-similarity models. Most denoising algorithms can be tested on any image on the Image Processing On Line website.
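As a rough sketch of the "thresholding of linear operators" idea on 8 × 8 patches, the code below transforms each patch with an orthonormal 2-D DCT, zeroes the small coefficients attributable to noise, and inverts the transform. It assumes Gaussian noise of known standard deviation `sigma`; the 3σ threshold and the non-overlapping patch grid are common simplifications, not the seminar's prescription (real algorithms aggregate overlapping patches).

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoise_patch(patch, sigma):
    """Hard-threshold the DCT coefficients of one 8x8 patch."""
    coeffs = dctn(patch, norm="ortho")          # orthonormal 2-D DCT
    coeffs[np.abs(coeffs) < 3.0 * sigma] = 0.0  # kill noise-level coefficients
    return idctn(coeffs, norm="ortho")

def denoise_image(noisy, sigma):
    """Apply the patch denoiser on non-overlapping 8x8 patches."""
    out = np.zeros_like(noisy)
    h, w = noisy.shape
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            out[i:i+8, j:j+8] = denoise_patch(noisy[i:i+8, j:j+8], sigma)
    return out
```

The Bayesian and self-similarity approaches differ in how they model the patch space, but they operate on the same 8 × 8 patches and, as the seminar notes, converge to comparable results.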