Salle 5, Site Marcelin Berthelot
Open to all

Abstract

To avoid the curse of dimensionality, we can try to reduce the dimension of the data when it lies on a subset whose dimension is smaller than the dimension d of the ambient space. Data reduction is at the heart of signal processing, with numerous applications in compression, denoising and inverse problems. The aim is to approximate the signal with a minimum number of variables.

We study the approximation of signals x decomposed in an orthonormal basis. A linear approximation reconstructs x from the first M vectors of the basis; the approximation error is governed by the decay rate of the coefficients of x in that basis. A non-linear approximation reduces this error by selecting the M coefficients of x with the largest amplitudes. The selection of these coefficients is a non-linear function of x, and the approximation error then depends on the decay of the sorted coefficient amplitudes.
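As an illustration (not part of the lecture itself), the two approximation schemes can be sketched in a few lines of numpy. The orthonormal DCT-II basis and the piecewise-constant test signal are illustrative assumptions; any orthonormal basis would do:

```python
import numpy as np

def dct_basis(N):
    # Orthonormal DCT-II basis: row k is the k-th basis vector.
    n = np.arange(N)
    B = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    B[0, :] /= np.sqrt(2.0)
    return B

N, M = 256, 32
B = dct_basis(N)
t = np.linspace(0.0, 1.0, N)
x = np.sign(np.sin(2 * np.pi * 3 * t))   # piecewise-constant test signal
c = B @ x                                # coefficients of x in the basis

# Linear approximation: keep the first M coefficients.
c_lin = np.zeros(N)
c_lin[:M] = c[:M]

# Non-linear approximation: keep the M coefficients of largest amplitude
# (the selected indices depend on x, hence the non-linearity).
idx = np.argsort(np.abs(c))[-M:]
c_nl = np.zeros(N)
c_nl[idx] = c[idx]

err_lin = np.linalg.norm(x - B.T @ c_lin)
err_nl = np.linalg.norm(x - B.T @ c_nl)
# By construction err_nl <= err_lin: the M largest coefficients capture
# at least as much energy as the first M.
```

Since the basis is orthonormal, the squared approximation error is exactly the energy of the discarded coefficients, which is why keeping the largest ones is optimal for a given M.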

The lecture introduces an application to denoising, where the signal is contaminated by additive noise assumed to be white and Gaussian. A linear noise-reduction algorithm projects the noisy signal onto a space of dimension M, chosen to approximate the signal as closely as possible while eliminating a large proportion of the noise. The optimal dimension M results from balancing the bias and variance errors: the bias is the approximation error of x, while the variance measures the noise energy not eliminated by the projection. A non-linear approach is also studied, obtained by thresholding the coefficients of the noisy signal in an orthonormal basis.
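The thresholding approach can be sketched as follows, a minimal numpy example assuming an orthonormal DCT-II basis, a signal that is sparse in that basis, and the classical universal threshold sigma*sqrt(2 log N) of Donoho and Johnstone (the specific signal and noise level are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
n = np.arange(N)
B = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
B[0, :] /= np.sqrt(2.0)                  # orthonormal DCT-II basis

# Illustrative signal, sparse in the basis: three large coefficients.
c_true = np.zeros(N)
c_true[[2, 7, 19]] = [8.0, -6.0, 4.0]
x = B.T @ c_true

sigma = 0.5
y = x + sigma * rng.standard_normal(N)   # additive white Gaussian noise

# In an orthonormal basis the noise coefficients remain i.i.d. N(0, sigma^2),
# so almost all of them fall below T = sigma * sqrt(2 log N), while the few
# large signal coefficients survive the threshold.
c = B @ y
T = sigma * np.sqrt(2.0 * np.log(N))
x_hat = B.T @ (c * (np.abs(c) > T))      # hard thresholding + reconstruction

mse_noisy = np.mean((y - x) ** 2)
mse_denoised = np.mean((x_hat - x) ** 2)
```

The same bias–variance trade-off appears here: raising the threshold removes more noise (lower variance) but also discards small signal coefficients (higher bias).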