Amphithéâtre Marguerite de Navarre, Site Marcelin Berthelot
Open to all

Abstract

The last lecture reviews the mathematical principles guiding dimensionality reduction for classification and regression. It presents applications in speech processing, image recognition and functional regression in physics, in particular for computing the quantum energy of molecules. It links convolutional architectures to the wavelet scattering transform, which is obtained by iterating wavelet transforms with modulus operators that suppress the phase.
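
The cascade described above can be sketched in a few lines of Python. The code below is a minimal, illustrative implementation in one dimension: the Morlet-like wavelet, the Gaussian low-pass filter, the dyadic scales and the helper names (morlet, scattering1d) are choices made here for illustration, not the exact filters used in the lecture.

import numpy as np

def morlet(t, scale):
    # Complex Morlet-like wavelet dilated to the given scale (illustrative choice).
    xi0 = 5.0                       # central frequency of the mother wavelet
    u = t / scale
    return np.exp(1j * xi0 * u) * np.exp(-u ** 2 / 2) / np.sqrt(scale)

def scattering1d(x, scales, width=64):
    # Iterate "wavelet convolution + complex modulus" (the modulus suppresses the
    # phase), then average with a low-pass filter to obtain scattering coefficients.
    t = np.arange(-width, width + 1)
    phi = np.exp(-t ** 2 / (2 * max(scales) ** 2))    # Gaussian low-pass filter
    phi /= phi.sum()
    first_order, coeffs = {}, []
    for j, s1 in enumerate(scales):
        u1 = np.abs(np.convolve(x, morlet(t, s1), mode="same"))   # phase removed
        first_order[j] = u1
        coeffs.append(np.convolve(u1, phi, mode="same"))          # first-order coefficients
    for j1, s1 in enumerate(scales):
        for s2 in scales:
            if s2 <= s1:
                continue                                          # iterate toward coarser scales only
            u2 = np.abs(np.convolve(first_order[j1], morlet(t, s2), mode="same"))
            coeffs.append(np.convolve(u2, phi, mode="same"))      # second-order coefficients
    return np.stack(coeffs)

x = np.random.randn(1024)                  # toy signal
S = scattering1d(x, scales=[2, 4, 8, 16])  # dyadic scales
print(S.shape)                             # (number of scattering paths, signal length)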

Scattering networks use wavelet filters designed from a priori information about the signals. The transform is implemented as a deep neural network whose filters are therefore not learned. The scale separation and deformation stability of the wavelet scattering transform are sufficient to obtain state-of-the-art results on classification problems that are not too complex.
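
The typical pipeline is a fixed scattering network followed by a trained linear classifier. The sketch below assumes the Kymatio library (numpy frontend) and scikit-learn; the image size, the number of octaves J and the random toy data are illustrative assumptions, not the lecture's experiments.

import numpy as np
from kymatio.numpy import Scattering2D
from sklearn.linear_model import LogisticRegression

# Fixed scattering network: wavelet filters over J octaves, none of them learned.
scattering = Scattering2D(J=2, shape=(32, 32))

# Toy data standing in for a set of 32x32 images with binary labels.
images = np.random.rand(64, 32, 32).astype(np.float32)
labels = np.random.randint(0, 2, size=64)

# Scattering coefficients separate scales and are stable to deformations;
# only the linear classifier on top of them is trained.
features = np.stack([scattering(img).ravel() for img in images])

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.score(features, labels))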

For the classification of images with complex structures, a representation defined a priori does not seem sufficient to reach the results obtained by trained neural networks. This learning can be captured by sparse representations in dictionaries that must themselves be learned for the classification task. This learning process and the properties of the resulting dictionaries are still poorly understood.
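
As a point of reference, the sketch below uses scikit-learn's MiniBatchDictionaryLearning on synthetic patches to show what sparse coding in a learned dictionary looks like; it covers only the unsupervised step, not the task-driven, classification-aware dictionary learning discussed here, and the patch size and sparsity penalty are illustrative assumptions.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy "image patches": each row is an 8x8 patch flattened to 64 values.
rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 64))

# Learn an overcomplete dictionary; alpha controls the sparsity penalty.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
dico.fit(patches)

# Sparse codes: each patch is approximated by a few dictionary atoms.
codes = dico.transform(patches)
print(codes.shape, np.mean(codes != 0))  # code dimensionality and fraction of nonzeros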
