Amphithéâtre Marguerite de Navarre, Site Marcelin Berthelot
Open to all

Abstract

The lecture began with a reminder of the multi-scale architectures of deep neural networks, their many applications, and the questions raised in trying to understand how they work. The aim is to establish the link between architectures, learning algorithms and generalization properties, by highlighting the organizational principles that make it possible to circumvent the curse of dimensionality through a priori information about the problem. Three types of properties can be distinguished: the separability of components, whether hierarchical or spatial; the existence of symmetries; and the use of parsimonious (sparse) representations. Parsimony enables efficient coding over a small number of irreducible structures that can be learned. These properties draw on many branches of mathematics, including statistics, probability, optimization, harmonic analysis and geometry. The lecture focused on applications to image and sound classification, with links to neurophysiological models of vision and hearing.
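The link between parsimony and efficient coding can be made concrete with a standard sparse-coding computation. The sketch below is a hypothetical illustration, not the lecture's own construction: it assumes a toy random dictionary and a signal built from only five atoms, then recovers a sparse code via ISTA (iterative soft-thresholding); all names (ista, soft_threshold, lam) are illustrative choices.

```python
# Minimal sparse-coding sketch via ISTA (iterative soft-thresholding).
# Illustrative assumptions: a fixed random dictionary D and a synthetic
# signal x; in practice the dictionary itself would be learned.
import numpy as np

def soft_threshold(z, t):
    """Shrink each coefficient toward zero by t (proximal map of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(D, x, lam=0.1, n_iter=200):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 by proximal gradient descent."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)             # gradient of the quadratic data-fit term
        a = soft_threshold(a - step * grad, lam * step)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))           # overcomplete dictionary: 256 atoms in dimension 64
D /= np.linalg.norm(D, axis=0)               # normalize atoms to unit norm
a_true = np.zeros(256)
a_true[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
x = D @ a_true                               # signal composed of only 5 atoms

a_hat = ista(D, x)
print("nonzero coefficients recovered:", int(np.sum(np.abs(a_hat) > 1e-3)))
print("reconstruction error:", np.linalg.norm(D @ a_hat - x))
```

Here the five active atoms play the role of the "small number of irreducible structures" named in the abstract: although the signal lives in a 64-dimensional space with 256 candidate atoms, its efficient code has only a handful of nonzero coefficients.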