Amphithéâtre Marguerite de Navarre, Site Marcelin Berthelot
Open to all

Abstract

To understand the impact of neural networks and the questions they raise, this lecture presents a wide range of applications: speech recognition, natural language processing, the prediction of physical phenomena, the neurophysiology of perception, and the modeling and generation of complex data such as images or sounds.

Speech analysis is one of the oldest fields of signal analysis, emerging as early as the 1960s. Recognition algorithms were long based on a spectrogram representation of the signal, followed by a Gaussian mixture model combined with a hidden Markov chain. These algorithms have been considerably improved by deep neural networks, including for the separation of mixed audio signals, known as source separation. A key question is to understand the link between these networks and more conventional recognition algorithms, and to explain the significant improvement they bring.
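
The following minimal sketch illustrates the classical front end described above, assuming NumPy, SciPy, and scikit-learn are available: a spectrogram computed from a synthetic signal, with a Gaussian mixture model fitted to its frames. The two-class setup is purely illustrative, and the hidden Markov temporal model of a full GMM-HMM recognizer is omitted.

```python
# Sketch of the classical "spectrogram + Gaussian mixture" pipeline.
# The signal and class split are synthetic and illustrative only;
# a real recognizer adds a hidden Markov model over phoneme states.
import numpy as np
from scipy.signal import stft
from sklearn.mixture import GaussianMixture

fs = 16000                      # sampling rate (Hz)
t = np.arange(fs) / fs          # one second of signal
# Synthetic "speech": two alternating harmonic segments.
x = np.where(t < 0.5,
             np.sin(2 * np.pi * 220 * t),
             np.sin(2 * np.pi * 440 * t))

# Spectrogram: magnitude of the short-time Fourier transform.
freqs, frame_times, Z = stft(x, fs=fs, nperseg=256)
log_spec = np.log1p(np.abs(Z)).T          # (n_frames, n_freqs) features

# One Gaussian mixture per "class" of spectrogram frames; in a
# GMM-HMM recognizer each hidden state owns such a mixture.
gmm_low = GaussianMixture(n_components=2, covariance_type="diag")
gmm_low.fit(log_spec[frame_times < 0.5])
gmm_high = GaussianMixture(n_components=2, covariance_type="diag")
gmm_high.fit(log_spec[frame_times >= 0.5])

# Frame-wise classification by comparing log-likelihoods.
scores = np.stack([gmm_low.score_samples(log_spec),
                   gmm_high.score_samples(log_spec)], axis=1)
print("predicted class per frame:", scores.argmax(axis=1))
```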

Natural language processing theories have considerably influenced our understanding of the very notion of intelligence. These theories have evolved from structuralism, which attempts to characterize the aggregation of elementary language structures, to the theories of formal grammars introduced by Chomsky. Computer processing of natural language began with statistical models based on Markov chains and with computational models of formal grammars. Here too, however, neural networks have brought considerable improvements for applications such as machine translation or the generation of sentences to answer questions, and they are the models currently used in most commercial applications.
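
As a minimal sketch of the Markov-chain statistical models mentioned above, the following illustrative example estimates a bigram model by counting word transitions in a tiny made-up corpus and then generates a short word sequence. The corpus and token choices are assumptions for illustration, not part of the lecture material.

```python
# Bigram (first-order Markov chain) language model: count transitions
# between consecutive words, then generate by sampling each word
# conditioned only on the previous one.
import random
from collections import defaultdict

corpus = [
    "the network learns a representation of the signal",
    "the network predicts the next word of the sentence",
    "a Markov chain predicts the next word from the current word",
]

# Count bigram transitions, with a start-of-sentence token "<s>".
transitions = defaultdict(list)
for sentence in corpus:
    words = ["<s>"] + sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev].append(nxt)

# Generate by sampling each next word from the observed transitions.
random.seed(0)
word, generated = "<s>", []
for _ in range(10):
    candidates = transitions.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    generated.append(word)

print(" ".join(generated))
```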