Amphithéâtre Marguerite de Navarre, Site Marcelin Berthelot
Open to all

Abstract

Over the last ten years or so, the term "artificial intelligence" has been everywhere in the news, from consumer magazines to start-up founders and policy-makers. Advances in research on neural networks, a decades-old technology, together with increases in computing power and in the volume of available data, have driven a spectacular acceleration in the performance of artificial intelligence systems. At the heart of this revolution, natural language processing (NLP) plays a central role. Long familiar through spelling correction and machine translation, this field of research devoted to the analysis, generation and transformation of textual data has recently made headlines on several occasions, most notably with the arrival of ChatGPT.

To shed some light on these issues, I will briefly outline several milestones in the development of NLP, showing which goals, approaches and obstacles have marked the history of the field, a history as old as that of computer science itself. This will allow us to illustrate how the dominant approaches have evolved over the decades (symbolic, then statistical, and now neural), but also to better understand the specificities of textual data and the difficulties they have long posed, and often still pose today. We will focus on the most recent advances, using ChatGPT as a case study, and examine some of the ethical and other issues they raise. We will also show that these rapidly developing techniques renew the question of the respective roles of research, innovation and engineering, while prompting scientists to ask what these systems can teach us about language and about ourselves.