Abstract
In the early days of electronic music, sounds were recorded and then played back at a predefined tempo. This contrasted with the way musicians actually perform: in constant interaction with one another and with the score, communicating by sight as much as by sound, and adjusting their tempo according to several parameters, such as expressive choices and the acoustics of the venue. To give performers more latitude, K.-H. Stockhausen would insert sequences of instrumental freedom between the electronic parts. P. Manoury wanted to go much further, binding the electronics to the intimacy of the music through genuine interaction between performers and synthesized sounds. This became possible with computers, which he saw as fulfilling three functions: synthesizing predetermined sound sequences, generating random sequences, and dynamically perceiving the environment and adapting to it through appropriate sensors. A major advance was score following, which synchronizes the tempo of the electronic music with that of the performers by analyzing the sounds they produce; it was developed by Miller Puckette [1] and first used in Manoury's piece Jupiter. Antescofo, presented below, takes this approach one step further, with a detailed analysis of the tempo variations essential to musical expression.