Abstract
Arshia Cont, a researcher at IRCAM [1], is developing the Antescofo system for the performance of mixed human/electronic pieces, which embodies the idea that the electronics should be finely directed in real time by the human performers. After presenting the computational issues raised by this kind of human/computer interaction, he explained the core of Antescofo: a set of algorithms that track the score adaptively and anticipatively, understanding in real time how human musicians organize and vary their tempo and phrasing to give the work its intended musicality, and then finely adapting the electronic part to this dynamic tempo. These algorithms draw on probabilistic signal processing and on the neuroscience of sound and time perception, with a hierarchical model of the various times involved, from the logical time of the score to the continuous or discrete micro- and macro-time of the performance, via the computer's own time.

The electronic sounds triggered by Antescofo are computed by the industry-standard Max/MSP system, using either classical signal processing or physical modeling of instruments. Their triggering is controlled by an algorithmic language directly based on our ideas about synchronous languages, which makes it possible to associate complex electronic musical phrases with temporal events (notes, durations) and to control them. One difficult point is recovering from the inevitable errors of the instrumentalists or of the tracking software; this is essential because, whatever happens, the show must go on. All of this was illustrated live by examples and by a dialogue between the clarinettist Jérôme Comte (Ensemble Intercontemporain) and the computer.
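To make the coupling between score events and electronic actions more concrete, here is a minimal sketch of the general idea: actions are attached to score positions and expressed in musical time (beats), then converted to physical time using the tempo estimated by the score follower, so that pending actions stretch or shrink with the performer. This is an illustrative Python sketch only; the names (`Event`, `Action`, `Scheduler`, `on_event_detected`) are hypothetical and do not correspond to the actual Antescofo language or API.

```python
# Illustrative sketch only: not the Antescofo language or its API.
# It mimics the idea of attaching electronic actions to score events
# and rescheduling them when the detected tempo changes.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Action:
    delay_beats: float          # delay after the triggering event, in beats
    run: Callable[[], None]     # electronic action (e.g. a message to Max/MSP)


@dataclass
class Event:
    beat: float                 # position of the expected note in the score
    actions: List[Action] = field(default_factory=list)


class Scheduler:
    """Keeps musical time (beats) and physical time (seconds) separate,
    so pending actions follow the performer's current tempo."""

    def __init__(self, bpm: float):
        self.bpm = bpm

    def beats_to_seconds(self, beats: float) -> float:
        return beats * 60.0 / self.bpm

    def on_event_detected(self, event: Event, estimated_bpm: float) -> None:
        # The score follower reports a new tempo estimate with each detection.
        self.bpm = estimated_bpm
        for action in event.actions:
            delay_s = self.beats_to_seconds(action.delay_beats)
            print(f"schedule in {delay_s:.3f}s:", end=" ")
            action.run()


# Usage: an electronic chord triggered half a beat after the detected note.
score = [Event(beat=0.0, actions=[Action(0.5, lambda: print("play chord"))])]
Scheduler(bpm=120).on_event_detected(score[0], estimated_bpm=96)
```

The point of the sketch is the separation of concerns: the score follower owns the tempo estimate, while the action language only speaks in musical time, which is roughly the relationship the talk described between the tracking algorithms and the synchronous action language.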
Antescofo is used in many contemporary music concerts around the world (including all recent works by P. Manoury and pieces by other musicians such as M. Stroppa) and is much appreciated by composers. It will be an ideal laboratory for future work on algorithmic scores, of which the current Antescofo language can be seen as a first embryo.