Consistent Music and Animation Cooperation by Physical Models

Maria Christou
Laboratoire ICA

Olivier Tache
ACROE

Claude Cadoz
ACROE & Laboratoire ICA

Nicolas Castagné
Laboratoire ICA

Annie Luciani
ACROE & Laboratoire ICA

After decades of separate development, computer-based audio and graphics technologies are increasingly used in conjunction, particularly in industry and the arts, for designing rich virtual scenes with interactive, non-scripted relationships between the audio and graphical components.

One way to integrate sound and graphics in a virtual scene is to have a single process generate both channels. While this approach seems obvious from a theoretical point of view, it is difficult to put into practice, since audio and graphics technologies are not based on directly compatible concepts and representations. Physical modeling has been proposed as a solution to this problem, and real-time multisensory simulations, addressing visual, auditory and haptic perception, have been realized since the 1980s. We will present examples of such simulations based on the CORDIS-ANIMA system [1], whose primary design goal was precisely the simulation of multisensory scenes, and we will present conclusions from their realization and usage.
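The core idea of mass-interaction physical modeling can be illustrated with a minimal sketch: point masses updated by a second-order finite-difference scheme, connected by visco-elastic links. The class names and parameter values below are illustrative, not the actual CORDIS-ANIMA API; the sketch only shows why a single physical network can feed both channels, since the same simulation loop run at audio rate produces a sound signal and at frame rate produces motion.

```python
# Minimal mass-interaction network in the spirit of CORDIS-ANIMA:
# point masses (the <MAT> role) and visco-elastic links (the <LIA> role).
# Illustrative sketch only; names and values are not from the real system.

class Mass:
    def __init__(self, m, x0):
        self.m = m
        self.x = x0        # position at step n
        self.x_prev = x0   # position at step n-1
        self.force = 0.0   # force accumulated for the current step

class SpringDamper:
    def __init__(self, a, b, k, z, rest=0.0):
        self.a, self.b = a, b   # the two masses it connects
        self.k, self.z = k, z   # stiffness and damping coefficients
        self.rest = rest

    def apply(self):
        # Elastic force from elongation, viscous force from relative velocity.
        dx = self.b.x - self.a.x
        dv = (self.b.x - self.b.x_prev) - (self.a.x - self.a.x_prev)
        f = self.k * (dx - self.rest) + self.z * dv
        self.a.force += f
        self.b.force -= f

def step(masses, links, dt):
    for link in links:
        link.apply()
    for m in masses:
        # Explicit second-order scheme: x(n+1) = 2x(n) - x(n-1) + (dt^2/m) F(n)
        x_next = 2.0 * m.x - m.x_prev + (dt * dt / m.m) * m.force
        m.x_prev, m.x = m.x, x_next
        m.force = 0.0

# A plucked oscillator: a huge mass approximates a fixed anchor point.
fixed = Mass(m=1e9, x0=0.0)
mobile = Mass(m=1.0, x0=0.0)
mobile.x_prev = -0.01  # initial velocity: the "pluck"
links = [SpringDamper(fixed, mobile, k=0.1, z=0.001)]

signal = []
for _ in range(1000):
    step([fixed, mobile], links, dt=1.0)
    signal.append(mobile.x)  # at 44.1 kHz this is audio; at 25 Hz, motion
```

Run at one sample rate, `signal` is a damped sinusoid suitable as an audio waveform; sampled far more slowly, the same trajectory can drive the position of a graphical object, which is what makes the single-process approach coherent across modalities.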

The conception of environments for modeling and rendering audio-graphic scenes is a difficult problem currently investigated by many researchers in different domains. No such environment exists today for mass-interaction models. However, high-level, sophisticated environments exist for each modality: GENESIS [2] for the audio part and MIMESIS [3] for the graphics part. Each provides an extensive set of tools that are fully adapted to its own domain and greatly facilitate the creative process. We are currently investigating the connection between these environments with the aim of producing audiovisual pieces while taking advantage of the specific functionality developed on each side. We will present the different technical possibilities and focus on the solution currently under development, which is based on communication through the GMS (Gesture and Motion Signal) file format [4]. We will also present some aspects of the specific creative process entailed by the collaboration between two modalities and between two creators.
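The workflow behind such a file-based connection can be sketched in a few lines: the motion-modeling tool exports a sampled trajectory, and the sound-synthesis tool reads it back to drive one of its parameters. The text layout below is purely hypothetical and is NOT the GMS format, whose actual specification is given in [4]; only the exchange pattern is illustrated, and the stiffness mapping is an invented example.

```python
# Hypothetical round-trip of a sampled motion signal between two tools
# through a shared file. This is NOT the GMS file format; it only
# illustrates the workflow of exchanging gesture/motion data.

def write_motion(path, rate_hz, samples):
    """Export one motion sample per line, preceded by the sample rate."""
    with open(path, "w") as f:
        f.write(f"rate {rate_hz}\n")
        for s in samples:
            f.write(f"{s:.6f}\n")

def read_motion(path):
    """Read the sample rate and the list of motion samples back."""
    with open(path) as f:
        rate = float(f.readline().split()[1])
        samples = [float(line) for line in f if line.strip()]
    return rate, samples

# The visual simulation side exports a trajectory...
write_motion("gesture.txt", 50.0, [0.0, 0.2, 0.5, 0.2, 0.0])

# ...and the sound-synthesis side maps it onto a synthesis parameter
# (here an invented linear mapping onto a stiffness value).
rate, traj = read_motion("gesture.txt")
stiffness = [0.1 + 0.4 * s for s in traj]
```

The point of the pattern is decoupling: each environment keeps its own modeling tools and time base, and only sampled signals cross the boundary, which is precisely what a common interchange format such as GMS provides.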

Short videos produced with physical models by Maria Christou and Olivier Tache: a GENESIS model generates the audio part, controlled by a MIMESIS model that generates the visual movements. Communication between the models is made through the GMS file format.