Authors: Stephen Sinclair, Jean-Loup Florens, Marcelo M. Wanderley
Publication or Conference Title: Proceedings of the 10ème Congrès Français d'Acoustique
Although human gesture is typically considered to occur at frequencies below 30 Hz, interactions with the environment involve an exchange of mechanical energy extending into the frequency range of sound. These non-linear dynamic situations lend themselves to subtle control over sound-producing phenomena. An example of this is the bowed string. We propose that the high-frequency forces experienced during bowing are important factors for accurate bowing control, providing critical information about the vibration state of the string.
To help test this hypothesis, we have developed a simulator which, using force-feedback hardware, enables synthesis of both sound and friction forces at audio rates. This real-time simulator allows experimentation not only with acoustic parameters but also, through the presence of haptic feedback, with how human gestural interaction is affected by model parameters. We have implemented string models based on two major paradigms: modal synthesis and the digital waveguide. These are interfaced with the excitation block through a non-linear force-velocity table. The models may be modified to change their fundamental frequency, spectral response, friction characteristics, and body resonance. Using observations of interaction with modified parameters, we plan to develop a sound basis for comparing different synthesis techniques and their parameters with reference to human gesture.
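The coupling described above, a string model driven through a non-linear force-velocity relation at the bow point, can be sketched in a minimal digital-waveguide form. The following is an illustrative reconstruction, not the authors' simulator: the hyperbolic friction curve, the reflection loss factor, and all parameter values are assumptions chosen for a self-contained example.

```python
import numpy as np

class WaveguideBowedString:
    """Minimal digital-waveguide bowed string (illustrative sketch only).

    Two circular delay lines carry velocity waves on round trips from the
    bow point to the bridge and to the nut; a non-linear force-velocity
    function stands in for the friction lookup table.
    """

    def __init__(self, f0=196.0, fs=44100, bow_pos=0.12):
        n = int(fs / f0)                      # loop length in samples
        n_bridge = max(1, int(n * bow_pos))   # bow-to-bridge distance
        n_nut = max(1, n - n_bridge)          # bow-to-nut distance
        # Each buffer holds one round trip (to the termination and back).
        self.bridge = np.zeros(2 * n_bridge)
        self.nut = np.zeros(2 * n_nut)
        self.bi = 0                           # circular buffer indices
        self.ni = 0
        self.Z = 1.0                          # wave impedance (arbitrary units)
        self.loss = 0.99                      # per-round-trip damping (assumed)

    def friction(self, dv, bow_force=1.0):
        # Hyperbolic friction curve, a common textbook stand-in for a
        # measured force-velocity table: force decays with slip speed.
        return bow_force * np.sign(dv) / (1.0 + 25.0 * abs(dv))

    def tick(self, v_bow=0.2, bow_force=1.0):
        # Waves returning to the bow point, inverted by the terminations.
        v_ib = -self.loss * self.bridge[self.bi]
        v_in = -self.loss * self.nut[self.ni]
        dv = v_bow - (v_ib + v_in)            # relative bow-string velocity
        f = self.friction(dv, bow_force)      # non-linear friction force
        # Outgoing waves: transmitted wave plus bow velocity injection.
        self.bridge[self.bi] = v_in + f / (2.0 * self.Z)
        self.nut[self.ni] = v_ib + f / (2.0 * self.Z)
        self.bi = (self.bi + 1) % len(self.bridge)
        self.ni = (self.ni + 1) % len(self.nut)
        return v_ib                           # bridge-side signal as output
```

In the real-time simulator this inner loop would run at the audio rate, with `v_bow` and `bow_force` supplied each sample by the force-feedback device and the friction force returned to it, so that the player feels the same high-frequency forces that shape the sound.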