Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis

This paper presents a method for mapping embodied gesture, acquired with electromyography and motion sensing, to a corpus of small sound units organised by derived timbral features, using concatenative synthesis. Gestures and sounds can be associated directly, using individual units and static poses, or through a sound tracing method that leverages our intuitive associations between sound and embodied movement. We propose a method for augmenting the density of the sound corpus to enable expressive variation within the original gesture-timbre space.
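The following Python sketch illustrates one way such a pipeline could be assembled: a regression model learns the mapping from gesture features to a timbral feature space, and a nearest-neighbour lookup then selects the corpus unit closest to the predicted timbre. All data shapes, feature choices, and the scikit-learn models here are illustrative assumptions, not the authors' implementation.

# A minimal sketch of a gesture-to-timbre mapping pipeline.
# All names, dimensions, and model choices are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import NearestNeighbors

# Hypothetical training data: gesture feature vectors (e.g. EMG amplitudes
# plus orientation) paired with timbral targets (e.g. spectral centroid,
# loudness, periodicity) collected during a sound-tracing session.
gesture_train = np.random.rand(200, 10)   # 200 frames, 10 gesture features
timbre_train = np.random.rand(200, 3)     # matching 3-D timbral targets

# A small neural network learns the continuous gesture -> timbre mapping.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
model.fit(gesture_train, timbre_train)

# Corpus of small sound units, each described by the same timbral features.
corpus_features = np.random.rand(500, 3)  # 500 units placed in timbre space
units = NearestNeighbors(n_neighbors=1).fit(corpus_features)

def select_unit(gesture_frame):
    # Map one incoming gesture frame to the nearest corpus unit.
    target = model.predict(gesture_frame.reshape(1, -1))
    _, idx = units.kneighbors(target)
    return int(idx[0, 0])  # index of the sound unit to play back

print(select_unit(np.random.rand(10)))

In a concatenative-synthesis setting, the returned index would trigger playback of the corresponding sound unit; a denser corpus in timbre space allows finer-grained expressive variation along the learned mapping.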


License: All Rights Reserved