Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis
Michael Zbyszynski, Balandino Di Donato, Atau Tanaka
2381/12361535.v1
https://figshare.le.ac.uk/articles/conference_contribution/Gesture-Timbre_Space_Multidimensional_Feature_Mapping_Using_Machine_Learning_and_Concatenative_Synthesis/12361535

This paper presents a method for mapping embodied gesture, acquired with electromyography and motion sensing, to a corpus of small sound units, organised by derived timbral features using concatenative synthesis. Gestures and sounds can be associated directly using individual units and static poses, or by using a sound tracing method that leverages our intuitive associations between sound and embodied movement. We propose a method for augmenting corporal density to enable expressive variation on the original gesture-timbre space.

2020-05-26 11:45:03
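
A minimal sketch of the general pipeline the abstract describes, not the authors' implementation: a regression model learns a mapping from gesture features (e.g. EMG amplitudes and orientation) to a point in a timbral descriptor space, and the nearest unit in a corpus organised by those descriptors is selected for concatenative playback. The feature dimensions, descriptor choices, and use of scikit-learn here are assumptions made purely for illustration.

```python
# Illustrative sketch only: gesture -> timbral target regression followed by
# nearest-unit selection in a descriptor-indexed corpus. All data below is
# placeholder; a real system would use recorded sensor frames and corpus analysis.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import NearestNeighbors

# Hypothetical training pairs: a gesture-feature vector (e.g. 8 EMG channels +
# 3 orientation values) and the timbral descriptors (e.g. loudness, spectral
# centroid, periodicity) of the sound unit associated with that pose/tracing.
gesture_features = np.random.rand(200, 11)   # placeholder gesture recordings
timbral_targets = np.random.rand(200, 3)     # placeholder unit descriptors

# Regression model learns the gesture-to-timbre mapping.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
model.fit(gesture_features, timbral_targets)

# Corpus of small sound units, indexed by the same timbral descriptors.
corpus_descriptors = np.random.rand(500, 3)  # placeholder corpus analysis
index = NearestNeighbors(n_neighbors=1).fit(corpus_descriptors)

def select_unit(live_gesture: np.ndarray) -> int:
    """Map one incoming sensor frame to the index of the closest corpus unit."""
    target = model.predict(live_gesture.reshape(1, -1))
    _, unit_idx = index.kneighbors(target)
    return int(unit_idx[0, 0])

# Example: an incoming gesture frame selects a unit for playback.
print(select_unit(np.random.rand(11)))
```

In such a scheme, adding more units to the corpus (or interpolating between existing ones) densifies the timbral space that gestures can reach, which is one plausible reading of the expressive-variation goal stated above.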