
Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis

Conference contribution, posted on 2020-05-26, 11:45, authored by Michael Zbyszynski, Balandino Di Donato, Atau Tanaka

This paper presents a method for mapping embodied gesture, acquired with electromyography and motion sensing, to a corpus of small sound units, organised by derived timbral features using concatenative synthesis. Gestures and sounds can be associated directly using individual units and static poses, or by using a sound tracing method that leverages our intuitive associations between sound and embodied movement. We propose a method for augmenting the density of the corpus to enable expressive variation on the original gesture-timbre space.
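As a minimal sketch of the pipeline the abstract describes, and not the authors' implementation: a regression model learns a mapping from gesture features (e.g. EMG amplitude envelopes) to coordinates in a timbral feature space, and the nearest sound unit in the corpus is then selected for playback, as in concatenative synthesis. All names, dimensions, and data below are hypothetical placeholders.

# Illustrative sketch only: learn a gesture -> timbre-space mapping with a
# regression model, then pick the nearest corpus unit for playback.
# Corpus data, gesture recordings, and model settings are all assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.spatial import cKDTree

# Corpus: one row per sound unit, columns are derived timbral features
# (e.g. loudness, spectral centroid) produced by an analysis stage.
corpus_features = np.random.rand(500, 2)   # placeholder analysis output
unit_index = cKDTree(corpus_features)      # fast nearest-unit lookup

# Training pairs from a sound-tracing session: gesture feature vectors
# (e.g. 8 EMG channel envelopes) paired with timbral-space targets.
gesture_examples = np.random.rand(200, 8)  # placeholder recordings
timbre_targets = np.random.rand(200, 2)

# Regression model learns the gesture-to-timbre mapping.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(gesture_examples, timbre_targets)

def select_unit(gesture_frame: np.ndarray) -> int:
    """Map one incoming gesture frame to the index of the corpus unit
    whose timbral features are closest to the predicted coordinates."""
    predicted = model.predict(gesture_frame.reshape(1, -1))
    _, nearest = unit_index.query(predicted[0])
    return int(nearest)

# At performance time, each incoming sensor frame selects a unit to play.
live_frame = np.random.rand(8)
print("play unit", select_unit(live_frame))

In this sketch, increasing the density of the corpus (more, finer-grained units) would let small gestural variations resolve to different units, which is the kind of expressive variation the abstract proposes.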


Funding

The research leading to these results has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 789825).

History

Citation

14th International Symposium on Computer Music Multidisciplinary Research (CMMR), Marseille, France, 14-18 October 2019

Source

14th International Symposium on Computer Music Multidisciplinary Research (CMMR)

Version

  • AM (Accepted Manuscript)

Published in

14th International Symposium on Computer Music Multidisciplinary Research

Acceptance date

2019-06-23

Copyright date

2019

Spatial coverage

Marseille, France

Temporal coverage: start date

2019-10-14

Temporal coverage: end date

2019-10-18

Language

en
