Triggering sounds from discrete air gestures: What movement feature has the best timing?
Luke Dahl, CCRMA, Stanford University
Long Paper
The recent proliferation of affordable motion-sensing technologies (e.g., the Kinect) has led to a surge of new musical interfaces in which a performer moves their body "in the air" without manipulating or contacting a physical object. These interfaces work well when both the movement and the control of sound are smooth and continuous. However, it has proven difficult to heuristically design a system that triggers discrete sounds with the precision required for complex rhythmic performance. In such systems the relationship between a gesture and the timing of the resulting sound often feels wrong to the performer.