Gesture Bending Suite (technical overview) [for now mostly a placeholder]:
0. Primary System inputs
1. Feature extraction and Gesture Following (not tracking)
3. Sonification (selected stable and Jamoma-ized TML instruments):
Detailed System Description, screenshots and Diagrams [Placeholder]
Tutorial Videos [Placeholder]
Examples and videos [Placeholder]
FUSION
0. Primary System inputs
- Audio (structural vibration)
- Audio (sound)
- Video (motion)
- Sensor data
- Operator/Performer/Designer (OSC, controllers, UI)
1. Feature extraction and Gesture Following (not tracking)
- Haptic-acoustic transcoding
- Transient and acoustic event detection
- MFCC extraction and "cooking" of many other audio descriptors
- Motion Analysis (on video streams and/or sensor data; two of these descriptors are sketched below)
- Continuous:
- QoM (quantity of motion)
- Large / Field
- Small / Local Nuance
- Velocity
- Contraction
- Centroid
- + running XY = local HS (Horn-Schunck) optical flow
- Event Detection (useful for formal ornamentation):
- Stillness
- Presence
- Action following stillness (+ quality of action)
- Schmitt Triggers (+ Qualitative values)
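As a concrete illustration of two items above, here is a minimal Python sketch, not the actual TML patch: QoM is computed by frame differencing (grayscale 8-bit frames as numpy arrays are assumed) and a Schmitt trigger turns the continuous QoM stream into stillness/action events; the class names and thresholds are illustrative.

```python
import numpy as np

class QoMExtractor:
    """QoM = mean absolute difference between consecutive 8-bit frames."""
    def __init__(self):
        self.prev = None

    def process(self, frame: np.ndarray) -> float:
        gray = frame.astype(np.float32)
        if self.prev is None:
            self.prev = gray
            return 0.0
        qom = float(np.mean(np.abs(gray - self.prev))) / 255.0  # 0..1
        self.prev = gray
        return qom

class SchmittTrigger:
    """Hysteresis thresholding: 'action' above hi, 'stillness' below lo."""
    def __init__(self, lo=0.02, hi=0.08):   # illustrative thresholds
        self.lo, self.hi = lo, hi
        self.active = False

    def process(self, value: float):
        if not self.active and value > self.hi:
            self.active = True
            return "action"        # e.g. action following stillness
        if self.active and value < self.lo:
            self.active = False
            return "stillness"
        return None                # no event this frame
```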
- All extracted descriptors are broadcast to a smart OSC parsing module (see the sketch below), which:
- dynamically discovers available "return" parameters and auto-populates easy-to-understand data-parsing menus
- lets you locally choose, parse, scale, and map the desired descriptors
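A minimal sketch of that parsing module, using the python-osc package (the addresses, ranges, and grain-density target are hypothetical): the default handler discovers every incoming address on the fly, so `discovered` could drive auto-populated menus, while `map()` attaches a local scale-and-map rule per descriptor.

```python
from pythonosc import dispatcher, osc_server

class SmartOSCParser:
    def __init__(self):
        self.discovered = {}   # address -> last args (feeds the parsing menus)
        self.mappings = {}     # address -> (in_lo, in_hi, out_lo, out_hi, cb)
        self.dispatcher = dispatcher.Dispatcher()
        self.dispatcher.set_default_handler(self._on_message)

    def _on_message(self, address, *args):
        self.discovered[address] = args              # dynamic discovery
        if address in self.mappings and args:
            in_lo, in_hi, out_lo, out_hi, cb = self.mappings[address]
            x = (float(args[0]) - in_lo) / (in_hi - in_lo)   # normalize
            x = max(0.0, min(1.0, x))                        # clip
            cb(out_lo + x * (out_hi - out_lo))               # scale + map

    def map(self, address, in_range, out_range, callback):
        """Locally choose, scale, and map one descriptor to a parameter."""
        self.mappings[address] = (*in_range, *out_range, callback)

# Usage: route a hypothetical /qom descriptor to a grain-density parameter.
parser = SmartOSCParser()
parser.map("/qom", (0.0, 1.0), (5.0, 200.0),
           lambda v: print("grain density:", v))
server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 9000), parser.dispatcher)
# server.serve_forever()  # blocking; run in a thread in practice
```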
3. Sonification (selected stable and Jamoma-ized TML instruments):
- NOTE: All compositionally significant parameters within the sonification modules have built-in dynamic OSC mappers that can discover available parameters and use them to modulate their immediate state.
- Audio Mosaicing
- using cataRT + descriptor analysis on input audio + descriptor-space correlation (the selection step is sketched after the examples)
- Audio Descriptor Analysis, Descriptor-Driven Concatenative Synthesis, Orchestration vs. Mosaicing
- Examples:
- tableTap experiments (chopping, dicing, etc.)
- Cambridge / TML tables
- Experiments at the Movement Workshop (Floor)
- Music: Lori Freedman, Vinny Golia, Terror
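The selection step behind mosaicing reduces to a nearest-neighbor lookup in descriptor space; here is a minimal sketch (the feature choice, z-score normalization, and toy corpus are illustrative stand-ins for what cataRT does):

```python
import numpy as np

class MosaicSelector:
    """Pick the corpus grain whose descriptors best match the live input."""
    def __init__(self, corpus_descriptors: np.ndarray, grains: list):
        # corpus_descriptors: (n_grains, n_features); grains: audio segments
        self.mu = corpus_descriptors.mean(axis=0)
        self.sigma = corpus_descriptors.std(axis=0) + 1e-9
        self.corpus = (corpus_descriptors - self.mu) / self.sigma  # z-score
        self.grains = grains

    def select(self, target: np.ndarray) -> np.ndarray:
        t = (target - self.mu) / self.sigma
        dists = np.linalg.norm(self.corpus - t, axis=1)
        return self.grains[int(np.argmin(dists))]

# Toy corpus of 3 grains described by [loudness, spectral centroid]:
descs = np.array([[0.1, 500.0], [0.5, 2000.0], [0.9, 6000.0]])
grains = [np.zeros(512), np.ones(512), -np.ones(512)]  # placeholder audio
selector = MosaicSelector(descs, grains)
grain = selector.select(np.array([0.6, 2500.0]))       # matches grain 1
```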
- Physical Modeling
- Driven simultaneously by direct audio signals for excitation/force and by audio descriptors for ornamentation and variance (see the sketch after the examples).
- Examples:
- Eggplant
- Tap Dancing (Movement workshop)
- Augmented Harp string (ozone)
- Table tap (Grinding)
- Constellation 2.0 (architecture at FNC2010)
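A minimal sketch of the audio-driven idea, with a Karplus-Strong-style string as a hedged stand-in for the actual models (constants and descriptor names are illustrative): the raw input block is injected sample-by-sample as the excitation force, while a control-rate descriptor sets the damping.

```python
import numpy as np

class DrivenString:
    """Karplus-Strong-like string excited directly by a live audio signal."""
    def __init__(self, sr=48000, freq=220.0):
        self.delay = np.zeros(int(sr / freq))  # delay length sets the pitch
        self.idx = 0
        self.damping = 0.995                   # descriptor-controlled

    def set_damping(self, value: float):
        """Control-rate input, e.g. mapped from QoM or brightness (0..1)."""
        self.damping = 0.90 + 0.099 * max(0.0, min(1.0, value))

    def process(self, excitation: np.ndarray) -> np.ndarray:
        out = np.empty_like(excitation)
        d, n = self.delay, len(self.delay)
        for i, x in enumerate(excitation):
            nxt = d[(self.idx + 1) % n]
            y = self.damping * 0.5 * (d[self.idx] + nxt)  # lowpass feedback
            d[self.idx] = y + x        # inject the live signal as force
            out[i] = y
            self.idx = (self.idx + 1) % n
        return out

# Usage: excite with a (stand-in) piezo block, brighten with a motion value.
string = DrivenString()
string.set_damping(0.8)                              # e.g. from /qom
block = string.process(np.random.randn(256) * 0.01)  # stand-in piezo input
```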
- Formant and Voice Synthesis (Basil, Plant)
- Multi Channel Overlap Granular Processing
- Modulating many parameters at once leads to highly complex sound events that evolve and morph in sync with movement, according to a dynamically interpolatable interaction space (sketched after the example mappings below)
- Example of some mappings:
- Wall of Sound
- Ice-cracking
- Breathing candle
- Particle Sound
- Ambient and subtle (responsive acoustic ecology)
- Sonified Breakdancing, Cambridge and Black Box Experiments, Roger Sinha, etc.
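A minimal sketch of such an interaction space (preset names, positions, and parameter values are hypothetical): granular presets sit at 2D positions, and a moving cursor, e.g. a tracked centroid, yields an inverse-distance-weighted blend of every preset, so many parameters morph at once.

```python
import numpy as np

presets = {
    # (x, y) position -> a full granular parameter set
    (0.0, 0.0): dict(size=0.010, density=200.0, pitch=2.0, spread=0.1),  # "ice-cracking"
    (1.0, 0.0): dict(size=0.250, density=8.0,   pitch=0.5, spread=1.0),  # "wall of sound"
    (0.5, 1.0): dict(size=0.080, density=30.0,  pitch=1.0, spread=0.4),  # "breathing"
}

def interpolate(cursor, presets, power=2.0):
    """Inverse-distance-weighted blend of every preset toward the cursor."""
    weights = [(1.0 / (np.hypot(cursor[0] - px, cursor[1] - py) + 1e-6)) ** power
               for (px, py) in presets]
    total = sum(weights)
    blended = {}
    for w, params in zip(weights, presets.values()):
        for k, v in params.items():
            blended[k] = blended.get(k, 0.0) + (w / total) * v
    return blended

print(interpolate((0.4, 0.3), presets))  # a morph between all three textures
```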
- Sound > Sound Mappings and real-time audio processing
- Augmented Room Acoustics (sketched after the examples)
- Granular Processing and other DSP on real-time audio signals
- Examples
- Pneuma (Champ Libre)
- Mixology Festival at Roulette New York
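One simple way to modulate apparent acoustics, offered here as a hedged stand-in for the actual processing: a bank of feedback comb filters whose decay is steered live by a descriptor. The delay lengths are illustrative, and a production version would add allpass diffusion.

```python
import numpy as np

class CombBank:
    """Parallel feedback combs; feedback amount sets the apparent room decay."""
    def __init__(self, sr=48000, delays_ms=(29.7, 37.1, 41.1, 43.7)):
        self.lines = [np.zeros(int(sr * d / 1000.0)) for d in delays_ms]
        self.pos = [0] * len(self.lines)
        self.feedback = 0.7

    def set_decay(self, amount: float):
        """0 = dry-ish room, 1 = long wash; mapped from a live descriptor."""
        self.feedback = 0.5 + 0.48 * max(0.0, min(1.0, amount))

    def process(self, x: np.ndarray) -> np.ndarray:
        out = np.zeros_like(x)
        for li, line in enumerate(self.lines):
            p, n = self.pos[li], len(line)
            for i, s in enumerate(x):
                y = line[p]                  # read the delayed sample
                line[p] = s + self.feedback * y
                out[i] += y / len(self.lines)
                p = (p + 1) % n
            self.pos[li] = p
        return out
```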
- Spatialization is used for several reasons (selected):
- To separate sound from its source and to enhance auditory illusions
- To help induce sensations such as immersion, scatter, floating, flying, hyperactivity, etc.
- To finely control the tactile dimension of sound
- To modulate the apparent acoustics of a space
- To poetically correlate, in real time, a sound's spectromorphology with its movement in space
- Current research: structure-borne sound: sound radiation using structural vibrations
- cross-modal and tactile sound
- integration of structure-borne sound with classic room-effect synthesis, spherical harmonics, and other spatialization practices (e.g., in ED, where sound moved from above and traveled underneath the sand); a minimal encoder is sketched below
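A minimal sketch of the spherical-harmonics side: first-order ambisonic (B-format) encoding of a mono source at a given azimuth/elevation, one standard way to move a sound from above the listener to below it. ACN channel order and SN3D weights are assumed; the trajectory is illustrative.

```python
import numpy as np

def encode_foa(mono: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
    """First-order ambisonic (B-format) encode, ACN order, SN3D weights."""
    w = mono                                        # omnidirectional
    x = mono * np.cos(elevation) * np.cos(azimuth)  # front-back
    y = mono * np.cos(elevation) * np.sin(azimuth)  # left-right
    z = mono * np.sin(elevation)                    # up-down
    return np.stack([w, y, z, x])                   # ACN channel order

# Usage: a tone encoded straight overhead (elevation +90 degrees).
t = np.linspace(0, 1, 48000, endpoint=False)
sig = 0.5 * np.sin(2 * np.pi * 440 * t)
bformat = encode_foa(sig, azimuth=0.0, elevation=np.pi / 2)
```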
Detailed System Description, screenshots and Diagrams [Placeholder]
Tutorial Videos [Placeholder]
Examples and videos [Placeholder]
- Projects
- Experiments
- Applications
• The adaptability, multidimensionality, and reconfigurability of this system make it particularly useful for working with dancers and performers where immediate creative ideas and interaction scenarios need to be continuously and quickly prototyped, explored, expanded upon, refined, rehearsed and performed. In user-guided machine-dancer improvisations, this chain of creative processes can overlap and evolve in real-time, helping form macro-structures and maximizing artistic expressivity.
FUSION
Gestural sound control is most often based on mapping gesture parameters to sound-synthesis parameters. In a multidimensional setup, the main difficulty lies in choosing appropriate mapping strategies between low- and high-level parameters and across different temporal scales. The Gesture Bending Suite fuses many such levels (a toy combination of these timescales is sketched after the list):
- Ultra-low-level instantaneous mappings (coupling): i.e., audio-driven physical models (pre-segmentation, sample-accurate, audio-rate)
- The tone, texture, and intensity of the synthesized sound are influenced by the actual sound or haptic interaction with matter, as picked up by the piezo.
- However, additional descriptors extracted from the audio and cameras further influence other synthesis parameters such as damping, pitch, and harmonicity. (For example, the same gesture performed at different places on the surface would produce the sound at a different pitch.)
- Mid-level short-term mappings: descriptor > synthesis (explained above)
- Mid-level short-term coupling: automatic selection, or Audio Mosaicing (analysis > descriptors > correlation > synthesis)
- High-level short-term mappings: throw, hit, location, contraction
- High-level meso-scale mappings: segmentation, spin, scatter, harmonicity, gesture recognition
- High-level long-term (temporal) mappings: entrainment, adaptation, mood changes
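To make the fusion concrete, a toy sketch combining three of these timescales (the stub synth and constants are hypothetical; any instrument exposing set_damping/process, such as the string sketched earlier, would do): the audio block itself is the low-level excitation, a per-block descriptor is the mid-level mapping, and a slow running average acts as long-term mood.

```python
import numpy as np

class StubSynth:
    """Stand-in for a real instrument (e.g. the DrivenString sketched above)."""
    def __init__(self):
        self.damping = 0.9
    def set_damping(self, v: float):
        self.damping = v
    def process(self, block: np.ndarray) -> np.ndarray:
        return block * self.damping            # placeholder DSP

class FusedMapper:
    def __init__(self, synth):
        self.synth = synth
        self.mood = 0.5                        # long-term state (adaptation)

    def process_block(self, audio: np.ndarray, qom: float) -> np.ndarray:
        self.mood = 0.999 * self.mood + 0.001 * qom           # long-term: slow average
        self.synth.set_damping(0.5 * qom + 0.5 * self.mood)   # mid-level mapping
        return self.synth.process(audio)       # low-level: audio as excitation

mapper = FusedMapper(StubSynth())
out = mapper.process_block(np.random.randn(256) * 0.01, qom=0.3)
```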