Sound and Interaction

Migration and Morphing of Sounds in an Interactive Installation by Ioannis Zannos/ The Metapiano: Composing and Improvising Through Sculpture by Richard Hoadley/ VIVO (Video Interactive VST Orchestra) and the aesthetics of interactivity in the age of care-less capitalism by Fabio Paolizzo and Ruth Cain/ Geodesic Sound Helmets by Cara-Ann Simpson, Eva Cheng, Ben Landau and James Laird/ Capturing gestures for expressive sound control by Todor Todoroff and Cécile Picard-Limpens
Dates: 
Tuesday, 20 September, 2011 - 17:00 - 18:40
Chair Person: 
Geraint Wiggins
Presenters: 
Ioannis Zannos
Richard Hoadley
Fabio Paolizzo
Cara-Ann Simpson
Todor Todoroff
Ruth Cain
Eva Cheng
Cécile Picard-Limpens
James Laird
Ben Landau

Migration and Morphing of Sounds in an Interactive Installation

by Ioannis Zannos

This paper describes the techniques used in the realization of a performance and installation which explores the gradual "osmosis" between three different sound worlds: the song of swallows recorded in summer above the town of Corfu, the song of Weddell seals recorded in Antarctica (from the Macaulay Library of marine biology), and the emissions of short-wave "Numbers Stations" recorded by amateurs all over the world (from recordings published by Irdial at http://www.irdial.com/conet.htm). The sound recordings are segmented using tools available in the SuperCollider sound processing environment, and the individual parts are then analysed by several feature extraction algorithms based on FFT data extracted by tools such as SPEAR and LORIS (http://www.klingbeil.com/spear/ and http://www.hakenaudio.com/Loris/). Boid swarming algorithms are used to model the movement of samples in a parametric space, which is then mapped into a simulated sound space with multichannel audio. The sounds are transformed in real time by exchanging spectral characteristics through FFT-based processing methods, taking into account the previously extracted spectral characteristics. Several examples are given of the various ways in which the sounds are transformed. The work also includes graphics synthesized in real time based on physical models of particles and fluid modeling. The paper describes the different ways in which the sound transformation techniques are combined with the graphics, and how they are controlled in live performance as well as in an interactive installation. These techniques include various multi-touch surfaces ranging from the iPhone and iPad to Reactable-based technologies, as well as open-handed gesture recognition from video input. 
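
As an illustration of the swarming technique mentioned above, the following Python sketch (not the authors' SuperCollider implementation; the parameter names, ranges and mappings are assumptions for illustration) moves a small flock of sample agents through a two-dimensional parameter space using the classic boid rules of cohesion, alignment and separation, and maps each agent's position to an azimuth and a playback rate.

```python
import numpy as np

class Boid:
    def __init__(self, rng):
        self.pos = rng.uniform(-1.0, 1.0, 2)    # position in a 2-D parameter space
        self.vel = rng.uniform(-0.05, 0.05, 2)

def step(boids, cohesion=0.01, alignment=0.05, separation=0.05, radius=0.3):
    """Advance the flock by one frame using the classic boid rules."""
    for b in boids:
        neighbours = [o for o in boids if o is not b
                      and np.linalg.norm(o.pos - b.pos) < radius]
        if neighbours:
            centre = np.mean([o.pos for o in neighbours], axis=0)
            mean_vel = np.mean([o.vel for o in neighbours], axis=0)
            b.vel += cohesion * (centre - b.pos)         # steer toward local centre
            b.vel += alignment * (mean_vel - b.vel)      # match neighbours' heading
            for o in neighbours:                         # avoid crowding
                d = b.pos - o.pos
                b.vel += separation * d / (np.linalg.norm(d) ** 2 + 1e-6)
        b.vel = np.clip(b.vel, -0.1, 0.1)
    for b in boids:
        b.pos = np.clip(b.pos + b.vel, -1.0, 1.0)

def to_sound_params(boid):
    """Map a position in parameter space to hypothetical synthesis controls."""
    azimuth = boid.pos[0] * 180.0     # -180..180 degrees in the simulated sound space
    rate = 2.0 ** boid.pos[1]         # +/- one octave of playback-rate variation
    return azimuth, rate

rng = np.random.default_rng(0)
flock = [Boid(rng) for _ in range(16)]
for frame in range(100):
    step(flock)
params = [to_sound_params(b) for b in flock]
```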

The Metapiano: Composing and Improvising Through Sculpture

by Richard Hoadley

This paper concerns the design, implementation and demonstration of interactive sculptural interfaces which are intended to be used by the public as well as by specialist performers such as musicians or dancers.  Options for automatic control are also available.  The set of interfaces under consideration is referred to as the ‘metapiano’, itself a ‘meta-sculpture’ comprising a collection of diverse and independent sculptural items, each of which is being developed to control a sometimes self-referential array of musically expressive algorithms.  In the case of the metapiano the primary sound source, perhaps unsurprisingly, is that of a (synthesised) piano.

The paper describes the manner in which a viewer/performer interacts with hardware and software systems, examines the nature of the music created, and details how the two are related.  Of particular interest is the way in which the resulting music relates to both new and more traditional forms of composition and performance.

VIVO (Video Interactive VST Orchestra) and the aesthetics of interactivity in the age of care-less capitalism

by Fabio Paolizzo and Ruth Cain

Is it possible to use interactive arts, and specifically interactive music, as a tool to enhance consciousness? How do interactive arts and consciousness intersect with Derrida's concept of the given?

Under capitalism, care is economically undervalued because the subject is framed as an independent economic entity, yet at the same time the requirement for care increases, precisely because of this focus on individual economic attainment. Because of this economic undervaluation, collectivity, sacrificiality, emotional giving and other components of the "given" become simultaneously intensely problematic and sought after.

The research spans the fields of musicology, socio-legal studies, software development and composition. The working hypothesis of the study is that users who individually and collectively create and self-reflect in an emergent grammar, as defined by Hopper, may approach a new type of relational consciousness. Relational consciousness is both denied and demanded by late capitalism. The development of an apparently relational consciousness online problematizes issues of (dis)embodiment and global distance, posing the question of how new virtual forms of relationship might mitigate the harshness, isolation and anxiety of care-less capitalism.

The present study recognizes that specific structures of interrelation may allow the formation of such a new consciousness, which might operate independently, escaping the overwhelming control of capital. In music, these structures may support the relation between human agents' consciousness of self and the musical instrument and musical facts that these users are shaping.

VIVO (Video Interactive VST Orchestra) is an interactive software musical instrument which implements these structures, and openinmedia.net is the social network that hosts it. Users can collaborate, create, publish and share both open and copyrighted content and resources online.

Geodesic Sound Helmets

by Cara-Ann Simpson, Eva Cheng, Ben Landau and James Laird

Geodesic Sound Helmets (referred to as GSHelmets for brevity) will be a series of immersive and interactive personal sound environments. Currently a work-in-progress, GSHelmets are large geodesic dome-shaped helmet objects containing surround sound flat panel/flexible loudspeakers (FPS), motion detectors and breathing sensors. The use of FPS technology allows the helmets to be slimline and lightweight, where the interior of the helmet is seamlessly integrated into a singular component. Such a design is essential to the philosophy of GSHelmets as the comfort, both physiological and psychological, of the participant is paramount.

When a participant walks into the space they are invited to put their head inside different ‘helmets’ to hear three-dimensional manipulated soundscapes from locations including (but not limited to) Australia, Singapore, Hong Kong and Spain. Motion sensors will be built into the helmet to allow the unit to be switched on automatically when a participant interacts with the object by standing with their head inside it. Similarly, when the participant leaves, the object will turn off the sound. A breathing sensor (i.e., air-flow and humidity) located approximately 10-15cm from the participant’s mouth will react in real-time to manipulate the soundscapes according to the individual’s breathing pattern: the sound is changed as the person breathes faster or slower, more deeply or more shallowly. Surround sound spatialisation in the helmets will initially be through amplitude panning and phase decorrelation, with a view to implementing state-of-the-art near field 3D sound spatialisation algorithms.
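
A minimal sketch of the kind of mapping described above, assuming a single scalar breathing-rate value and simple constant-power amplitude panning across a ring of loudspeakers (the speaker count, parameter names and mappings are illustrative assumptions, not the installation's actual implementation):

```python
import math

NUM_SPEAKERS = 8          # assumed ring of loudspeakers inside the helmet

def pan_gains(azimuth_deg):
    """Constant-power gains for a source panned between adjacent speakers in a ring."""
    pos = (azimuth_deg % 360.0) / (360.0 / NUM_SPEAKERS)
    lower = int(pos) % NUM_SPEAKERS
    upper = (lower + 1) % NUM_SPEAKERS
    frac = pos - int(pos)
    gains = [0.0] * NUM_SPEAKERS
    gains[lower] = math.cos(frac * math.pi / 2.0)
    gains[upper] = math.sin(frac * math.pi / 2.0)
    return gains

def breathing_to_params(breaths_per_minute, depth):
    """Map breathing rate and depth to playback rate and scene rotation (illustrative)."""
    rate = 0.5 + breaths_per_minute / 24.0     # faster breathing -> faster playback
    rotation = depth * 30.0                    # deeper breathing -> faster rotation (deg/s)
    return rate, rotation

# Example: a calm breath slowly rotates the soundscape around the listener.
rate, rotation = breathing_to_params(breaths_per_minute=12, depth=0.4)
azimuth = 0.0
for t in range(5):                             # five one-second control frames
    azimuth += rotation
    gains = pan_gains(azimuth)
```

Constant-power crossfading between adjacent speakers keeps perceived loudness roughly constant as the source rotates; the phase-decorrelation stage mentioned above is not shown.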

GSHelmets explores differing roles of artist and audience, interactive installation within public interior spaces, new technologies within art, and the importance of physicality. As an exhibition, interactive installation and research collaboration between artist, engineer and audience, GSHelmets questions the validity of the author/artist as sole creator and suggests that the artist lays a foundation for the public to mould and manipulate into his or her own artwork or composition. Thus, the artist’s role within GSHelmets is that of facilitator, while the public become composers and listeners.

Capturing gestures for expressive sound control

by Todor Todoroff and Cécile Picard-Limpens

We present a novel approach for live performances, giving musicians or dancers extended control over the sound rendering of their performance. In contrast to the usual sound rendering of a performance, where sounds are externally triggered by specific events in the scene, or to typical augmented instruments that track the gestures used to play the instrument in order to expand its possibilities, the performer can configure the sound effects he or she produces in a way that involves the whole body.

We developed a Max/MSP toolbox to receive, decode and analyze the signals from a set of light wireless sensors that can be worn by performers. Each sensor node contains a digital 3-axis accelerometer, magnetometer and gyroscope, and up to 6 analog channels to connect additional external sensors (pressure, flexion, light, etc.). The received data is decoded and scaled, and reliable posture information is extracted from the fusion of the data from the sensors mounted on each node. A visualization system gives the posture/attitude of each node, as well as the smoothed and maximum values of the individual sensing axes. Contrary to most commercial systems, our Max/MSP toolbox makes it easy for users to define the many available parameters, allowing them to tailor the system and to optimize the bandwidth. Finally, we provide a real-time implementation of a gesture recognition tool based on Dynamic Time Warping (DTW), with an original "multi-grid" DTW algorithm that does not require prior segmentation. We offer users different mapping tools for interactive projects, integrating 1-D, 2-D and 3-D interpolation tools.
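
The authors' segmentation-free "multi-grid" DTW is their own contribution and is not reproduced here; the following Python sketch only shows the standard DTW distance that such gesture recognisers build on, comparing an incoming multichannel sensor sequence against stored gesture templates (the template names and data are placeholders).

```python
import numpy as np

def dtw_distance(template, query):
    """Standard dynamic time warping distance between two multivariate sequences.

    template, query: arrays of shape (time, channels), e.g. 3-axis acceleration frames.
    """
    n, m = len(template), len(query)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(template[i - 1] - query[j - 1])    # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],                  # insertion
                                 cost[i, j - 1],                  # deletion
                                 cost[i - 1, j - 1])              # match
    return cost[n, m] / (n + m)                                   # length-normalised

# Example: compare an incoming gesture against stored templates and pick the closest.
rng = np.random.default_rng(1)
templates = {"hit": rng.standard_normal((40, 3)), "sweep": rng.standard_normal((60, 3))}
incoming = rng.standard_normal((50, 3))
best = min(templates, key=lambda name: dtw_distance(templates[name], incoming))
```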

We focused on extracting short-term features that detect hits and give information about the intensity and direction of the hits in order to drive percussive synthesis models. In contrast to available systems, we propose a sound synthesis that takes into account the changes of direction and orientation immediately preceding the detected hits, so as to produce sounds that depend on the preparation gestures. Because of real-time performance constraints, we direct our sound synthesis towards a granular approach which manipulates atomic sound grains for the composition of sound events. Our synthesis procedure specifically targets consistent sound events, sound variety and expressive rendering of the composition.
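
As an illustration of the hit-detection idea described above (the thresholds, window sizes and the direction/intensity mapping are assumptions, not the authors' settings), the following Python sketch detects peaks in the 3-axis acceleration magnitude and records the average direction of motion immediately preceding each hit, which a granular synthesiser could then use to select and scale grains:

```python
import numpy as np

def detect_hits(accel, threshold=2.5, pre_window=10):
    """Detect hits as peaks in acceleration magnitude and return their intensity
    and the dominant direction of motion immediately preceding each hit.

    accel: array of shape (time, 3), 3-axis acceleration frames.
    threshold, pre_window: illustrative values only.
    """
    magnitude = np.linalg.norm(accel, axis=1)
    hits = []
    for t in range(1, len(magnitude) - 1):
        is_peak = magnitude[t] > magnitude[t - 1] and magnitude[t] >= magnitude[t + 1]
        if is_peak and magnitude[t] > threshold:
            start = max(0, t - pre_window)
            preparation = accel[start:t].mean(axis=0)        # average preceding motion
            direction = preparation / (np.linalg.norm(preparation) + 1e-9)
            hits.append({"frame": t, "intensity": float(magnitude[t]),
                         "direction": direction})
    return hits

# Example: each detected hit could select a grain family by direction
# and scale grain amplitude by intensity.
rng = np.random.default_rng(2)
frames = rng.standard_normal((200, 3)) * 0.5
frames[120] = [4.0, 0.5, 0.0]                                 # synthetic hit
for hit in detect_hits(frames):
    grain_gain = min(1.0, hit["intensity"] / 5.0)
```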