Dr. Garth Paine

Dr Garth Paine is Senior Lecturer in Music Technology, a researcher at the MARCS Auditory Research labs and director of the Virtual, Interactive, Performance Research environment (VIPRe, http://vipre.uws.edu.au/). He is particularly fascinated by sound as an exhibitable object. This passion has led to several interactive responsive environments in which the inhabitant generates the sonic landscape through their presence and behaviour, as well as to several music scores for dance generated through video tracking of the choreography. His work has been shown throughout Australia, Europe, Japan, the USA, Hong Kong and New Zealand.

Dr Paine is internationally regarded as an innovator in the field of interactivity in experimental music and media arts. He is an active contributor to the international NIME conference and has been guest editor of the journal Organised Sound on several occasions. Dr Paine's ensemble SynC (http://www.syncsonics.com) acts as a platform for research into new interfaces for electronic music performance. SynC has performed in Paris (2006) and New York (2007), at Liquid Architecture (2007) and the Aurora festival (2006, 2008), and in the Australian New Music Network concert series (2008).

He is a member of the advisory panel for the Electronic Music Foundation, New York, and an advisor to the UNESCO-funded Symposium on the Future. Dr Paine is a Chief Investigator on several current Australian Research Council grants.

His work can be found at http://www.activatedspace.com.

What is VIPRE?

The Virtual and Interactive Performance Research Environment (VIPRE) lab (http://vipre.uws.edu.au/?page_id=5) represents world-class multi-disciplinary research infrastructure that links the School of Communication Arts and the MARCS Auditory Labs (psychology) at the University of Western Sydney. VIPRE makes it possible to take up national and international invitations for collaborative research in virtual and interactive performance spaces, including video tracking for realtime sound/music and visual environment generation, telepresence and communication technologies as distributed performance environments, dynamic scenes for performance and broadcast applications, and spatial audio research. Inter-institutional and multi-disciplinary research (such as the ARC/NH&MRC Thinking Head project, and ARC-funded research on musical interfaces and music perception) encourages and supports current research into how the human body and new media arts technologies have become integral to contemporary culture.

The VIPRE lab has facilitated the development of integrated systems for motion capture (MoCap), optical gesture tracking and realtime wireless bio-sensing, used in 2008 for interactive dance performances. The lab also includes a state-of-the-art eight-channel recording studio and a 20-channel 3D spatial loudspeaker array in the performance studio for detailed audio spatialisation research. Music interfaces are the third area of specialisation: VIPRE holds a collection of innovative musical interfaces, in many cases the only ones of their kind in Australia, and hosts research into design principles for new interfaces for musical expression, as well as innovative approaches to new interfaces for music therapy and other health-related applications. VIPRE also hosts high-end animation and video tools, with two large high-resolution projection screens in the performance studio forming a virtual 3D visual space. Unique to VIPRE is the detailed integration of all of these systems, with audio/video/data networking throughout the building allowing all facilities to be combined for a single project in the performance studio, or used for node-based computing for high-demand rendering and realtime performance projects.

Collaborative research partners include: MARCS laboratories; McGill and Concordia Universities (Canada); Leeds, De Montfort and Sheffield Hallam Universities (UK); Rensselaer Polytechnic Institute and Arizona State University (USA); the ARC research network in Human Communication Science (Dale, Burnham, Dean, Stevens, Paine); the ARC live events research network (Atherton, Goodall, McGillivray, Paine); QUT – Australasian CRC for Interaction Design; the Australian Performance Laboratory at Flinders University; Macquarie University; UNSW; Sydney University; and RMIT. Industry partnerships have existed between MARCS and the Australian Choreographic Centre, the VCA, AusDance, the Australia Council for the Arts, and Thumtronics (the ThuMP project: Paine, Stevenson, Stevens).

Would you tell us about TIEM?

TIEM (Taxonomy for Interfaces for Electronic Music) is a current Australian Research Council grant developing a theoretical base for new interfaces for electronic music performance (http://vipre.uws.edu.au/tiem).

TIEM seeks to develop a unified theory of practice for the application of new interfaces to real-time electronic music performance. It privileges the analysis of artistic content over the implementation of specific technological solutions. The area of new interfaces for musical expression is very heterogeneous, showing a predilection for idiosyncratic approaches and consequently lacking a theoretical base equivalent to the established norms and contextualising premises associated with acoustic instrument practice. The perspective proposed above moves from a predominantly technological imperative to qualitative responses from the composer, performer and audience as the basis for classification and taxonomy development. This enables the project to operate from a position of analysis of praxis, that is, the interchange between artistic practice and research.

AIMS

-Develop models of performance for electronic music based on the broad movement from pitch to timbre as the principal musical vehicle.

-Develop organising theories for timbre control in live, realtime electronic music performance, both monophonic and polyphonic.

-Analyse and review existing new interfaces for musical expression, seeking to develop a taxonomy of approaches and outcomes on the basis of a classification of observation, artistic intent and empirical measurement.

HOW

-Extend the model of control for acoustic instruments developed in the ThuMP project through broader interviewing of virtuoso acoustic instrumental performers and teachers.

-Develop general rules of performance from the above research, and develop an artificially intelligent computer system that will guide the performer in terms of the options available and the variability of access to timbral parameters during performance.

-Hold symposia with all partners to brainstorm approaches and refine the software development models and brief.

OUTCOMES

-Documented performance practice for alternative new interfaces for musical expression.

-A proposed taxonomy for digital musical instruments (DMIs).

-Possible future research with international partners on the development of a new electronic music performance interface that makes optimum use of the outcomes of the research and the developed models.

How do you think about the difference between music and sound art?

Music is about sound. Sound Art is about sound. So for me the only differences lie in the manner of presentation and the approach to form and structure, given that in an installation work visitors come in and out of the work at will.

Much music teaching focuses on the abstract formal strictures of music – harmony and counterpoint, the range of instruments and their function in orchestration, and so on. I would protest that all of this is somewhat irrelevant if you cannot feel, see and engage with the qualities of sound itself.

Is the sound heavy, dense, thin, light, sparkly, sharp, intense, dynamic? I think of composing as sculpting: I think of sound as a viscous, fluid medium; sometimes I think of it as a lump of clay on a pottery wheel that we can draw up and make into forms of our choosing to represent something of our experience in the world. You can use one note or sound to call into being the entire universe, to create a vast emptiness, as though in front of you were contained all the planets of the solar system in peaceful solace. Or you can use a single sound to send a chill up the spine, to make the space intimate, clammy, scary and threatening. What this means is that your music has the power to condition the space you and your audience occupy – to change the scale and the emotional energy of that space.

Good film music is a masterful example of this approach to composition, and you will see that it often engages electroacoustic techniques. The sound effects and the music work together to create the emotional world of the film. The same is true in theatre and in music for contemporary dance. All of the above illustrates some of the ways in which we engage with music as sound – as pure abstract communication. Electronic processes allow the expansion of an acoustic instrument or found sound to express a multitude of emotional states. Edgard Varèse often called for new electronic instruments in order to realize his dream of a music «set free from the crippling forces of tonality» – the only purely electronic work he made was Poème électronique (1957–58) for the Philips Pavilion at the 1958 Brussels World's Fair. Another of his works, Déserts (for orchestra and electronic tape, completed in 1954), was the first of the two electronic works he wrote after not composing for nearly twenty years out of frustration with the strictures of tonal music and acoustic instruments. Earlier, John Cage had composed Imaginary Landscape No. 1 (1939) for variable-speed turntables, and around the time of Déserts Stockhausen completed Studie I (1953). These composers were actively engaged in seeking sounds beyond the acoustic instrument and the formal notions of harmony and counterpoint. An explosion of electronic and electroacoustic works followed from the 1960s onwards – I feel these composers thought about sound as raw material for sonic composition, and understood how important timbre is to musical composition.

Can you describe your experience as an installation artist?

In my teenage years, before undertaking studies at the conservatory of music, I studied architecture for a few years. Music, however, continued throughout this period, like a siren on a rock, to draw me away from architecture towards a full-time commitment to music alone. I have maintained a strong interest in the perception of space, the construction of experience, and the way in which architectures can be created through forms other than the built structure. When I finished my conservatory training, I worked as composer in residence for the state theatre company of Tasmania, and also for the local puppet theatre and dance company. These experiences all linked back to my interest in architecture. They were about constructing affect within a defined space, but transforming that physical space into a number of distinct emotional, environmental and contextual spaces. More often than not the environment communicates the various layers of emotional transformation with which the work, be it a play, a dance or a puppetry piece, seeks to engage the audience.

The first time I really made an installation work was in 1992, when working in London with a dance theatre company called Second Stride. The director of that company, Ian Spink, was a great visionary and had an extraordinary ability to bring very abstract experiences to the stage. I was involved in a one-week development workshop in which I was given the freedom to build a small interface from the brains of a basic synthesizer and connect it to a series of floor-pad triggers that drove samples and basic algorithms, which became an inherent aspect of the performance work. We created two experiments using this system: one with a dancer alone, and one with an actor playing a newsreader who delivered various hypothetical stories about the extension of time and the collapse of the weather system.

It was out of these experiences that I started thinking about the creation of public experiential environments, which led to a series of works starting with «Moments of a Quiet Mind», then «Ghost in the Machine», followed by the works «Map 1» and «Map 2», the latter commissioned by the Musical Instrument Museum in Berlin for its millennium exhibition.

A number of other installation works have followed since that time; however, I have not made an installation work for a few years now, as I am really seeking to build an infrastructure that will give me considerably more information about the behaviour of individuals within an immersive environment, in order to create more detailed and nuanced audiovisual responses to their engagement in that environment.

Some of this work has involved motion capture, and the development of an open infrastructure using Open Sound Control in order to bring video tracking, motion capture and bio-sensing data all into the same space. The Thinking Head project (http://thinkinghead.edu.au//), working with the artist Stelarc and a large team of computer programmers and psychologists, is developing a number of these techniques, including emotion tagging and tracking using a multimodal approach that analyses voice, position, gesture and language. I hope to be able to apply these technologies to new interactive immersive environment installations within the next few years.
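
By way of illustration, here is a minimal sketch of how such disparate sensor streams might share a single Open Sound Control namespace, using the third-party python-osc package. The addresses, port and handler are assumptions made for this example, not the actual VIPRE or Thinking Head infrastructure:

```python
# Minimal sketch: video tracking, motion capture and bio-sensing data
# arriving in one OSC namespace. Addresses and port are hypothetical.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_sensor(address, *values):
    # e.g. "/mocap/position" (0.12, 0.98, 0.44) or "/bio/heart_rate" (72,)
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map("/video/blob", on_sensor)      # optical tracking centroid
dispatcher.map("/mocap/position", on_sensor)  # motion capture marker
dispatcher.map("/bio/heart_rate", on_sensor)  # wireless bio-sensor

server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()  # blocks; stop with Ctrl-C
```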

What is «Meterosonics»?

Meterosonics is an online musical instrument that generates music from realtime weather data, utilising a custom-built software infrastructure and custom-built weather station hardware. The project is available 24/7 via the internet: users select the sound synthesis instrument(s) they wish to use, and then select the weather data (wind speed, temperature, wind direction, solar radiation, etc.) to 'perform' the sound synthesis algorithms they have selected. A collection of sound synthesis instruments is offered, and more experienced users can dynamically create an 'orchestra' of synthesis instruments and access an advanced patching matrix, allowing them to modulate one synthesis parameter with another to generate more complex and evolving musical scores from the wind speed, wind direction, solar radiation and temperature reported by the existing weather stations. The system is designed so that any weather station with an IP address can be added.
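
As a rough illustration of the patching-matrix idea, the sketch below maps normalised weather readings onto synthesis parameters via weighted sums. It is purely hypothetical – the actual Meterosonics system is custom-built, JSyn-based software – so all names, ranges and weights are invented:

```python
# Hypothetical sketch of a Meterosonics-style weather-to-synthesis mapping.

def normalise(value, lo, hi):
    """Scale a raw weather reading into the 0..1 range, clipped."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

# Raw readings from a (hypothetical) weather station with an IP address.
weather = {"wind_speed": 14.2,      # km/h
           "wind_direction": 210,   # degrees
           "temperature": 23.5,     # °C
           "solar_radiation": 640}  # W/m²

# Normalised control signals.
controls = {
    "wind_speed": normalise(weather["wind_speed"], 0, 60),
    "wind_direction": normalise(weather["wind_direction"], 0, 360),
    "temperature": normalise(weather["temperature"], -5, 45),
    "solar_radiation": normalise(weather["solar_radiation"], 0, 1200),
}

# A patching matrix: each synthesis parameter is a weighted sum of sources,
# so one parameter can be modulated by several weather streams at once.
matrix = {
    "frequency": {"temperature": 0.8, "wind_speed": 0.2},
    "amplitude": {"solar_radiation": 1.0},
    "pan":       {"wind_direction": 1.0},
}

synth_params = {
    param: sum(controls[src] * weight for src, weight in sources.items())
    for param, sources in matrix.items()
}
print(synth_params)
```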

It is also possible, via the plug-in architecture, for anybody with JSyn experience to write additional sound synthesis algorithms to be added to the system.

To this extent the system and the project were designed to be communal, providing a different approach to the perception of specific qualities of an environment, in this case the weather. In addition to the compiled client application available from the website, artists can apply to the author for access to an Open Sound Control server, which makes the weather data available to any end user for realtime algorithmic control of theatre lights, musical synthesis, interactive sculptures and so on.
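
A hedged sketch of what consuming such an Open Sound Control weather feed could look like from the sending side, again using the python-osc package; the host, port and address names are illustrative assumptions rather than the project's published interface:

```python
# Sketch: broadcasting weather readings as OSC messages that any
# OSC-aware client (lighting desk, synth, sculpture) could pick up.
# Host, port and addresses are assumptions for illustration only.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # some OSC receiver

client.send_message("/weather/wind_speed", 14.2)
client.send_message("/weather/wind_direction", 210.0)
client.send_message("/weather/temperature", 23.5)
client.send_message("/weather/solar_radiation", 640.0)
```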

Meterosonics was a development of my REEDS (2000) project, which was designed to respond to weather data – specifically wind speed, wind direction, temperature and solar radiation – to control sound synthesis algorithms. However, REEDS was a large physical installation consisting of 21 floating fabricated pods of river reeds, installed in the Ornamental Lake of the Royal Botanic Gardens Melbourne. Weather data, transmitted to a central computer from small weather stations built into some of the floating pods, was mapped to control instrument algorithms running in SuperCollider, changing the pitch, texture and intensity of the sound. The synthesis results were then transmitted back to floating pods equipped with speakers, broadcasting an electroacoustic soundscape over the lake.

Meterosonics came about as a way of extending the life of the REEDS project, and opening up the experience of real-time data sonification to a broader audience.

As is often the case, it was not possible for me to store the large physical sculptures from the REEDS installation, and they unfortunately had to go to refuse, with the exception of a few, which friends took to decorate their gardens. The ideas expressed in REEDS were, I thought, worth continuing with. I had originally chosen the weather as the data source because I had become aware, in my other installations that use video tracking of the human form, that there was always a dependable continuity: in order for a human to go from one place to another they must move through all the places in between. The weather, by contrast, presented the opportunity of truly chaotic interrelationships between the various sources I chose. It was unlikely that the solar radiation and wind direction patterns would ever occur in exactly the same way, with exactly the same form, twice. I was particularly interested in how this randomness might be absorbed, through the careful selection of synthesis algorithms and mapping approaches, to form what might be perceived as a holistic musical form.

At the moment, due to changes in my internet provider, the Meterosonics web server is down. I am hopeful that it will be fully operational again in the next six months.

And the «Endangered Sound Project»?

Endangered Sounds is a project that focuses on the exploration of Sound Marks (trademarked and patented sounds). The initial stage of this project comprised legal searches that resulted in listings of Sound Marks registered in Australasia and the United States of America. This list was published on the internet (http://www.activatedspace.com/Installation_Works/EndangeredSounds/endageredSounds.html) with a call for volunteers to collect samples of the listed sounds internationally. As the project is ongoing, volunteers are still welcome; anybody can volunteer directly on the website and will then be sent the appropriate paraphernalia.

Each volunteer was sent a specimen tube with label and cap, and asked to collect a sound by placing the specimen tube close to the source (thereby capturing the air through which the sound travelled), securing the cap and then completing the label, documenting the time, place and nature of the sound (Sound Mark registration number, Sound Mark description, time of capture, date of capture, location, etc.). These specimen tubes were collected and displayed in chemistry racks in the exhibition at the Biennale of Electronic Arts Perth in 2004, illustrating the frequency and diversity of the environments into which these 'private', protected sounds have been released.

The exhibition of Endangered Sounds consists of:
1.    A web portal listing all the Sound Marks listed in Australasia and the USA.  
2.    A collection of Sound Marks in specimen tubes with caps and labels gathered internationally by people who volunteered to collect samples of Sound Marks in their environment.  
3.    A number of glass vacuum desiccator vessels, each containing a small loudspeaker and sound reproduction chip suspended in a vacuum, reproducing Sound Marks in the vacuum – notionally breaking the law, but as sound does not travel in a vacuum the gallery visitor hears no sound. What, then, is the jurisdiction of the Sound Mark?
4.    A card index register of Lost and Deceased sounds.

This project questions the legitimacy of privatising and protecting sounds that are released at random in public spaces. If I own a multi-million-dollar penthouse in a city and work night shifts, I have no recourse against the loud Harley Davidson or Australian Football League (AFL) siren that wakes me from my precious sleep – yet both sounds are privately protected, making their recording, reproduction and broadcast illegal.

In early 2001, NASA announced they had captured an echo of the Big Bang! I immediately had an image in my head of a wind-beaten astronaut hanging out of a porthole of a distant spaceship, specimen tube in hand, swinging madly at arm's length to capture a sample of the echo – an invisible artefact identifiable only by sophisticated sensors on board. Once gathered, this sample is corked, labelled and safely archived. Of course this is a phantasmic vision, but it was my initial response, and it stands as the inspiration for the Endangered Sounds project.

You do some of your live performances using a graphics tablet as an interface, as part of a distributed electronic system, right? Could you explain how you work with it and your aesthetic approach, as well as the relationship you want to establish with the audience? I think your interest and work in the Intention/Reception project probably fits in here. Could you talk about your experience with it?

My exploration of alternative interfaces for electronic music performance has been principally driven by an interest in establishing communication with my audience. In an acoustic music environment, the excitation and sonification link within the instrument is clear. Whilst the audience may not know the detail of the technique the musician is applying, they understand at a high level the relationship between breath, force, speed, etc. and the musical outcomes they are experiencing. This is usually not the case when musicians use laptop computers as their principal instrument. I have therefore sought to explore interfaces that provide clear and explicit gestural action. The Wacom tablet is a good example: although it is possible to make extremely small movements, which may not be perceived by the audience, it is just as likely that the performer will use larger sweeping movements, or indeed movements that accentuate the angle and rotation of the pen on the tablet, and utilise these gestures for appropriate musical outcomes such as the building of tension or suspense in the music.
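
As a hypothetical sketch of this kind of mapping (not the author's actual performance patch, which would typically live in an environment such as Max/MSP or Kyma), pen position, pressure and tilt might be translated into musical parameters like this:

```python
# Hypothetical mapping from Wacom pen data to musical parameters.
# Field names and ranges are invented for illustration.
import math

def map_pen_to_music(x, y, pressure, tilt_x, tilt_y):
    """x, y, pressure in 0..1; tilt in -1..1 (pen angle to the tablet)."""
    pitch = 48 + x * 36        # left-right sweep covers 3 octaves (MIDI note)
    loudness = pressure ** 2   # squared for a more natural dynamic curve
    brightness = y             # vertical position: filter cutoff, 0..1
    # Pen angle, barely visible in the hand but audible in the result, suits
    # slow tension-building gestures; normalised to 0..1.
    tension = math.hypot(tilt_x, tilt_y) / math.sqrt(2)
    return {"pitch": pitch, "loudness": loudness,
            "brightness": brightness, "tension": tension}

print(map_pen_to_music(0.5, 0.3, 0.8, 0.2, -0.4))
```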

I have also worked with the Nintendo WiiMote quite a lot, for several reasons. It is extremely cheap to purchase when one considers that it contains a wireless communication device, a three-dimensional accelerometer and a number of buttons. It therefore provides a broad range of continuous control data and, simultaneously, a range of event sources. With the addition of the Nintendo Nunchuk, a second three-dimensional accelerometer and a joystick are added to one's arsenal.
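
A small, assumption-laden sketch of how raw accelerometer samples from such a device could be tamed into continuous control signals; the reading loop and scaling are invented, and a real setup would pull the data from a WiiMote library or an OSC bridge:

```python
# Sketch: smoothing 3-axis accelerometer data into usable control signals.

class SmoothedAxis:
    """Exponential moving average: large sweeping gestures pass through,
    sensor jitter is suppressed."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.value = 0.0

    def update(self, sample):
        self.value += self.alpha * (sample - self.value)
        return self.value

axes = [SmoothedAxis() for _ in range(3)]

def on_accelerometer(raw_xyz):
    # raw_xyz: three samples, assumed already scaled to roughly -1..1
    return [axis.update(v) for axis, v in zip(axes, raw_xyz)]

print(on_accelerometer([0.9, -0.1, 0.3]))
```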

Both of these interfaces provide dynamic gestural control of electronic musical processes, be they improvisatory or highly structured in nature. They provide an avenue by which the audience, often unfamiliar with the aesthetic qualities of the music I am composing and performing, can engage with the nature of the performance – that is, with the fact that the music is being generated live in front of them – so that the associated sense of risk, urgency and validity is shared.

It has been of interest to me for some time to explore the way in which interfaces provide «playability» in the musical context. I have been involved in the NIME conference virtually since its inception, and had noticed over the years that whilst there were presentations of new devices, there was very little work going on that sought to establish a comparative framework for new musical interfaces or instruments. The Taxonomy for Interfaces for Electronic Music (TIEM) project (http://vipre.uws.edu.au/tiem/) has sought to establish a survey of existing interfaces internationally, and to analyse that data in order to provide a starting point for a discourse on comparative analysis that might lead, for instance, to a taxonomy of new interfaces for electronic music performance.

You also perform using bio-sensors. How are you using them? What are you interested in measuring on the human body? And how do you use the results coming from those biological sensors? And finally… how do you connect virtuality and reality to achieve your artistic goals?

Over the last few years, I have been using bio-sensors quite a lot with contemporary dancers. Specifically, I have been working on a piece called «The Darker Edge of Night» with the artist/choreographer Helen Sky – see videos at http://vimeo.com/garthpaine/videos. In this work, we have specifically been trying to address what it means conceptually to have interactive elements at the very core of the performance. We have developed a dramaturgical approach that uses pools of potentials: clearly defined aesthetic material, including considerations of set design, lighting, costumes, text, etc., organised into collections that we refer to as pools. The interrelationships of the elements within a pool can then be manipulated and nuanced by the interactive data, in this case coming from the dancer, and it is possible for the dancer to navigate from pool to pool by generating specific data relationships.
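
To make the idea concrete, here is a minimal, purely illustrative sketch of pool navigation driven by data relationships; the pool contents, sensor names and thresholds are all invented, not taken from the actual production:

```python
# Sketch of «pools of potentials»: collections of pre-designed aesthetic
# material, with live sensor data steering the transition between pools.

POOLS = {
    "stillness": {"sound": "low drones", "light": "dim blue"},
    "turbulence": {"sound": "granular swarms", "light": "strobing white"},
}

def choose_pool(heart_rate, motion_energy):
    """Navigate between pools via a specific data relationship: high arousal
    with little movement reads differently from high arousal while moving."""
    if heart_rate > 120 and motion_energy < 0.2:
        return "stillness"   # held intensity
    if motion_energy > 0.6:
        return "turbulence"
    return "stillness"

pool = POOLS[choose_pool(heart_rate=130, motion_energy=0.1)]
print(pool)
```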

One of the most interesting elements of this work is that it inverts the typical relationship between music and choreography: in this case the choreography generates the music in real time. The dancer can choose to vary the relationships from night to night, or performance to performance, rather than simply dancing to a preconstructed and fixed musical structure. Equally, the dramaturgical elements of the performance remain fluid. The dancer can bring these together in different ways in each performance, depending on how she feels about particular elements of the narrative she is telling on that particular evening, or, for instance, how she feels the audience is responding to the work. Such flexibility is common in the work of the stand-up comedian or the storyteller, but because of complex production requirements it has become a thing of the past for major theatre and dance productions.

I feel that the kind of sonic and visual material we generate by using real-time data from the body is quite different in quality from that which we would generate in a studio. Interactive performance work appears to me to have a less cerebral and more visceral quality, which I think relates to the direct engagement with the body – the kinaesthetic and somatic relationships inherent in the dancer's craft – as distinct from the ability to sit back and abstract material when working in a studio.

What about the future? What do you have on your professional wish list today?

My research currently has two focuses: 1) the development of my composition and performance practice, and 2) the continued exploration of the notions of agency and engagement in interactive performance systems, specifically in relation to dance.

I am in the process of building half a dozen singing bowl robots, each of which will be driven through the analysis of the frequency content of material that I will be playing on other singing bowls. I use the Symbolic Sound Kyma system for all of my real-time synthesis needs, and it is absolutely brilliant. I will use the Kyma system to do real-time frequency analysis of the bowls I am playing, and then use the energy in particular bins within that analysis to drive the robots. The singing bowl robots will be hung above the heads of the audience, so that the sound of the singing bowls is diffused physically through the space in addition to the multichannel loudspeaker arrays being used for processed sound. I also expect to use the singing bowl robots in other works, such as perambulatory soundscapes, as they are all built to run on batteries and contain a long-distance wireless network card.
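
The chain described above could be sketched as follows. Note that the real analysis happens in Kyma, so this NumPy/python-osc stand-in, with its invented partial frequencies, host and OSC addresses, only illustrates the bin-energy-to-robot idea:

```python
# Sketch: FFT bin energy from a played singing bowl drives six robots.
# SAMPLE_RATE, the robot hub address, partials and OSC paths are assumptions.
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

SAMPLE_RATE = 44100
robots = SimpleUDPClient("10.0.0.20", 8000)  # hypothetical wireless robot hub

def drive_robots(block):
    """block: one buffer of mono audio from the bowl being played."""
    windowed = block * np.hanning(len(block))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(block), 1.0 / SAMPLE_RATE)
    # Energy around six (hypothetical) bowl partials, one per robot.
    partials = [220, 440, 660, 880, 1100, 1320]  # Hz
    for i, f in enumerate(partials):
        band = spectrum[(freqs > f - 20) & (freqs < f + 20)]
        energy = float(np.sum(band ** 2))
        robots.send_message(f"/bowl/{i}/energy", energy)

if __name__ == "__main__":
    t = np.arange(4096) / SAMPLE_RATE
    drive_robots(np.sin(2 * np.pi * 440 * t))  # synthetic test tone
```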

Some photos of the early prototype and a short video can be seen here: http://www.activatedspace.com/Events/files/96ec362a56101f9da7757c4c32650a1e-15.php

I am also involved in a collective called «Thinking Through the Body», which seeks to explore how very subtle, somatic body states can be communicated through art.

We have had two exhibitions in the last year, one of which involved a work I developed utilising motion capture to allow a human being, lying on a massage bed, to be the interactive agent in a sonic landscape reflecting both their breathing pattern and the extension of their torso, which lengthens by a few millimetres as the subject relaxes. This work again sought to invert the typical «action» basis for interaction by utilising the natural functioning of the body as the interactive mechanism, thereby asking participants to «do nothing» while they were in the installation except pay attention to their environment. The lengthening of the torso caused the panning of the soundscape to move from the feet towards the head, encouraging a transcendental reflection. The fact that the soundscape was created from the breathing patterns of the subject in the installation was extremely engaging for many people. It took a little while for them to perceive this relationship – something that came with relaxation – but when they did, it became a very powerful mechanism for expanding their awareness of how their presence affected the larger environment.
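
A tiny sketch of the mapping described here, with invented ranges (the actual calibration would come from the motion capture system): a few millimetres of torso extension becomes a pan position running from feet to head:

```python
# Sketch: millimetre-scale torso lengthening mapped to soundscape panning.
# Rest length and maximum extension are illustrative assumptions.

def pan_position(torso_length_mm, rest_length_mm, max_extension_mm=5.0):
    """Return 0.0 (at the feet) .. 1.0 (at the head)."""
    extension = torso_length_mm - rest_length_mm
    return max(0.0, min(1.0, extension / max_extension_mm))

# As the subject relaxes and the torso lengthens by a few millimetres,
# the pan drifts slowly towards the head.
print(pan_position(1712.5, 1710.0))  # -> 0.5, halfway along the body
```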