EMS 09 – Paper – A theoretical and aesthetic approach to the study and practice of mixed electro-acoustic music: a pedagogical proposal.

Authors: Claudio Lluán, Gabriel Data, Luis Tamagnini, Liliana Elechosa
Language: English
Publication date: 25/06/2009
Presented at: EMS 09. Heritage and Future


This paper closes a cycle intended to encourage advanced wind instrument and composition students from the School of Music, Rosario National University, in the practice of mixed media works.  As a result of our work, we developed a pedagogical method, which is at present in its experimental phase.  This method is currently being implemented in Max/MSP.

At the beginning of this research, theoretical foundations were set based on the following fields related to contemporary composition techniques: sound object perception, problems related to duration and rhythmic grouping, the use of extended instrumental techniques, and work on musical notation.  It was in the course of the research (after the first practices) that the need arose to extend the theoretical framework and include other features, such as gesture and representation.

1. Theoretical Fundamentals

1.1. Gesture and Form
We start with the notion that music originates as a GESTURE in the composer’s mind; we will call it the COMPOSITIONAL GESTURE.  The composer’s task is to give this gesture a physical shape.  In Instrumental Music, the gesture is written on a score, while in Fixed Electroacoustic Music it is registered directly on some medium.

In instrumental music, performance entails a necessary transfer of the gestural information contained in the score to other kinds of gesture: a kinetic gesture and a sonic gesture.  The performer’s actions give rise to what we call an INSTRUMENTAL GESTURE.

Finally, the listener perceives this MUSICAL GESTURE and, in turn, makes his/her own deconstruction of it.  In short, the composer’s original MUSICAL GESTURE generates subjective aesthetic resonances along the chain composer/performer/listener.  In a live performance, the listener receives not only the pure sonic gesture (as in an acousmatic situation) but also the kinetic aspect of the performer’s gesture.

A similar situation occurs among the chamber music players, where there is a permanent gestural interaction or feedback that includes other subtle interpretations of kinetic gesture, such as eye contact or breathing.

In electroacoustic music for mixed media, this feedback is not possible.  In this context, communication is restricted to the player’s response to the sonic gesture and to his/her own experience.

Since there are no performers, the acousmatic situation does not allow the association between acoustic image and instrumental gesture.  When Pierre Schaeffer describes the attack and sustain criteria (À la recherche d’une musique concrète [French for ‘In Search of a Concrete Music’], Éditions du Seuil, Paris, 1952), he mentions several modes of instrumental action: percussion, wind attack, rubbing, iteration, etc., bringing us closer to the physical aspect of the gesture, regardless of its concrete or electronic origin.

Figure 1. Gesture and Form

1.2. Organization in Time
But gesture is not independent of time: every gesture implies a MOVEMENT that develops within a time frame.  Thus, the musical gesture triggers a SONIC EVENT, and a group of these actions sets the MUSICAL TIME, that is to say, the musical form at different levels.  We will call every definite and articulated sonic event a TIME UNIT.

Following Francisco Kröpfl’s terminology, we call SIMPLE TIME UNITS those cases that include only isolated durations.

COMPOUND TIME UNITS contain diverse types of rhythmic grouping that fall into three categories: long sequences of simple units, precision groups and grace-note groups.

Figure 2. Categories of Time Units

In simple time units, the perception of the sound object is clearly imposed.  As Schaeffer shows, our perception of a sound object’s duration depends on the content of that object:
“Musical duration has a direct relationship with the density of information”. Schaeffer, Pierre – Traité des objets musicaux, 1966.

For these cases we prefer to use the concept of the TIME MODULE.  As Kröpfl states, the time module of a sound is the “time that a sound requires to show or display all its distinctive features”.  Kröpfl, Francisco – “Algunas reflexiones sobre la composición musical con medios electrónicos”, Lulú, no. 3, Buenos Aires, 1992, pp. 33-34.

In compound time units, an entire rhythmic configuration is contained in the time module.

Lastly, in larger formal units, the time module can be the time assigned to the performer to execute one or more sonic events.

2. General Structure of Our Approach
A series of exercises were set out in two books and organized by increasing difficulty.  The aim was to deal with a set of problems: some addressed to the performer, others to the composer who plays the electronic part (the electronic performer), and others of shared interest.

The first book is intended for the practice of mixed media works with fixed electronics.  In this book, the wind player can practice without the intervention of the electronic performer. However, all exercises allow some degree of randomness, both in the electronic part and in the way the performer reads the score.

In the second book, the participation of an electronic performer is expected. It is intended for the practice of mixed media works with real-time processing.

Each book is divided into several chapters, each of them containing different exercises.

2.1. Book 1 – Exercises for Flute and Fixed Electronics
Every chapter of this book has three exercises, each consisting of seven entries: those in 1st, 3rd, 5th and 7th place are always simple time units of electronic origin, while the instrumentalist inserts three actions called A, B and C between them.

To give each exercise more challenge and musical sense, the electronic sound objects are chosen randomly by the system every time the exercise starts. However, the instrumental performer can repeat the same sequence if he/she chooses to do so.

In this first book, we propose a set of exercises that make the instrumental performer aware of three initial time units: brief (three seconds), medium (five seconds) and long (eight seconds).
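The alternation just described can be sketched in a few lines of Python (an illustration only, with hypothetical names; the actual method is implemented as a Max/MSP patch):

```python
import random

# Hypothetical names for illustration; the real system is a Max/MSP patch.
TIME_MODULES = {"brief": 3, "medium": 5, "long": 8}  # seconds

def build_exercise(electronic_pool, performer_actions=("A", "B", "C")):
    """Build the 7-entry sequence: electronic sound objects in 1st, 3rd,
    5th and 7th place, performer actions A, B and C in between."""
    sequence = []
    for i in range(4):
        # Electronic entries are chosen randomly on every run.
        sequence.append(("electronic", random.choice(electronic_pool)))
        if i < 3:
            sequence.append(("performer", performer_actions[i]))
    return sequence

pool = ["object_%d" % k for k in range(10)]  # ten versions in one file
ex = build_exercise(pool)
```

Re-running `build_exercise` draws a fresh electronic sequence, while the performer actions keep their fixed positions, mirroring the repeat possibility described above.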

By centering control on the distinctive features of the sound object, the instrumental performer will develop the habit of taking up time without counting it, thus exercising the possibilities of free rhythm.

“Dans le temps lisse, on occupe le temps sans le compter; dans le temps strié, on compte le temps pour l’occuper” [French for ‘In smooth time, one occupies time without counting it; in striated time, one counts time in order to occupy it’]. Boulez, Pierre. Penser la musique aujourd’hui. Éditions Gonthier. Genève. 1964. p. 107.

2.1.1. The Interface
For these first chapters, we have designed one single interface made up of a series of modules.

Figure 3. The interface for the First Book


  1. The input module
  2. Electronics sounds for the chapter
  3. Exercise selector
  4. Time line viewer
  5. Score viewer
  6. Controls

At the top left corner of the screen, we can see the IN module.

There, a “bang” allows us to choose the audio interface we want to use and its setup.

We can then turn the audio input on and off.

Once the input is switched on, we must move the slider to obtain the desired input level. Pressing the “0” button sets an input level of 0 dB, meaning no gain or attenuation is applied to the input signal.

The next step is to load the sound objects.

There are three types, each with ten versions integrated in the same file: sound objects with a time module of 3 seconds, others of 5 seconds, and others of 8.

At the top center of the screen we can see the exercise selector.

For exercise 1, we must then select the time module: 3, 5 or 8 seconds.

In this case, both the electronic sounds and the performer’s interventions have the same time module.

For exercises two and three, we must also generate the time module sequence by pressing the bang button.

Each time we make this selection, the score changes according to the exercise.

A timeline also allows us to view the resulting time modules in proportional mode, with the performer’s interventions indicated by a letter in a box.

As we can see, in exercise two the time modules of the electronic part differ from one another, while the performer’s are all the same.

In exercise three, the interventions of the electronics and of the performer have different time modules.

In exercises 2 and 3, the time module sequence is generated randomly.

When we have found an interesting sequence, we are ready to start the exercise.

Pressing the start button makes the application randomly choose the sound objects according to each time module. This choice changes every time the button is pressed.
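The randomized behaviour of the bang and start buttons can be sketched as follows (illustrative Python with assumed names; the real application is a Max/MSP patch):

```python
import random

MODULES = (3, 5, 8)  # brief, medium and long time modules, in seconds

def generate_module_sequence(n_electronic=4):
    """Exercises 2 and 3: pressing the bang button draws a random time
    module for each of the four electronic interventions."""
    return [random.choice(MODULES) for _ in range(n_electronic)]

def choose_sound_objects(sequence, versions=10):
    """Pressing start: for each time module, pick one of the ten stored
    versions; the choice changes on every press."""
    return [(module, random.randrange(versions)) for module in sequence]

seq = generate_module_sequence()
chosen = choose_sound_objects(seq)
```

Calling `generate_module_sequence` again corresponds to pressing the bang button until an interesting sequence appears; `choose_sound_objects` then varies the concrete sounds while keeping that sequence.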

After 3 seconds, a green slider at the beginning of the timeline shows a 5-second count.

After that, the first sound object is played, while a gray slider moves as time passes. This happens for every electronic intervention.

During the performer’s interventions, no time progress is shown.

If we feel satisfied with the sound sequence, we can restart the exercise by pressing the repeat button.

We can hear the result by pressing the listen button.

Finally, we can save the result as an audio file for later evaluation.

2.1.2. The Exercises
In Chapter 1, the performer will concentrate on perceiving sound objects through their time module.  The macro-form self-generates by juxtaposition, so the instrumental performer must achieve continuity of the musical discourse.

As throughout the first book, each score is presented as a matrix of three columns by three rows. Each cell represents the sound actions the instrumental performer will use for one of his/her interventions.

We suggest that the performer start by reading the first column from top to bottom. Later, he/she can read the second and third columns and solve other variations of the exercise. Finally, the performer can make many versions by randomly selecting three of the nine cells.
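These reading strategies can be sketched as follows (illustrative Python; the cell labels are hypothetical placeholders for the notated sound actions):

```python
import random

# A score as a matrix of three columns by three rows; each cell stands
# for one sound action available to the performer.
score = [["a1", "a2", "a3"],
         ["b1", "b2", "b3"],
         ["c1", "c2", "c3"]]

def read_column(score, col):
    """First strategy: read one column from top to bottom."""
    return [row[col] for row in score]

def random_version(score):
    """Final strategy: select three of the nine cells at random."""
    cells = [cell for row in score for cell in row]
    return random.sample(cells, 3)

first = read_column(score, 0)   # ["a1", "b1", "c1"]
version = random_version(score)
```

`random.sample` picks three distinct cells, so each random version uses different sound actions, multiplying the variations available from a single score.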

We suggest that the wind player first listen to the complete sequence of electronic sounds to become familiar with them and their sonorous qualities. Next, repeating the generated sequence, he/she will be able to make his/her own version. Finally, he/she will be able to listen to the complete version and assess it.

In the subsequent chapters, although the structure of exercises is basically the same as in Chapter 1, new problems are formulated:

Discontinuities. In Chapter 2, the times marked for instrumental interventions establish a time period within which the instrumental performer has to perform his/her sonic event; in some cases the notation used allows the occurrence of silences.

Overlapping and timbre control. In some cases in Chapter 3, electronic sounds can overlap with instrumental sonic events, resulting in a basic polyphonic texture. In addition, following the instructions, the instrumental performer has to choose his/her own way through the score, looking for a uniform or varying timbre.

Compound time units. The last chapter includes sequences of simple time units and grace-note groups.

Because each issue can be solved in many ways, the practice allows the player to cover many cases with few repetitions.

2.2 Book 2 – Exercises for Flute and Electronics with Real Time Processing
In contrast to Book 1, where the electronic material was mixed with the instrumental interventions, resulting in simple textures, here the overlapping of acoustic and electronic sources is an important factor, demanding more careful listening to work through the exercises.

This book demands the participation of an electronic performer. Therefore, the exercises in this book will have three modalities of performance:

i. Only for the instrumental performer.
ii. Only for the electronic performer.
iii. For both musicians.

2.2.1. The Interface
The interface designed for the exercises is different from that of Book 1.  It has been developed to include all the controls needed to carry out processes in real time.

Figure 4. The interface for the Second Book

A specially designed module constantly reads the input signal and derives a series of parameters from it.

To produce electronic materials and process the instrumental signal, we chose the following modules: a reverb unit, a delay line, a ring modulator that acts on the signal captured by the microphone, and three simple frequency modulation units.
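As a rough signal-level illustration of two of these resources (Python with an assumed 44.1 kHz sample rate; the actual processing runs in Max/MSP), ring modulation and simple frequency modulation reduce to:

```python
import math

SR = 44100  # sample rate in Hz (an assumption for this sketch)

def ring_modulate(signal, carrier_freq):
    """Ring modulation: multiply the input by a sinusoidal carrier,
    producing sum and difference sidebands."""
    return [s * math.sin(2 * math.pi * carrier_freq * n / SR)
            for n, s in enumerate(signal)]

def simple_fm(carrier_freq, mod_freq, index, n_samples):
    """Simple frequency modulation: one sinusoid modulates the phase
    of a sinusoidal carrier."""
    return [math.sin(2 * math.pi * carrier_freq * n / SR
                     + index * math.sin(2 * math.pi * mod_freq * n / SR))
            for n in range(n_samples)]

# A 440 Hz test tone stands in for the microphone signal.
mic = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR // 10)]
ringed = ring_modulate(mic, 300)          # sidebands at 140 Hz and 740 Hz
fm = simple_fm(440, 110, 2.0, SR // 10)
```

Both techniques are classic studio resources, which is precisely why they suit the historical orientation of the exercises.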

These processors and sound synthesis methods were chosen because they are resources of great historical value, relevant to the educational aim of the present work.

2.2.2. The exercises
The result aimed at in the exercises of Chapter 5 relates to the idea of smooth time crowded with simple sounds, or of closed strata with short intervals between the flute and the processed sounds, where the qualities of the sound material are prioritized through the appearance of beatings, bumpy textures and soft entries and cuts.  The scores of these exercises use different kinds of so-called analogical or spatial notation, appropriate for free-rhythm fields.  The instrumentalist and the electronics performer are encouraged to address these exercises with the idea of playing with the sound materials, without being compelled to measure exact time.

In Chapter 6, the use of randomization is emphasized, with different graphic solutions aimed at eliciting from the performer the decisions most appropriate to the sound context.  In this sense, greater attention is demanded from the musicians to adjust the different interactive processes.  The remaining possibilities offered by the set of processing modules are exploited here.

The last chapter outlines situations that require greater skill in both interpretation and the performance of electronic processing.  Some exercises include traditional rhythmic notation, and the use of extended techniques is emphasized anew.  Musicians are asked for more consistency in the sound discourse and more accuracy in tuning and rhythm.

3. Conclusions
This method is currently open to review and to the inclusion of issues that may arise from practice.

From the experience gathered in these first testing situations, we noticed a genuine interest among the students carrying out these practices, made evident by their increasing commitment to fulfilling the proposed tasks.

We found a clear need to go deeper into related subjects, such as the issues and variants of music representation, and into other fields such as gesture, a key concern in music with real-time processes.

BOULEZ, Pierre, Penser la musique aujourd’hui, first edition, Genève, Éditions Gonthier, 1964, 173 pages.
BOULEZ, Pierre, La escritura del gesto, first edition, Barcelona, Ed. Gedisa., 2003, 174 pages.
DICK, Robert, Tone development through extended techniques, revised edition, New York, Multiple Breath Music Company, 1986, 60 pages.
LEVINE, Carin, Mitropoulos-Bott, Christina, La flauta-posibilidades técnicas, first edition, Huelva, Idea Books, 2005, 159 pages.
NUÑEZ, Adolfo, Informática y electrónica musical, first edition, Madrid, Ed. Paraninfo S.A., 1992, 272 pages.
KRÖPFL, Francisco, “Algunas reflexiones sobre la composición musical con medios electrónicos”, pages 33-34, in “Lulú”, no. 3, 1992.
SAD, Jorge, “Apuntes para una semiología del gesto y la interacción musical”, pages 63-71, in Cuadernos del Centro de estudios en Diseño y Comunicación (Ensayos), no. 20, 2006.
SCHAEFFER, Pierre, Tratado de los objetos musicales, first edition, Madrid, Alianza Editorial, 1988, 337 pages.
SCHAEFFER, Pierre, ¿Qué es la música concreta?, first edition, Buenos Aires, Editorial Nueva Visión, 1959, 109 pages.
STOCKHAUSEN, Karlheinz, “…how time passes…”, pages 10-40, in Die Reihe, no. 3, 1959.
STONE, Kurt, Music Notation in the Twentieth Century, New York, W. W. Norton & Company, 1980.
PUCKETTE, Miller, Max reference manual, Cycling '74, Paris, 1988.
PUCKETTE, Miller, et al., “Real-time audio analysis tools for Pd and MSP”, reprinted from Proceedings of the ICMC, 1998.

Claudio Lluán, Gabriel Data, Luis Tamagnini and Liliana Elechosa

Escuela de Música – Universidad Nacional de Rosario
clluan@unr.edu.ar – gdata@unr.edu.ar