Composer and author Joel Chadabe is a pioneer in the development of interactive music systems. His music has been performed in New York, Paris, Tokyo, Buenos Aires, Venice, Rome, Amsterdam, Berlin, Linz, Stockholm, San Francisco, London, and other cities worldwide, and recorded on EMF Media, Deep Listening, and other labels. He is the author of 'Electric Sound', a history of electronic music. His articles have been published in leading journals. As president of Intelligent Music, he oversaw the first publications of interactive music software. He has received grants from the NEA, the New York State Council on the Arts, the Ford Foundation, the Rockefeller Foundation, the Fulbright Commission, and other organizations, and he is the recipient of the SEAMUS 2007 Lifetime Achievement Award.
Mr. Chadabe is currently Professor Emeritus at the State University of New York, on the faculty at Manhattan School of Music, visiting faculty at NYU, and president of Electronic Music Foundation.
The following interview was developed through an email exchange between Joel Chadabe and Ricardo Dal Farra in August and September 2009.
In a few words, it's the empowerment of individual creativity. Increasingly powerful and inexpensive software and hardware is continually becoming available, most of it with a short learning curve. Our role as professionals, then, is changing. We are more and more playing to a public of practitioners, and our role now is to inspire sophisticated musical thinking by creating and maintaining an international community and information exchange. That is what Electronic Music Foundation is doing.
When I decided to write the book, in 1993, electronic music was ubiquitous but no one knew where it came from. I thought that we had better tell the story before all of the pioneers died. So I decided to write a book on the history of electronic music, and I made a point of using primary sources, which means talking to the people who made the history and describing their motivations and methods. Almost all of them, in one way or another, were friends or colleagues; and I had been a very active pioneer myself, so it was easy to call them up and go and visit them. It was interesting to ask them questions because I knew a lot about what they had done. And I could tell part of my own story, which also gave me a great deal of satisfaction.
It was a revolution in musical thought, opening up music to all sounds and introducing new ways of thinking about music and performance. It's impossible to make generalizations because there were so many different personalities on the scene. But taken together, the new ideas have pointed music in a completely new direction.
The TouchSurface was developed by David Asher, engineer and partner in Intelligent Music, an R&D company I co-founded with Tom Bezanson, a lawyer friend, in the mid-80s. We had also developed M and Jam Factory, and in fact quite a bit of other software, and we did a lot of work in bringing Max to the market. But in about 1990, we decided to focus on the TouchSurface because we needed to make money and we believed that touchpads would be the pointing devices of choice for laptops in the future. And of course, we were right. The TouchSurface was unusual in that it was a touch-sensitive pad that reacted not only to position but also to pressure. One of our models was about 2 x 3 inches. You could touch it with two fingers, and by changing position and pressure between the fingers you could send the cursor zipping around the screen. I thought it was great. We sold it to Stratos Product Development Corporation, but ultimately they didn't do the necessary final design work and it never came to market.
I began teaching at the State University of New York at Albany in 1965. At that time, the University was growing and research and innovation were strongly supported. I was asked to set up an electronic music studio, so following Alvin Lucier's advice, I drove from Albany to Trumansburg to meet Robert Moog and buy a synthesizer for the university's studio. We became friends very quickly, and I enlarged the synthesizer the next year. And then, in 1967, I got the idea for an analog programmable studio system. I wrote an article about it in Perspectives of New Music and secured a grant to order it from Bob. It was delivered in December 1969. It had several voltage-controlled oscillators, filters, envelope generators, and so on, but it also had eight sequencers that could run together or independently or in any relationship, and any of the sequencers could be timed by a programmable clock. The power of the system was its ability to automate multiple variables of a complex musical process. After working with the system for a few months, in May 1970 I composed 'Ideas of Movement at Bolton Landing', which is the first example of an interactive instrument. It was interactive in the sense that aspects of the composition were controlled by pseudo-random processes with which I interacted in performance.
When I started to work with the first Moog systems in 1966, I realized that this was the musical frontier, with an enormous new territory to explore. I saw the many things that could and should be done. And so I started to do them. It was a question of finding a way through a new world of possibilities. And its benefits to humans gradually became more clear to me. It became clear, for example, that interactive instruments could create environments for creativity that could benefit not only composers at a professional level of work but also amateurs at a beginner's level. Working with an interactive instrument turns composition into an activity of discovery and learning. It was very exciting for me to begin to understand the implications of this. I had worked with John Cage on several occasions, and I felt post-Cageian in directing randomness towards a goal. I was composing a new way of composing.
I was fortunate in being able to purchase the first Synclavier in November 1977. But it was not an off-the-shelf normal system. I had ordered it to be configured for custom-designed controllers because my whole idea in working with it was to develop ideas for new interactive instruments. Solo, which I composed in 1978, was the first. My concept for Solo was 'conducting' an improvising orchestra. For the conducting device, I asked Bob Moog to build two theremin controllers for me. For the improvising 'orchestra', I wrote software that was based on a clarinet melody performed by J. D. Parran in a concert in New York and arranged for the eight voices of the Synclavier. I interacted with the melody and its arrangements by moving my hands relative to the antennae. So in performance, my right hand moving to the right antenna controlled which of the voices was playing, and my left hand moving towards the left antenna controlled the tempo from very fast to very slow.
By about 1986, I began to use the Macintosh and a so-called Yamaha MIDI Rack, a group of eight keyboard-less DX7 modules connected in a single rack, and I grouped many of the pieces I did in one collection called After Some Songs, to be performed by me with the electronics and with Jan Williams playing marimba, vibraphone, and various percussion. Earlier in my life, I had played jazz piano, and so I knew a lot of the standards. Many Mornings Many Moods, composed in 1988, based on Duke Ellington's 'In A Sentimental Mood', was composed for any combination of instruments between full orchestra with solo percussion and electronics to just solo percussion and electronics.
Both Solo and Many Mornings ... were milestones along the road I took as a composer. They reflected different approaches to sound, performance, and audience. One World 1, composed in 2006, however, is more of a second stream for me than a change in style. I still compose for instruments and interactive electronics, but in 2006 I realized that climate change, habitat destruction, and other environmental problems were among the most serious problems the world faces, and I asked myself: Is there anything we can do as musicians that will heighten public awareness of environmental issues? I formed Ear to the Earth as a program of Electronic Music Foundation and we produced our first Ear to the Earth festival in 2006. Those thoughts triggered a new compositional direction for me in two respects: first, I found myself more continually aware of the sounds of the environment, and consequently the environment itself, and second, I found that the use of field recordings of environmental sounds as the material for compositions was musically fascinating. One World 1, which is based on sounds from New York City and New Delhi, was my first composition along these lines.
I compose this way: First, I think through an idea for the piece I'm about to write. I then build an interactive instrument in software, with some processes automated and some performed, to realize the idea. As I build the instrument, I perform with it in my studio, and that ongoing performance is in fact a process of discovery and learning. It is through that process that the idea for my piece becomes more refined and its realization more to the point. When I feel that the nature of the interaction and the sounds produced by the instrument are perfect, the piece is finished.
EMF has filled and continues to fill a need. Our first mission was to disseminate historical information about electronic music. Ten years later, when the history of electronic music was better known, our mission became the exploration of the creative and cultural potential in using electronics in music. By producing events, publishing CDs and eventually books, and by disseminating information and materials via the internet, we are creating a stimulating environment on a global level, and through this approach, we are exploring the potential of electronics and music. I would like to see EMF more complete, more stable financially, and more visible, all of which would enable us to better achieve our mission.
I'm very focused on EMF these days, but the general paths of my work remain pretty much the same. I work with acoustic instruments and interactive electronics. And I work with environmental sounds.