Technology, Musical Perception, and the Composer

Technology and Perception
J. TIMOTHY KOLOSICK

The development of computer-controlled electronic sound sources has inspired researchers and developers to produce a wide array of tools for music research and instruction. Although I have not taken part in any formal, scientific perception studies, I have used this equipment in developing computer software for aural skills training. The two principal journals for perception research are Music Perception and Psychomusicology. For a summary of techniques and applications of computer technology in perception research, see Psychomusicology, Volume 8 (Fall 1989), guest-edited by Peter Webster.

Pioneers in any area of human endeavor are often held in high esteem for their ground-breaking efforts. However, 20/20 hindsight allows us to see the shortcomings of their work in light of new developments. Pioneers in music CAI took specific tasks and instructional units and automated them. They asked, "What if students had unlimited access to assisted drill and practice? Surely they would improve." This was a relevant question at the time. Unfortunately, such software innovations have produced little overall musical improvement, and academics have started to ask new questions.

Why didn't the students achieve as we thought they would? We gave them every opportunity to practice. All they had to do was go to the lab. Many researchers began to question the validity of computers as learning tools. We have investigated and improved every aspect of the technology and still face the same problems: few students use the lab unless required, and few seniors can effectively use the theory and ear training skills they learned as sophomores. Incoming graduate students often complain that an entrance exam was difficult because they haven't done theory in a long time. Further investigation reveals that such students have not spent that time as secretaries or construction workers but have been teaching music privately or in the public schools. Learning musical scores would surely demand some understanding of music theory, but most often they learn ensemble scores "along with the kids" or by some other means.

Why are academic theory skills perceived as unrelated to musical performance? The early attempts at automated learning should have shown us clearly that theory and ear training are often taught in an atmosphere devoid of musical performance. Much CAI software has automated the process of teaching musical elements out of their musical context and helped to convince many undergraduates that theory has nothing to do with music. Indeed, such theory doesn't, and such software is the mirror image of our worst teaching.

The problems in the early days were two-fold:

  1. The quality of the sound was not really very good. It was adequate, mind you, but an eight-bit DAC would not provide the students with a real-world listening experience. Many of these sound problems have been corrected by low-cost, sophisticated MIDI instruments but there is still much work to be done in this area.
  2. The musical stimuli that we used were those used previously in ear training courses. These short bursts of sounds, devoid of musical context, were no more inspiring to developing aural skills on a computer than they were in the classroom with a teacher and a piano. In short, we had accomplished the computer automation of the most boring part of our theory curriculum.

 

Even today, much computerized ear training is no more than automated student guessing of synthetic burps of sound devoid of musical context. Such element and short-example recognition is boring, tedious, and perhaps irrelevant. Studies in ear training are only relevant when placed in a context of musical performance inside and outside the theory classroom.

Perception research is also guilty of limiting the stimulus to such a great extent that the examples are simple sound bites and not musical. Can we fully trust the results of such studies? I dare say we should doubt our ability to develop good aural skills with the stimuli we give our students.

Theory professors often complain, "My students never prepare for their ear training class!" They might also ask themselves, "Do we prepare for our ear training class? Are we learning to hear better every day? Are we actively involved in our students' world of listening? Do we talk to them of interesting chords found in popular Rock tunes as well as the standard repertoire? In short, do we demonstrate that we use our listening skills on a daily basis?"

In music theory and its application to the real musical world, we are the role models. Our students will imitate our actions and very often ignore our words. Teachers with good basic musical skills will find it much easier to get their students to develop good basic musical skills. If the teacher can't do it, why should the student bother? It's obviously not terribly important. The professor hasn't taken the time to get it right either.

Teaching voice leading, ear training, and analysis has been part of theory instruction for at least a century and a half. Although computers have shown us how dry and unmusical our teaching is at times, we mustn't "shoot the messenger"; instead, we should reevaluate our approach and give the computer new messages for our students based on new models of instruction.

For years, two products, the TAP Master and the Pitch Master (Temporal Acuity Products), have led the way in context-based skill development in rhythm performance and singing. Now, microcomputer technology is changing at an astounding pace, and academics are developing new software to take advantage of these changes. The newer technologies allow and demand new types of interaction, and innovative theory software is on the horizon. Several instructors are working on the problem of ear training in musical contexts and practical, applicable analysis skills. My new fundamentals book/software combination called Explorations includes the first computer software that allows students to explore musical structures, to play around with notes, if you will, while receiving pedagogical information from the computer. Scott McCormick's (Berklee College of Music) CD ear training lessons and Richard Ashley's (Northwestern University) work with form and context on audio compact discs also allow this type of exploration, developing the student's melodic and harmonic listening skills while the student builds a strong formal concept of a musical composition. The fourth edition of Bruce Benward's (University of Wisconsin) Ear Training: A Technique for Listening includes a transcription tape for students to transcribe 33 graded musical compositions. Chuck Lord and Kate Covington (University of Kentucky) are using sequencers for musical transcription, a technique which improves every jazzer's ear. John Schafer's (University of Wisconsin) Harmony Tutor, Tom Kirshbaum's (Northern Arizona University) Hey Man, It Goes Like This, and Timothy Koozin's (University of North Dakota) dictation materials based on models and their embellishments are all examples of how theory instructors are trying to place musical training in a performance environment.

My philosophy remains the same: do all you can to let the students learn. Don't provide all the data. Keep them exploring and discovering. Use computer software that allows the student to apply skills as well as to drill and practice them. You, the human link to the subject matter, should provide inspiration, a role model, practical applications of the material studied, and sources of information.

 

Uses of Computer Technology in Piano Performance Research
SANG-HIE LEE

As an ordinary but curious piano teacher, I have become very interested in studying detailed information about piano playing, and I have discovered the capabilities of computer technology. In May of this year, a colleague and I were driving from Ann Arbor to Columbus, Ohio, to give a session on piano performance, theory, and computer analysis using the Bösendorfer 290 SE (computerized Imperial Grand Piano) during the Third Annual Music Theory Midwest Conference. My colleague said, "You know that very painfully beautiful and tense moment in the development section where you have a sequence of diminished seventh chords; can you hold that last chord a split second longer?" I said, "Yes." The computer data of the performance showed that, indeed, the diminished seventh chord was held a few milliseconds longer. I, the performer, was quite unaware of it but was able to make the temporal adjustment almost intuitively. My colleague, the theorist, perceived the small temporal dispersion. Caroline Palmer (1989), in her study of six skilled pianists' musical intentions and temporal asynchrony in chord playing, found that the difference in chord attacks varied by a mere 7 milliseconds.

While technology is progressing at a dizzying pace, almost too quickly for us to digest, I am continually in awe of the magnitude of human performance and the micro dimension of human perception. As Otto Luening suggested earlier in this morning's keynote speech, we face many issues regarding science versus art in the music discipline. My thought is that perhaps science and the arts are not at odds; that they may be complementary to one another; that learning about the arts through science may be quite possible.

In this brief paper, I will address the nature of piano performance from the historical perspective; by way of literature review, I will share with you information on some computer and performance research; I will share with you some of the problems, issues, and questions that I have grappled with over the years; and I would like to hear some of your thoughts and suggestions.

Deciphering detailed information of piano playing has always been perceived as something akin to "mystery," despite the fact that excellent documents abound in the literature.

As early as 1753, C. P. E. Bach, in his well-received and widely circulated publication Versuch über die wahre Art das Clavier zu spielen, recognized that, in pianoforte playing, everything about tone production depends solely on force and duration. Otto Ortmann, in his monumental 1929 publication Physiological Mechanics of Piano Technique, noted that all qualitative differences of tone are the results of differences in hammer speed, which in turn result from key speed. Joseph Gat, in his 1958 book The Technique of Piano Playing, likewise concluded that tone volume or intensity depends solely on the speed of the hammer's strike on the steel strings. The number of overtones, which are the primary determinants of tone quality, increases in proportion to hammer speed.

Tone quality depends, further, on various mechanical and acoustical factors: the coating and quality of the hammer felt; the length of time the hammer is in contact with the string; the quality of the steel in the strings; the shapes of the sound board, bridge, iron frame, and piano case; keybedding noise; the room acoustics; and, of course, the listener. It is easy to see that there would be a large gap between the simple physics of the key stroke and the resultant perceived tone, hence the "mystery." But the fact remains that the ultimate piano tone, no matter how intricate its aesthetic quality, is controlled by the pianist's touch, which varies only in dynamic gradation and in timing of attack and release.

The earliest published attempt, to my knowledge, to obtain quantitative information on the piano-playing mechanism was made by Ortmann (1929). His painstakingly documented results are undisputed to this day, but his ingenious home-made instruments are archaic by today's standards. One apparatus he used to detect key movements consisted of five strips of aluminum, each attached at one end to one of five writing levers; the other ends of these levers touched the surface of a revolving drum that moved horizontally while the levers moved vertically, and key speed was measured by the amount of deflection from the vertical (p. 230).

The wake of the computer era has touched even some of the more "resistant-to-technology" pianists among us. Many college and university class-piano studios are now equipped with MIDI-capable electronic pianos, and some private piano teachers use MIDI-equipped electronic keyboards as supplementary teaching stations.

During the past two decades, music researchers have used computer technology to gather quantitative information on piano playing, specifically on loudness and timing. Christoph Wagner and his colleagues in Hanover, West Germany, published an article in 1973. They had connected a Bechstein grand piano to a mainframe Honeywell DDP-516 computer that detected timing information in increments as small as 1 millisecond. Visual feedback of this information provided a self-monitoring device for programmed instruction. Piano students played short passages, the temporal information of which appeared on a small screen. The mean tempo played and the dispersion of the individual notes were shown on the screen, and the students could continue practicing while adjusting the temporal deviations. L. Henry Shaffer, a British music psychology researcher, published a study in 1981 that also used a Bechstein grand piano, interfaced with a mainframe computer, to examine the nature of motor independence of the hands and the timing of skilled performance. He found that skilled pianists handle independence of the hands in a number of dimensions, such as movement patterns, different fingering logistics, different articulations, dynamics, polyrhythms, and freedom to allow fluctuation of tempo.

In the United States, Caroline Palmer published two articles in 1989. In her first study, she used a MIDI-equipped Yamaha electronic KS 88 and an IBM personal computer to examine asynchrony of attack in chord playing. She measured six pianists' chord playing and compared the asynchrony of chord attack between intentionally unmusical and musical performances. The average temporal asynchrony of the unmusical performances was 11 milliseconds; that of the musical performances was 18 milliseconds. In the second article, she used the acoustic Bösendorfer 290 SE, connected through MIDI to a host computer, to study individual pianists' phrasing by looking at the temporal variations in the striking and releasing of notes. Also in 1989, Salmon and Newmark published an article in which they applied a Yamaha PF-80 and a Macintosh Plus computer to analyze temporal and dynamic information of piano playing. They saw a potential use of this MIDI analysis methodology in assessing the impact of pianists' hand injuries. I published two studies (1989, 1990) that examined performances of particular technical examples by skilled pianists, in both cases using electronic keyboards and MIDI-connected personal computers. The 1989 study focused on examining the effects of four feedback modes in playing four different sequences of left-hand leaps. The skilled pianists' performances of large leaps between black and white keys were much better than their white-to-white key performances, with or without visual and audio feedback. Audio feedback was not of particular help, but visual feedback was very important in playing accurately and evenly. The 1990 study compared the physiological make-up of thirteen pianists with their dynamic and temporal control data in playing two technical examples. Of the eight biomechanical variables studied, wrist mobility and hand weight were the two physical features that influenced performance in playing the two sample techniques.
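The asynchrony measure used in these studies is easy to state concretely. The sketch below (my own illustration in Python, with invented timestamps; it is not the authors' actual analysis code) treats each chord as a list of MIDI note-on times in milliseconds and reports the spread between the earliest and latest attack.

```python
# Sketch: measuring chord-attack asynchrony from MIDI note-on times.
# All timestamps below are hypothetical, in milliseconds.

def attack_asynchrony(onsets_ms):
    """Spread between the earliest and latest note-on within one chord."""
    return max(onsets_ms) - min(onsets_ms)

def mean_asynchrony(chords):
    """Average attack asynchrony across a list of chords."""
    return sum(attack_asynchrony(c) for c in chords) / len(chords)

# Three hypothetical three-note chords from a "musical" performance:
musical = [[0.0, 12.0, 18.0], [500.0, 509.0, 521.0], [1000.0, 1015.0, 1017.0]]
print(mean_asynchrony(musical))  # average spread, about 18.7 ms for these data
```

With real data, the onset lists would be filled from recorded MIDI note-on events rather than entered by hand.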

More recently, George Moore (1992) published a study of trill performance using a MIDI-equipped Yamaha Disklavier. Thomas Brotz (1992) used a full-sized Roland HP-450 electronic piano connected to an Apple IIe computer running Roland MUSE software to study the timing accuracy and tempo of 60 children's performances. He found that as the playing tasks became more complex, timing accuracy became poorer. Another study, currently under review, examined the impact of physiological impairment on piano playing; the results could be used to verify physical damage.

These studies indicate that the availability of MIDI may be truly "serendipity" for performers, teachers, and researchers. Use of MIDI technology in piano research seems to have a threefold application: (1) learning objectively the nature of skilled performance and improving perceptual sensitivity to a smaller degree of deviation; (2) providing quantitative and visual feedback that can help in improving and teaching performance skills; and (3) detecting physiologically correct or incorrect, healthy or impaired movements by studying the impact of such movements on performance for pedagogical as well as medically diagnostic purposes.

As researchers learn more through MIDI research, a variety of questions will continue to emerge. The questions that I have grappled with range from philosophical to practical concerns. For example: "Can music performance be reduced to a computer algorithm?" "Can the computer measure differences between, say, the playing of Lisztian leggiero and Mozartian crisp passages?" "What about the use and effects of pedalling?" "How useful is the electronic piano in comparison with the acoustic piano?" I hope that this brief introduction to the use of MIDI for performance and performance research will invite other questions. Then perhaps we can explore some other possibilities.

References:

Bach, C. P. E. Essay on the True Art of Playing Keyboard Instruments. Translated by W. J. Mitchell from Versuch über die wahre Art das Clavier zu spielen, 1753. New York: W. W. Norton & Company, Inc., 1949.
Brotz, Thomas. "Key-finding, Fingering, and Timing in Piano Performance of Children." Psychology of Music 20 (1992): 42-56.
Gat, Joseph. The Technique of Piano Playing. Budapest: Corvina, 1958.
Lee, Sang-Hie. "Using the Personal Computer to Analyze Piano Performance." Psychomusicology 8:2 (1989): 143-149.
______. "Pianists' Hand Ergonomics and Touch Control." Medical Problems of Performing Artists 5 (1990): 72-78.
Moore, George P. "Piano Trills." Music Perception 9:3 (Spring 1992): 351-360.
Ortmann, Otto. The Physiological Mechanics of Piano Technique. London: Kegan Paul, Trench, Trubner and Co., Ltd., 1929; reprint, New York: EP Dutton and Co., Inc., 1962; reprint, New York: Da Capo Press, 1984.
Palmer, Caroline. "Mapping Musical Thought to Musical Performance." Journal of Experimental Psychology: Human Perception and Performance 15:12 (1989): 331-346.
______. "Computer Graphics in Music Performance Research." Behavior Research Methods, Instruments and Computers 21:2 (1989): 265-270.
Salmon, Paul, and Newmark, J. "Clinical Applications of MIDI Technology." Medical Problems of Performing Artists 4 (1989): 25-31.
Shaffer, L. H. "Performance of Chopin, Bach, and Bartok: Studies in Motor Programming." Cognitive Psychology 13 (1981): 326-376.
Wagner, C., Pointek, E., and Teckhaus, L. "Piano Learning and Programmed Instruction." Journal of Research in Music Education 21 (1973): 107-122.

 

Digital Music, Digital Stimuli
JOHN R. PIERCE

The most important thing about digital waveform generation, for music or for research on perception, is the reproducibility that it assures. Analog synthesizers drift out of adjustment. In digital synthesis, good or bad, useful or useless, what you put in determines what comes out. I was struck by this feature from the very first.

Sometime in 1956 Max Mathews and I attended an exasperating concert at Drew University in New Jersey. After the concert, one or the other of us said, "The computer can do better than this." Max wrote a compiler to produce musical sounds. On or about May 17, 1957, it was used to play In the Silver Scale, a composition by Newman Guttman. Later I heard a second piece by Guttman, Pitch Variations. I wondered whether the sounds were so strange because of the computer or because of Guttman. I wrote a short, conventional piece, Stochatta, which the computer played. The sound was different. What came out was affected only by what went in. I was struck by that observation.

In 1959 Mathews and Guttman presented a paper, "Generation of Music by a Digital Computer," at the third International Congress on Acoustics at Stuttgart; this appeared in the Proceedings, published by Elsevier. On October 5, 1959 I gave a talk entitled "The Computer as a Musical Instrument" at the New York Convention of the Audio Engineering Society. I followed this with a letter to the editor of the Journal of the Audio Engineering Society, published in the April, 1960 issue.

In the May, 1961 issue of the Bell System Technical Journal Max Mathews published an 18-page paper entitled "An Acoustic Compiler for Music and Psychological Stimuli." The inclusion of psychological stimuli in the title is important, for a good deal of the work done by Max and his colleagues was psychoacoustic in nature. Max published another detailed paper, "The Digital Computer as a Musical Instrument" in Science, November 1, 1963. These papers created a widespread interest in the generation of waveforms by digital computers, for musical or research purposes. Also of crucial importance was Max's book, The Technology of Computer Music, published by the MIT Press in 1969.

The work of John Chowning played a crucial part in the digital generation of musical sounds. Chowning first learned about Max's work in 1964, while he was a graduate student at Stanford. He visited Bell Labs. On returning to Stanford he managed to run a derivative of Max's Music IV program at Stanford. Since 1966 he has taught a computer music course at Stanford.

Chowning's invention of FM synthesis was crucial in the development of digital sound synthesis. It led to the production of the first all-digital keyboard, the Yamaha DX7, which reached the market in 1983 at about a tenth the price of other keyboard synthesizers. A flood of powerful and inexpensive digital keyboards and other digital synthesizing gear followed. The use of special chips and economical techniques made digital synthesis practical for almost any purpose. Together with that essential standard, MIDI, and the language MAX, digital synthesis came to be used by many composers and investigators who are not deeply versed in computer hardware or software.
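The core of Chowning's technique fits in a few lines. The sketch below (a minimal Python illustration, not drawn from Chowning's own work) phase-modulates a carrier sinusoid with a second sinusoid; the modulation index controls how much energy spreads into sidebands around the carrier, and an inharmonic carrier-to-modulator ratio yields bell-like spectra. The specific frequencies and index are my own example values.

```python
import math

def fm_tone(fc, fm, index, dur=0.5, sr=8000):
    """Simple Chowning-style FM: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)).

    fc: carrier frequency (Hz), fm: modulator frequency (Hz),
    index: modulation index I, dur: duration (s), sr: sample rate (Hz).
    Returns a list of samples in [-1, 1].
    """
    n = int(dur * sr)
    return [math.sin(2 * math.pi * fc * t / sr
                     + index * math.sin(2 * math.pi * fm * t / sr))
            for t in range(n)]

# A bell-like setting: inharmonic carrier/modulator ratio, moderate index.
samples = fm_tone(fc=200.0, fm=280.0, index=5.0)
```

In practice the index would also be shaped over time by an envelope, which is what gives FM tones their characteristic evolving spectra.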

Now, many years since 1957, when a computer was first used to synthesize complex sounds, digital synthesis has become an irreplaceable tool in both music making and psychoacoustic research. Investigators are no longer constrained to use sine waves, short pulses, or square waves as stimuli. They can generate tones that mimic the sounds of acoustical instruments and learn what the essential characteristics of timbre are. They can produce Shepard tones, which give an illusion of ever-rising or ever-descending pitch. Diana Deutsch has used the pitch ambiguities of Shepard tones in important investigations that I will not recount here. Others, including Jean-Claude Risset, have carried out crucial experiments in tracking down the properties of waveforms that are essential to known musical timbres.
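A static Shepard tone can be approximated by summing octave-spaced partials under a fixed spectral envelope. The sketch below (my own Python illustration; the raised-cosine envelope and all parameter values are assumptions, not taken from the text) weights each octave component so that the lowest and highest partials fade to silence, the property that sustains the circular-pitch illusion when the whole complex is glided.

```python
import math

def shepard_sample(t, base=55.0, octaves=8, sr=22050):
    """One sample of a static Shepard tone: octave-spaced partials
    weighted by a raised-cosine envelope over log-frequency."""
    s = 0.0
    for k in range(octaves):
        f = base * (2 ** k)                       # k-th partial, one octave apart
        pos = k / (octaves - 1)                   # position in log-frequency, 0..1
        amp = 0.5 * (1 - math.cos(2 * math.pi * pos))  # zero at both extremes
        s += amp * math.sin(2 * math.pi * f * t / sr)
    return s

tone = [shepard_sample(t) for t in range(22050)]  # one second at 22.05 kHz
```

To produce the ever-rising illusion, the base frequency would be glided upward while partials entering at the top fade in from silence and those leaving at the bottom fade out.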

Digital synthesis has allowed us to tailor sounds exactly to the aspects of perception that we wish to investigate, omitting one or more characteristics of natural sound in seeking crucial differences. This is all because the outputs of computers and digital synthesizers are repeatably and always just what the digital input specifies, and because mass production for musical purposes has made digital synthesis cheap and convenient.

 

As Technology Becomes Invisible
JON APPLETON

One of the great achievements of those pioneering composers who, forty years ago, presented their electroacoustic music in concert was their ability to express their love of music in a radically new medium. Whereas musique concrète was born from radio plays and elektronische Musik from a sound laboratory, our American colleagues were simply composers searching for new sonic resources made possible by technology. Otto Luening and Vladimir Ussachevsky did not seek to explain their work in terms of science but rather as an art. The same was true of other American electroacoustic pioneers like John Cage, Les Paul, and Louis and Bebe Barron. Because these individuals saw technology as a means to an end, the machines used to make their music were largely invisible to their audiences.

By the mid-1960s attitudes had changed. Charles Hamm has documented America's love affair with science and technology. With the development of integrated circuits, modular synthesizers, and the first methods for digital synthesis, many composers of electroacoustic music liked to think of themselves as scientists. For example, in 1968 Prof. Dr.-Ing. Fritz Winckel of the Lehrgebiet Kommunikationswissenschaft at the Technische Universität Berlin organized one of the first international "scientific" congresses for practitioners of electroacoustic music. In 1975 Princeton University inaugurated the Godfrey Winham Laboratory at its engineering school for experiments in computer music.

Eighteen years ago at Michigan State University, David Wessel organized the first Music Computation Conference which led, a year later, to the formation of the Computer Music Association (now the International Computer Music Association) and annual conferences which continue today. During the first decade, conferees spent most of their energy developing new hardware and software. Consequently there was pitifully little music of substance to be heard. I call this period "BATT" or "boys and their toys" because music was rarely discussed, boys endlessly compared the sizes of their machines and hardly a woman was heard from during these years.

It was in 1981, when I first performed music I had composed for a portable, real-time digital synthesizer I had helped develop, that I encountered the pseudo-scientific snobbery that used to characterize the computer music community. The Synclavier, the musical instrument in question, was a commercial product with an easy-to-use and versatile set of controls for modifying sounds and recording sequences. "Much too limited to be of interest to serious computer music musicians," I was told by composers who had themselves largely abandoned composition in favor of the search for ever larger tool boxes. Ultimately my belief in the beauty of electroacoustic music led me to an international community of like-minded composers, and together we formed the International Confederation for Electroacoustic Music, an International Music Council/UNESCO affiliate. In these circles, initially organized by the Groupe de Musique Experimentale de Bourges, the sharing of creative work, newly composed music, was the central purpose. Later, my experience in Europe led me to join Barry Schrader and others in forming the Society for ElectroAcoustic Music in the United States (SEAMUS), which in April [1993] will hold its ninth national meeting in Austin, Texas. When SEAMUS selected Otto Luening as the recipient of its annual award in 1990, I was suddenly made aware of how large and diverse the electroacoustic music community has become. Compact discs of electroacoustic music abound, there are many talented women now in the field, and, most notably, the development of new hardware is no longer the obsession it once was. Thanks to the availability of powerful but relatively inexpensive computers and intuitive languages like MAX, one can observe a shift in the computer music community to the discussion of musical ideas. Stephen Travis Pope, the new editor of the Computer Music Journal, has, among others, been behind this reformation.
At the 1992 International Computer Music Conference, held in San Jose, California, earlier this month, composer David Cope chaired a well-attended panel discussion on algorithmic composition which, for the first time in my memory, tried to address some of the underlying aesthetic issues that previous conferences had so studiously avoided.

Electroacoustic music deserves the distinction of being the first music in which the means of sound production were invisible. When I began composing electroacoustic music in 1963, I rejected the generally held opinion that this was a music for specialists. If audiences could see how this music was made, I reasoned, they would develop a taste for the new genre. In 1964, while living in Oregon, I asked the Rockefeller Foundation to support the development of a performance instrument resembling a Mellotron, in the belief that electroacoustic music needed live performance to reach a potentially larger audience. The proposal was not funded, and it was ten years before I found a sympathetic ear. Together with Sydney Alonso, I visited Max Mathews at the Bell Telephone Laboratories in 1974 and described to him my ideal machine, one that could perform the kind of musique concrète I was then composing. He urged us to start with synthesis by means of special-purpose hardware, which Alonso designed. The Synclavier, in its various manifestations, served the needs of composers and performers for over a decade. I wish I could say that my efforts with the Synclavier changed the course of electroacoustic music even in some small way. But there have been larger cultural forces reshaping our musical world. Three of these come to mind: (1) the retreat from the avant-garde, (2) the decline of the concert tradition in America, and (3) the unexpected success of electroacoustic music on compact disc.

Frankly, I do not believe that audiences care how electroacoustic music is made if it provides pleasure. Even the musical avant-garde has declared itself independent from the scientific community and now seeks to justify its existence as an extension of the jazz and rock community. With the exception of a small group of dedicated listeners in the art music world, most people prefer to hear the music they like on the radio or on compact disc. Except for spectacles like opera or rock concerts, which contain a theatrical element, concerts are disappearing. I cannot substantiate my claim that the compact disc has made electroacoustic music more popular. For the last quarter of a century I have spent many hours discussing the musical tastes of eighteen- to twenty-two-year-old college students. In the last two years I have been surprised by the catholicity of their musical tastes. For them technology is invisible because they cannot imagine listening to music for any reasons besides aesthetic or kinetic ones. My sanguine view is not shared by many of my colleagues. The dean of Swedish electroacoustic music composers, Lars-Gunnar Bodin, states in a forthcoming article that "the role of contemporary art music in the intellectual life of people hardly exists when compared to, for instance, literature, cinema and the visual arts." In this article Bodin goes "back to the mid-1960s to discover the reasons why contemporary art music has been so discredited in intellectual circles. During the Vietnam era," he states, "the ascendant views of the left shifted the focus of the intellectual community to the popular protest movement. All art music was seen to be elitist, and electroacoustic music in particular was suspect because it relied on tools developed by the military-industrial complex." Bodin believes that composers have themselves to blame for the current neglect of their music.
He believes that "contemporary art music has nothing to say to the audiences of the 1990s or perhaps it says the wrong things because it has been based on the threadbare paradigm that equates accomplishment in music to that of research in the natural sciences." Bodin cites chaos theory, neural networks and fractal algorithms as just the latest manifestations of this trend. The electroacoustic music scene in Sweden may indeed be as bleak as Bodin paints it, and this may be because the generous state support it has received over the last twenty years has tended to isolate its composers. In America no such largesse exists, and our own electroacoustic music community is divided somewhat differently. I think we can observe three groups who are separated not by musical style but by other factors. The first group consists of those interested in the experimental nature of the equipment and includes composers such as Alvin Lucier, Todd Machover, Larry Polansky and Laurie Spiegel. The second group consists of those interested in the experimental nature of sound, such as Eve Beglarian, Paul Lansky, Ingram Marshall, Pauline Oliveros and John Zorn. The final group uses technology for conventional musical applications; some examples are Wendy Carlos, Suzanne Ciani, Chick Corea, Philip Glass and Frank Zappa.

More than twenty years ago, in Elliott Schwartz's book Electronic Music: A Listener's Guide, Steve Reich declared that "electronic music as such will gradually die and be absorbed into the ongoing music of people singing and playing instruments." Perhaps we have already reached this stage, and today we should be celebrating both the appearance and disappearance of electroacoustic music. Or, like the technology on which it depends, is electroacoustic music all around us but invisible?


Technology, Commodity, and Power
ROGER JOHNSON

The 1952 performance of the tape music of Otto Luening and Vladimir Ussachevsky at the Museum of Modern Art has come to represent the beginning of a major shift in the relationship between music and technology, not just for vanguard electroacoustic music but for all aspects of music production and distribution.

The fact is that the vast majority of the music made and heard now is actually electroacoustic music. This condition is obviously part of a much larger social and cultural shift from print-based (and, in music, notation- and live performance-based) information and communication systems toward those increasingly determined by electronic media.

During the past 40 years we have seen enormous changes in the practice of music, in its tools and technologies, and in its social and cultural conditions. In 1952, Modernism was still the dominant ideology of the arts. Formalism, avant-gardism, experimentalism, internationalism, the "liberation of sound," the idea of progress, the search for artistic freedom and individualism were all still powerful motivators within the small but confident circle of those involved in new music. This music had become a new form of insular and elite "court music" with very little direct influence outside of itself. However, this was accepted and understood then as a transitional condition leading to eventual acceptance and even possible reunification with the mainstream tradition of 19th-century European concert music.

What we could call a musical class structure was firmly in place as well in 1952. Art music was serious, idealistic and important, even if few understood it. Elitism was a badge of honor; composers were almost all well-trained upper middle class white men with secure patronage in the academic world. Jazz, too, was in its own "golden age" but clearly separate and apart from art music, even if some understood it as a parallel artistic tradition of considerable significance. Of course, popular music was considered by this cultural elite to be trite, vulgar and commercial; folk, ethnic and vernacular musics were quaint and of interest only to subcultures and anthropologists.

Electronics, technology and the power of science and engineering were also enormously exciting at mid century. The atomic age was upon us, television was in its infancy, computers were just being created, the car was rapidly developing into a dominant American symbol; the scientific horizon seemed unlimited. One suspects that most listeners at the Museum of Modern Art in October of 1952, and in many subsequent performances of early electronic music throughout the world, were so caught up in the novelty of the sound and the urgent debates about art, technology and liberation vs. dehumanization that the music itself was not really heard. All new media initially exert this effect of calling considerable attention to themselves directly before they become familiar, ubiquitous and seemingly neutral and natural.

Now, forty years later, we find ourselves in this Post-modern period of enormous stylistic diversity and eclecticism. Most of the old canons of new music are gone; there is a much wider range of musical practice now and a much larger and more diverse group of those practicing it. Musical technologies have completely transformed the ways in which music is made, heard and understood. And these technologies are incredibly sophisticated, relatively inexpensive and readily accessible. But while this technological revolution has given all the vanguard media arts fantastic tools, the media culture which these tools have created has simultaneously marginalized and contained their voices and significance in the culture. By empowering a global music and broadcasting industry, these technologies have actually shaken the very foundations of the cultural hegemony of art music.

In order to approach any degree of understanding of the past 40 years of electroacoustic music, to have any hope of analyzing its present conditions or anticipating its future, it is necessary to take a broader and more inclusive look at the cultural realities of our time. For art and all cultural activity have always been part of a complex web of social, political and economic conditions. We must explore the technologies and media through which the music is made, heard and disseminated. This is of particular importance to the understanding of the sophisticated media arts (photography, film and video as well as electroacoustic music) since they share the same technologies with the commercial industries. These arts are both empowered and contained by these technologies and by the realities of this global entertainment industry. We must carefully examine old myths and ideologies, particularly those which represent dualistic thinking (art vs. entertainment, elitism vs. popularity, integrity vs. commercialism), and accept some new realities about what art making is really about.

This particular paper seeks to explore some of the present conditions of electroacoustic music and other media arts from the premise that art is, and always has been, ultimately about power and the processes of empowerment. Power here is understood on several levels: technological, cultural, economic and ideological. In this sense it follows up directly on some of the basic arguments presented by Jacques Attali in his book Noise: The Political Economy of Music (Attali 1977, 1985) and in the larger project often referred to as "cultural studies" which seeks to understand artistic work in more broadly social terms. More specifically there is the assumption that we must look at all art-making and understand that the "serious" electronic arts and the popular or commercial ones are not somehow opposites of each other but rather parallel and increasingly overlapping activities which can shed a great deal of light on each other. And on the deepest levels they are all shaped by the same cultural, social and economic conditions of our time. The evidence of an increasing openness to this kind of integrated understanding is seen in both the enormous diversity and eclecticism of artistic work in our time and in the growing interdisciplinary content of the critical discourse.

Technology and Media

Electronic technology itself was nothing new to music in 1952. In fact, music had originally become engaged with technology through broadcasting, recording and film, which had been affecting and transforming music for some time by that point. Radio broadcasting had significantly extended the range of music's reach by amplifying the otherwise local performance. It had begun as a direct extension of the power and authority of the concert hall. But by mid century, radio was becoming an important stage of its own. Real-time direct-to-disc recording had also had an obvious impact on music by 1952, conquering the immediacy of live performance by allowing sound itself to be stored and repeated. These technologies had established and empowered the core broadcasting and recording industries through their ability to commodify music directly and provide products with considerable commercial value. In 1952 the music industry was poised for its major expansion and the consequent transformation of music from a performing art to an electronic one.

Early radio and recording, however, went to considerable lengths to avoid seeming to usurp or challenge the primacy of live performance in music. Of course, the sound quality itself initially made that unlikely. Instead, these early media positioned themselves as "objective" and neutral observers but not creators of an event, essentially extending the role of listening and participation. The power of these media came, then, as an extension of the power of the original event which was usually from a culturally anointed space such as a concert hall, theater or other public arena. This strategy enabled the acceptance of early recording and broadcasting, even with poor sound, since it delivered the unquestionably significant event in an exciting and novel form.

In broadcasting, the role of the host or commentator as symbolic listener or observer was developed very early for the same reasons, and remains central to radio and television in our own time. In recording, the parallel effort was to capture the natural acoustics and perspective of the listener in a concert hall, again as a means to establish the authority of the medium as observer. Marshall McLuhan, in his book Understanding Media (McLuhan 1965), observed almost 30 years ago that the content of every new medium begins with that of the older or prior one. This is quite observable in early broadcasting and recording. This role was not only to naturalize the process and minimize the shock value but, more importantly, to represent the electronic event as an objective and authoritative transmission. Through these means the medium became powerful, seemingly natural and invisible all at the same time.

It was also through radio broadcasting, popular music, and even the telephone, that the power of electronic technology to represent its own authority directly was first discovered. Roosevelt's Fireside Chats, the crooners and swing bands of the 1930s and '40s, and so much popular music since then, all constructed a different kind of relationship with the listener. There was no longer a sense of big speech or big music in a big space with a big audience, but instead a direct and intimate line to each individual listener. The symbolic distance between speaker and listener was closing even as, or even perhaps because, the real distance, both literally and in terms of intervening technology, was increasing. In the visual media, photography and film did exactly the same thing, and more recently television and video technology have taken it further.

As technologies improved, they presented an ever more complete and accurate representation of sound with the gradual minimizing of noise. The effect was that the seeming gap between the originating event and the receiver or listener began to diminish. The medium seemed to minimize its own presence and bring the listener ever closer to the source. In fact, the history of electronic media has continued to be the gradual closing of this symbolic gap between the sender and receiver and the resulting sense of invisibility or inaudibility of the intervening technology. The better the technology, the more we ignore its presence, the more we feel we are able to receive a complete and seemingly unaltered message.

The electronic media, then, have gradually assumed the power and authority which had previously been centered in the various public arenas and institutions, concert halls, theaters and the press. Most importantly, they did this not in some direct and obviously authoritarian way, but by simply, and seemingly without effort, closing the gap and reconstructing the relationship between the information and the receiver. All the while, they became increasingly invisible and inaudible as technologies and media themselves. We know, however, from studies in film, communication, and now in music too, that this is a critical point at which understanding the effects of the medium is ever more important, since those effects are now more subtle yet more powerful.

What was new, then, about the 1952 concert was not that the sound was being reproduced via electronic means and played on loudspeakers (though this itself was unusual in a concert setting at that time) but that the music originated within the technology itself without reference to a prior live performance. It called attention to the technology. More importantly, it placed the creative source, and with it authorship and power, directly within the medium to a much greater degree. The timbres, the malleability of presence and the new sense of acoustic perspective must have seemed particularly striking, as they did throughout the first decade or so of electronic music. On one level this was a perfectly natural extension of this process of centering creative activity in the medium itself and bringing it into an ever more active role. After all, film had been making a similar transformation of live theater and literature for many years.

It is important to understand that the availability of audio tape and other related technology which launched early vanguard electroacoustic music in the 1950s, and indeed all subsequent innovations in musical technology, also empowered the producers of popular music. They understood the possibilities of multitrack recording sooner than art music composers did, and they were much less hung up on "realism" or "naturalism" in recording than classical engineers were. Les Paul understood the malleability and multiple perspective of electronic music as well as Edgard Varèse did. He simply put it to a completely different use. The past 40 years of popular music production and recording have been an enormously important arena in the development of electroacoustic music, arguably more so than what is left of the vanguard art music traditions. Many "serious" musicians have a difficult time hearing popular music objectively because of its relative barrenness in the pitch domain and the adolescent testosterone levels of its surface. But in the areas of timbre, synthetic signal and acoustic processing, and multiple-perspective audio imaging it has become enormously sophisticated. I think the best electroacoustic music now is that which comes from an understanding of this relationship and enthusiastically builds upon the strengths, traditions and innovations of both art music and popular music.

Recording and broadcasting had also begun to set in motion a powerful new kind of commodification and industrialization of music, creating musical objects which had a direct use value, and therefore a market value, far more immediate than a musical score or a concert ticket. As the quality of both media improved and as they became increasingly ubiquitous, it is no surprise that by the early to mid 1950s the recording-based music industry was in full ascendency. Television also usurped radio's broad-based communication role, and the latter was quickly enlisted to promote and market popular music. The rest is history and the global music industry we know today.

Recordings, and the broadcasting of recordings, also had a powerful effect on classical music and jazz. As their quality improved, and as more people heard the perfection and predictability of recorded music with the ease of a switch, recordings became texts themselves and even the reference for subsequent live performance. As such, they challenged the authority of live performance, tending to enshrine a particular "performance" in the past to almost mythic proportions. My earlier article, "Music and the Electronic Media" (Johnson 1991), explores in more detail the connection between recording and the rigidifying of classical music and pre-crossover jazz. As the recording gradually assumes authority over the live performance, the effect is to further place authority in the past. A particular performance of a score or a spontaneous solo by Charlie Parker, even when recorded quite informally, becomes a text now to be heard over and over, studied, dissected and even enshrined. While these products had relatively limited commercial value, they became important ideologically as official culture.

This situation of a rapidly expanding popular music industry and a rapidly rigidifying classical music world (both driven by recording) has left new and independent music, both live and recorded, increasingly marginalized. This new music belongs neither to the gilded concert halls of nostalgia nor to the commodified air waves of popular culture. It serves no larger social interests except on occasion as a testament to freedom or individualism. It wields no power. It sells no commodities. It stands for nothing, critiques nothing, is passionate about nothing. It speaks neither of a larger community nor even a subculture. So much work from the academic and art music traditions seems sophisticated but empty at the same time, expressing a kind of fragmented, often alienated, sometimes angry, sometimes ironic individualism. So in celebrating the anniversary of the 1952 concert we are also noting a clear and obvious decline, not so much in the amount of sophisticated work being done but in its almost total loss of significance, urgency and power in the larger culture.

Power

Ultimately this is an issue of power. Music has always been about power. Sound itself is power, and it was very clearly understood in that sense in traditional cultures. Empowerment is then represented in socially and economically determined sites from sacred sweat lodges or stone henges, to cathedrals and courts, to concert halls and universities, to ABC or MTV. A musical performance in Carnegie Hall is obviously much more powerful and significant than the same or even better performance on the street. Throughout its history all art has sought entry to empowered sites, and those in power have also sought to use the arts for their ability to represent, legitimize and extend power. In appropriating the power of music the early social institutions (church, court, military) knew exactly what they were doing, for music gave legitimacy, significance and power to them. Art, architecture, design, and theatre were all actively cultivated for the same reasons.

After the industrial revolution, power and patronage shifted from the landed aristocracy to industry, capital and to the emerging bourgeois class. Artists ceased being servants of power directly, and instead became entrepreneurs who sold their own services in this new marketplace. This phase corresponds directly to the emergence of the large public concert halls and opera houses, which were themselves sites of enterprise, centers of power and representations of the dominant ideology of capitalism. With modernism the artist's role shifts and becomes interpreted as the search for individualism and the private self. Art historian Carol Duncan has stated this shift most succinctly. "Art and discourse in the nineteenth century distorted and idealized the external world and celebrated it as Beauty. Modern art celebrates alienation from that world and idealizes it as Freedom." (Duncan 1983) The arts have always represented, legitimized and empowered the prevailing social ideology, which is often directly at odds with social reality, and in this sense have been very central to the maintenance of cultural hegemony.

Noise is power too, but is generally represented as negative, chaotic, dangerous, violent, antisocial and subversive, particularly when it comes, as it so often does almost by definition, from those marginalized from power but seeking to appropriate it. One of the projects of avant-garde music has been this ideological battle over noise, expressed as a search for freedom and "liberation of sound." Rock, Heavy Metal, various alternative popular musics, and Rap have all been deeply involved with the power of noise, though here it is even more clearly political and ideological. At the opposite pole, R. Murray Schafer, in his book The Tuning of the World (Schafer 1977), develops what he calls the "Sacred Noise," a particular form of socially sanctioned and empowered sound which is louder than all others and not subject to restrictions. Whether it was the church bell of the Middle Ages, the factory of the 19th Century, the airplane or even the mass media of our own time, the important feature is its direct representation of dominant power.

Like noise, technology also quite clearly has to do with power. All tools empower their user. A musician with a musical instrument, or even a good voice, has power. It is the power of sound itself and the power of the cultural traditions which the instrument or vocal tradition represents. Amplification and the power to make sounds electronically is also power, quite literally. A rock group with a stack of amps is obviously empowered, and its music is more audible over a larger distance. To tap into the electronic media, to be on radio or television, to record music are all extensions of amplification and of its power. And in these cases the power resides not just in the technology but more importantly in the social and economic powers of the medium.

Electronic media are clearly the empowered sites of our own time. They hold the power of communication and the control of information. They originate in the science, engineering and industry which are of course the very core of power in an industrial economy. Just as the arts of the past courted, and were courted by, the dominant powers of their own time, so today the electronic technology and media represent the central arena of significance of our own time. Therefore it is important to explore how this complex system of technologies and information systems impacts on music and how it brings into the foreground some new and interesting struggles, contradictions and paradoxes.

It is obvious that new technologies, audio tape, synthesizers, and digital sound systems have infused music with new resources. They have revolutionized our timbral material and our abilities to simulate, control and manipulate acoustical/environmental conditions. They have allowed the composer to work directly with sound rather than notational representations and to fully realize the compositional intention. The paradox is that electronic sound still seems to many to lack the richness and depth of acoustic sound, and fully realized compositions eliminate the spontaneity and drama of live performance. Of course, most of us now understand these to be different musical media, each with its own separate attributes, like theater and film or painting and photography.

On a more subtle level, electronic music has reconstructed or at least challenged musical autonomy and authorship. In traditional art music, the score is understood as an original creation of a composer, an idealization of a piece, and the live performance is understood as one of the many specific and unique readings of that score. The composer's authorship, followed by the performer's, leads to the uniqueness of each performance. Interestingly, in classical music with its fixed and permanent repertory and cult of virtuosity, the only real arena of authorship left now is the performer's, this despite the lip service paid to the score, the "composer's intention" or historical accuracy. In jazz and other improvised or more indeterminate pieces, the moment of performance contains an even greater degree of real authorship. In the case of recording, the process is aborted by the repetition and permanence of the performance. In electroacoustic music, whether vanguard or popular, the process is further imploded since there is no gap at all between the composition and the one and only "performance." In fact, it makes no more sense in this case to speak of composition and performance than it does in the case of the writer or painter.

There are some interesting issues concerning autonomy and authorship with technology. Western culture has a great deal of investment in individualism and particularly in the idealization of the autonomous artist. As we have seen in the past, composition and performance actually represented shared authorship, though we have greatly romanticized both the composer and virtuoso performer. Musical traditions, musical "language," and musical instruments, all products of culture, have always shared significantly in musical authorship. Where would any artist be without a language to speak and means to speak it? Where would Bach be without the organ, or Chopin without the piano, Beethoven, Wagner or Stravinsky without the orchestra? Where would any composer be without a sophisticated musical language, sufficient technology and a musical culture?

The new musical technologies, software and networkings of our own time both empower the composer and play a significantly determining role in the music produced. Electroacoustic music is very dependent on its hardware: synthesizers, samplers, computers, and recording technology. It is also more and more determined by its software. In fact, all technology is increasingly soft, informational, digital, representational, simulational, is it not? Our images and metaphors of it are now less mechanical and physical, more organic, biological, mental and psychological. This is a domain of language, of thought, of sensation, of process. The products of these systems are also increasingly less individual, less separate, less fixed, less permanent, more collaborative, malleable, flexible, interactive and situational. Isn't the rhetoric of the great individual artist and the permanent artistic document being eroded before our eyes and ears? Just as freedom was the myth of Modernism, perhaps autonomy is closer to the myth now.

New technologies and media also create new networks and empower new and different people, as they spread out and stimulate creativity. Some of the composers who work in electroacoustic music have come to it through traditional art music, but many more have not. I recall in the late 1950s and '60s being struck by how many more women there were engaged in electroacoustic music than in the more established forms of traditional composition. This came in direct conflict with prevailing gender stereotypes about women and technology. But they were there precisely because of this empowerment.

Popular music and the music industry have also brought many people into creative musical and technological activity who share none of the traditions and training of art music. This process has accelerated greatly in recent years and on a global scale, allowing many previously excluded peoples and cultures to have a voice and an audience. Of course this process invites considerable manipulation and enterprise, but even here these players are as often as not new to the process too, and bring fresh ideas and new meanings into the marketplace just as the artists do.

Of course, in popular music the powers of technologies and media are ultimately commercial powers, the power to create commodities, markets and the whole industry. At the deepest level this is an ideological power, the power to communicate, the power to control the messages, the power to represent and encode, and at the same time often mask, dominant interests and values. Given the powerful and hierarchical industrial structure of popular music, it is interesting to look at the actual content of the product, its music, its lyrics and its personas. Instead of celebrating its power and homogeneity directly, the products are presented very individualistically: iconoclastic personas, individual dramas, outsiders, rebels, glamorous and exciting people singing of passions and private lives, of hopes and fears, of lonely journeys, of return to mythic communities. Of course, this is romanticism in its purest form, remarkably reminiscent of the idealizations of the later 19th Century, as Robert Pattison has ably developed in his book The Triumph of Vulgarity (Pattison 1987).

The important point here is that the power of technology, of the industry and of the medium functions as a stage for playing out some basic myths of freedom and individualism together with those of culture and community. In many cases this drama has become rather frantic as these myths and realities increasingly clash. There is also considerable critical debate now about the degree to which the industry and the dominant powers can even control the content, the messages and the meanings of its products. Disempowered peoples, particularly minority groups, often reinterpret the signs and codes of cultural commodities for their own interests, turning the cultural and ideological content of these products completely around. In this sense, technology represents a power and authority to be engaged, humanized, reconstructed, challenged, infiltrated or disempowered. Some media artists and consumers become like hackers invading and manipulating the systems, challenging authority, seizing power. Perhaps this is the Achilles heel of electronic and digital information, that it can be so readily infiltrated, remade, reused, recombined, reinterpreted, and appropriated, that its authority is invariably less fixed.

To me, this tendency toward decentralization, even in the face of the enormous concentrations of power of the entertainment and broadcasting industries, is an exciting and optimistic sign. Computers, informational networks and databases have also shown the same tendency. In some circles there is an argument that technology and informational systems invariably decentralize and challenge the very powers that created them. This has a romantic and utopian ring to it, but there is much evidence of diversification and multiplicity in our culture at this point. The enormous stylistic and aesthetic range of all the arts now, the empowerment of independent media, and the new critical debates around culture and "multiculturalism" are all signs of this same phenomenon.

For the media arts and electroacoustic music, this broad-based tendency toward decentralization suggests that an interesting and important avenue of work, one "cutting edge," if you will, is to explore these new channels, to push them open and keep them open, to challenge and infiltrate the dominant systems, ideas and myths. New opportunities, new networks and new audiences exist through which music with independence and integrity can survive and even thrive in our culture. We should not assume that an elite art music, romantic avant-gardism or any other past model will somehow gain dominance again. Neither should we expect some great golden age of artistic freedom and idealism. We should resist the distinction between elitist high art and the more popular forms, and instead try to empower a whole range of styles and practices. The opportunity is here to build upon the power of diversity and multiplicity at large in our culture. The same opportunity is here to keep the doors open and the fires burning, to resist closure and dominance, to find new resources, new means, new technologies, new possibilities. This is what Jacques Attali means by "composition" as the emerging phase of musical evolution. The signs are there; the possibilities exist. Just how they can be realized and by whom, what forms they take, what new musics emerge and what new meanings they have all remain to be heard.

References

Attali, Jacques. Noise: The Political Economy of Music. Translated by Brian Massumi. Minneapolis: University of Minnesota Press, 1985.

Duncan, Carol. "Who Rules the Art World?" Socialist Review (July/August, 1983): 99-119.

Johnson, Roger. "Music and the Electronic Media." Computer Music Journal (Summer, 1991): 12-20.

McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1965.

Pattison, Robert. The Triumph of Vulgarity: Rock Music in the Mirror of Romanticism. Oxford: Oxford University Press, 1987.

Schafer, R. Murray. The Tuning of the World. New York: Alfred A. Knopf, 1977.