Jonathan D. Kramer
Technology is ubiquitous. Thus it is hardly surprising that it has had a profound influence on the art of music in the twentieth century. It has altered how music is transmitted, preserved, heard, performed, and composed. Less and less often do we hear musical sound that has not at some level been shaped by technology: technology is involved in the reinforcement of concert halls, the recording and broadcast of music, and the design and construction of musical instruments. Many church organs, for example, now use synthesized or sampled sounds rather than actual pipes; instruments are now available that have what look like piano keyboards and make what sound like piano timbres, but which are actually dedicated digital synthesizers; virtuoso performers whose instrument is the turntable are now part of not only the world of disco but also the world of concert music (John Zorn, for example, has written a piece for voice, string quartet, and turntables).1 Technology is changing the essence of music, although many musicians still do not appreciate the extent of its influence.
Technology came to music with the advent of recordings. Thomas Edison invented a crude cylinder phonograph in 1877. By the end of the nineteenth century, companies in the United States and England were manufacturing disc recordings of music. Prior to recordings, home consumption of all music-whether composed for keyboard or not-was by means of private piano performance. The possibility of preserving musical performances by recording utterly changed the social and artistic meanings of music. The invention of the tape recorder a half century later made sonorities not only reproducible but also alterable. The resulting techniques allowed recorded sounds to be fragmented, combined, distorted, etc. Such manipulations could affect not only sound qualities but also timespans. By changing recording speeds, for example, a composer of musique concrète could compress a Beethoven symphony into a single second or make a word last an hour.
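The arithmetic behind such speed changes is simple: on tape, playback speed scales pitch and duration together. A minimal sketch of that coupling (the figures are illustrative, not drawn from any particular piece):

```python
import math

def semitone_shift(speed_ratio):
    """Pitch shift, in equal-tempered semitones, produced by playing
    a recording back at speed_ratio times its original speed."""
    return 12 * math.log2(speed_ratio)

# Doubling the speed halves the duration and raises pitch an octave:
print(semitone_shift(2.0))   # 12.0 semitones

# Squeezing, say, 40 minutes of music into one second is a ratio of 2400,
# raising every pitch by more than eleven octaves:
print(round(semitone_shift(2400), 1))
```

Any transposition thus implies a proportional change of duration, and vice versa; only later digital techniques would decouple the two.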
Consider Hal Freedman's composition Ring Precis, from the mid '70s. Freedman has taken a recording of the entire Ring cycle of Wagner-some eighteen hours of music-and arbitrarily cut it up into three-minute segments, all of which are played simultaneously.2 The resulting sound is, doubtless, utterly unlike anything you have ever heard, but I am more interested in the temporal implications of Freedman's compositional procedure. He has compressed by superimposition eighteen hours into three minutes, and thereby created a new piece out of an old one.
An earlier example illustrates the new sound and time worlds opened up by electronic technology. The composition U 47, written in 1960 by Jean Baronnet and François Dufrêne,3 is one of several compositions that use as their sole sound source a brief spoken utterance. In this case a voice says, in French, "U 47." The composers have fragmented this sound, so that we hear the tiniest segments of the spoken text. Occasionally the voice is elongated. As in Ring Precis, the sounds are fascinating, but so is the idea of isolating in time brief instants of speech.
Today, because of electronic technology, we listen to unaltered music only rarely. The sounds we hear have been not only performed by musicians but also interpreted by audio engineers, who have reinforced the acoustics of concert halls, spliced together note-perfect recorded performances, created artificially reverberant performance spaces, projected sounds across the world via satellite broadcast, greatly amplified rock concerts, and created temporal continuities that never existed "live." The audio engineer is almost as highly trained as the concert performer, and can be just as sensitive an artist.
Recording technology has forced us to reconsider what constitutes a piece of music. It is unreasonable to claim that the printed score represents the musical sounds. The score usually gives no indication of how the audio engineer should manipulate his/her variables. Two differently mixed, equalized, and reverberated recordings of the same performance can contrast as much as two different performances of the same work.
We might think conservatively of recordings as means to preserve performances, but recordings are far more than that. They are art works themselves, not simply reproductions. Thus people who buy records and cassettes rightly speak of owning the music. "Vivaldi's Mandolin Concerto is yours for only $1.00," says an advertisement for a record club.
Here are a few more examples that demonstrate the extent to which audio engineering can create (rather than simply preserve) musical continuities:
According to music theorist Walter Everett, when the Beatles recorded their song "Strawberry Fields Forever,"
two versions were done. It was originally scored for the Beatles and flutes, and recorded in the key of A at a tempo of about 92 beats per minute. After listening to the lacquers, Lennon decided it sounded 'too heavy' and wanted it rescored and performed faster. A second version, with trumpets and cellos, was recorded in the key of B-flat at about 102 beats per minute. Lennon liked the beginning of the first version and the ending of the second, and asked the engineer to splice them together. When the speeds of both tapes were adjusted to match the pitch, the tempos of both were fortuitously the same at 96 beats per minute. The two portions were edited together. . . . This procedure gives Lennon's vocals an unreal, dreamlike timbre, especially in the second, slowed-down portion of the song.4
Here is another example: I have been told of a rock record made by the following unusual procedure: first the solo musicians were recorded as they improvised; then an arranger studied the taped improvisations and composed an instrumental accompaniment, which contained direct references to the recorded music. Interestingly, this material appears in the accompaniment before it does in the improvised solos. We listen to a paradox: the soloists seem to improvise spontaneously music that we have just heard! Furthermore, the record consists of a composed accompaniment that fits the improvised solos too well to have taken place in live performance.
A recording was released a few years ago of George Gershwin's Rhapsody in Blue (1924). The composer is piano soloist and Michael Tilson Thomas conducts the orchestra.5 What is odd is that Thomas was born seven years after Gershwin died! Gershwin had recorded the piano solo, and Thomas conducted the jazz band to coordinate exactly with the solo recording, which he monitored through headphones. The performance is somewhat strained, since the soloist never reacts to the ensemble, but the aesthetic behind the recording is fascinating. Technology has created a collaboration between two artists who could never have known each other.
In the song Another One Bites the Dust, as recorded by the rock group Queen, there is a specific verbal message which can be heard only when the recording is played backwards.6 Part of the title line, which is sung repeatedly throughout the song, comes out backwards as "marijuana." This phenomenon depends on the particular pronunciation of "Another one bites the dust." What we have is a hidden message, known only to initiates, which is embedded within the music by means of a quirk of technology. There is even a clue that we should listen backwards: certain musical sounds are recorded backward. For example, we sometimes hear first a gradual crescendo, then a sudden cut-off-the reverse of a sharp attack followed by a gradual decay.
Actually, the use of backward recording in rock music to embed hidden messages or to create special sounds was apparently quite prevalent for a time. Some rock fans were forever listening to their tapes backwards in search of camouflaged meaning.
At their inception, these examples represented extraordinary expansions or even redefinitions of the musical art. But they pale in comparison with the potentials and challenges of sampling. Sampling is the digital recording of sound, which-once it exists in computer memory-is capable of all manner of appropriation and manipulation. By means of sampling any found sound can be incorporated into a performance, composition, or recording. This was true of musique concrète as well, but the flexibility and excellent sound quality of sampling are making its use far more extensive-and, in the view of many, insidious-than the use of prerecorded tape.
Sampling has been prevalent in the world of rap music since around 1987. The inclusion of sampled material into rap songs has been called "ancestor worship." There is an excellent article on the subject in the May 1992 newsletter of the Institute for Studies in American Music.7 Author David Sanjek points to the far-reaching aesthetic and legal ramifications of using sampling to appropriate other music. Sampling artists engage in what is called "versioning," which Sanjek defines as "the practice of taking a given recording and using its constituent elements to create new material." Thus sound becomes democratic. Although copyright lawyers as well as ASCAP and BMI may object, the time may be upon us when no one can own a sound, although the courts are not likely to reach such an opinion without a careful recasting of existing laws.
Most musicians using sampling have played it safe, legally speaking, by distorting their sound bites or keeping them brief, and by not crediting their sources. Some people estimate that the majority of pop music recordings released today include some sort of sampling. Canadian composer/producer John Oswald, on the other hand, has made sampling not a surreptitious activity to enhance recorded sound but his artistic credo. Oswald's use of sampled sounds has created massive legal difficulties. In 1989 he produced a compact disc called Plunderphonics, which includes 24 "revisions," as he calls them, of well-known works, many protected by copyright.8 Although the alterations are imaginative and artistic, the sources are recognizable, because the quotations are quite long and because the sources are from mainstream popular and classical music-the Beatles, Michael Jackson, Captain Beefheart, Bing Crosby, Elvis Presley, James Brown, Franz Liszt, Count Basie, Stravinsky, Beethoven, Bach, Over the Rainbow, and Raindrops Keep Falling on My Head.
Oswald made no attempt to cover his tracks, so to speak. Quite the opposite. He carefully identified every source and every technique of sound manipulation. He furthermore made no profit from the project. He produced a thousand CDs-probably the number of copies of the originals that sell in one day. He began to distribute them free of charge to libraries, radio stations, and the press. The CD carries a notice that the music may be played for anyone and copied by anyone, but not sold. Perhaps predictably, the Canadian Recording Industry Association took a dim view of Oswald's versioning, labeling it copyright violation and theft. After some complicated legal confrontations, Oswald was forced to stop distributing the CDs and to destroy his remaining three hundred copies.
These examples of manipulated recordings and sampling show that recording does more than preserve. In each case a temporal continuum is created that could exist only by recording. Thus records, tapes, CDs, and sampling prove what social critic Walter Benjamin realized back in the 1930s: wholesale mechanical reproduction inevitably changes the nature of art. He wrote:
For the first time in world history, mechanical reproduction emancipates a work of art from its parasitical dependence on ritual. To an ever greater degree the work of art reproduced becomes the work of art designed for reproducibility. From a photographic negative, for example, one can make any number of prints; to ask for the "authentic" print makes no sense. But the instant the criterion of authenticity ceases to be applicable to artistic production, the total function of art is reversed.9
Recording and broadcasting remove music from the concert ritual. Today there are many viable places to hear music besides the concert hall-lounging in the living room, driving in the car, jogging in the park, or picnicking at the beach. Ambient sounds mingle freely with those emanating from the transistor radio or "boom box," to the apparent delight of the listeners. Many composers may still create progressions that define a movement through time from beginning to end, but listeners are no longer slaves to a concert ritual that perpetuates closure. Everyone spins the dial. Technology has liberated listeners from the completeness of musical form. Is it any surprise that some recent composers have cultivated aesthetics that avoid clear-cut beginnings and endings, that they have written music more like a mosaic of loosely connected events than an ongoing progression through time? Such new approaches to musical time are consonant with listeners' abilities to choose for themselves the boundaries of their listening spans. Composers who continue to ignore this fact are in some ways behind their times.
Listeners select not only where they start and stop listening to a piece of music but also in what order they hear its sections. As recordings have progressed from tapes and records to compact discs, random access by listeners has become easier. Even in the early days of recording, however, the ordering of events in recorded music was more arbitrary than in live performance. At first multiple-record 78-rpm recordings were issued in two versions, one for record changers and the other for manual players. A manual-play set of three records would have sides 1 and 2 on the first disc, sides 3 and 4 on the second record, and sides 5 and 6 on the third. An automatic-play version of the same music would have sides 1 and 6 on one disc, sides 2 and 5 on the next, and sides 3 and 4 on the third. Sometimes people would be unable to obtain the correct set and might end up listening to standard repertoire with sections forever in a scrambled order.
Today, most compact disc players can be programmed to play the selections on a CD in any order; some are able to select the order randomly. This process may seem innocent enough when we consider a disc to be a collection of short pieces, but sometimes a CD contains one long composition. For some audio artists the distinction between a CD and a piece of music is meaningless.
In his article on rap music, David Sanjek speculates that soon listeners will have sufficiently sophisticated random access to be able to mix their own versions of prerecorded raw material.10 Sampling technology may thereby succeed in realizing a goal of the experimental avant garde of the 1960s: the breakdown of the distinction between composer and listener, between creator and user.
* * *
Even before audio technology became a sophisticated art, it had a profound impact on musical structure. It is no coincidence that at the same time that music began to be recorded, composers began to reduce drastically the redundancy in their works. Pre-twentieth-century music is filled with repetitions and returns. The intensity in much early twentieth-century music comes from the lack of repetition: Schoenberg's Erwartung is an extreme example, in terms of both intensity and lack of overt repetition.
It is as if composers realized subconsciously that their music would be recorded and thus available to listeners for repeated hearings. As composer R. Murray Schafer has remarked, "The recapitulation was on the disc."11 Music in the early decades of this century became considerably more complex than it had ever been before, and the trend towards ever greater complexities has continued to the present (with notable exceptions, to be sure). The density of information in music has increased exponentially. Gestures have been composed that are so compressed that they can be fully understood only after several hearings, and repeated listenings are possible once the music is recorded.
There has been a reaction to the tyranny of repeated hearings. Many composers have structured their works so that each performance is different. For example, they may give performers a series of fragments to be played in random order. This open approach to form celebrates what recording manages to destroy-the uniqueness of every moment in time. Individual realizations of such music do get recorded, in apparent contradiction of their very meaning, and thus they are inevitably heard again and again. Composer Karlheinz Stockhausen once compared the recording of one version of an open form to a photograph of a bird in flight.12 We understand the picture as showing but one of a multitude of shapes the bird may take. But which is the art work, the bird or the photograph? And which is the composition we are hearing, the abstract open form that we might intuit with the aid of score or program notes, or the fixed, carefully engineered recording?
The ubiquity of recordings has influenced performers as well as composers. Performers routinely learn pieces not just from score but from recordings. I know some major conductors who always learn new pieces from tapes, if they are available. The conductor who refuses to listen to a recording until he/she knows the piece is a phenomenon so rare as to inspire respect bordering on awe. Is it any surprise that performances tend to become standardized, that performances imitate other performances, that many performers subconsciously seek to perpetuate what they view as a definitive performance by a revered master? I was both amused and disheartened a few years ago when I walked into the library of a noted conservatory and saw several young people standing, holding batons, wearing headphones, and vigorously "conducting" the music they were listening to. Who was leading whom?
* * *
Not only did tape recording bring to the audio engineer the ability to splice together artificial continuities, but it also brought to musique concrète and synthesizer composers the possibility of working directly with sound materials. From the simple act of putting razor blade to tape came the most powerful musical discontinuities as well as the most unexpected kinds of continuities. A composition can move instantaneously from one sound-world to another. Just when a splice might occur can be as unpredictable as the nature of the new context into which the listener is thrust.
Today, splicing is done electronically, with far greater sophistication and flexibility than previously imaginable. As a result, recording and performance are diverging into two separate art forms. When we listen to a fine live performance, we get caught up in the sweep of the experience. If we subsequently hear a recording of the same performance, we may be disappointed, because the excitement of live performance-partly visual and partly visceral-cannot be captured on audio tape. Furthermore, if there are a few wrong notes or rhythms in a live performance, who cares? But even a small number of clinkers on a recording-which will be heard again and again, in a more detached way than concert listening-can be maddening. Thus recorded performances seek perfection, while live performances seek immediacy.
The late pianist Glenn Gould retired from the concert stage at a young age in order to work exclusively in the recording studio. He was reputed to have spent only about 10% of his studio time at the keyboard. The remaining time he listened, edited, supervised splicing, etc. His editing was as creative an activity as his playing, and the results indicate that he was after more than note-perfect performances. His recordings have an integrity and a drive that one might not have thought possible to create "artificially." But they are not documents of live performances. They are Gould's legacy, just as surely as Bach's manuscripts are that composer's testament.
Gould did not live into the age of digital editing. The sophistication now available to the recording editor was little more than a dream a decade ago. But now note-perfect recordings are commonplace. Well-done digital editing is very difficult to detect by ear. As a result, audition tapes are becoming a thing of the past. I do not know what orchestras and conservatory admissions committees are doing about the widespread use of digital editing, but I do know that anyone who has the money to spend on studio time can submit a flawless, virtuosic audition tape.
A short while ago, a composer wrote a very difficult chamber piece. After rehearsing for several weeks, an excellent group in Boston was unable to get it much faster than half the indicated tempo. The composer was not perturbed. He had the group record it at half tempo, brought it into a digital editing studio, and emerged with a recording up to tempo. The sophisticated software he used was able to double the speed without affecting pitch, timbre, attack and decay times, or even vibrato rate. The composer then sent the tape and score, without any explanation of how the recording had been made, to an excellent ensemble in New York. This group worked for a long time on the piece, but got quite discouraged at their inability to match the Bostonians' performance. When they learned how the tape had been made, they were not too pleased with the composer.
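The essay does not identify the software the composer used, but the family of techniques behind speed change without pitch change (overlap-add, phase vocoding, PSOLA) can be sketched. The toy overlap-add stretcher below, written for this discussion and far cruder than any commercial tool, halves the duration of a synthetic sine tone while leaving its frequency roughly untouched:

```python
import math

def ola_time_stretch(x, rate, frame=512, hop=128):
    """Naive overlap-add (OLA) time stretch: rate=2.0 halves the
    duration while leaving pitch roughly intact, unlike simply
    playing the samples back faster."""
    # Hann window for smooth cross-fades between overlapping frames
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame) for n in range(frame)]
    out_len = int(len(x) / rate) + frame
    out = [0.0] * out_len
    norm = [0.0] * out_len
    syn_hop = int(hop / rate)  # read every `hop` samples, write every `hop/rate`
    pos = out_pos = 0
    while pos + frame <= len(x):
        for n in range(frame):
            out[out_pos + n] += x[pos + n] * win[n]
            norm[out_pos + n] += win[n]
        pos += hop
        out_pos += syn_hop
    end = out_pos - syn_hop + frame  # last sample actually written
    return [o / w if w > 1e-9 else 0.0 for o, w in zip(out[:end], norm[:end])]

# One second of a 440 Hz sine at an 8 kHz sampling rate:
sr = 8000
x = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
y = ola_time_stretch(x, rate=2.0)  # about half as long, same frequency
```

Real tools align the phases of overlapping frames (the phase vocoder) to avoid the modulation artifacts this naive version introduces; the principle of decoupling duration from pitch is the same.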
We may think of what this composer did as a dirty trick. But his technique was not all that different from the compositional methods of Warren Burt, an American living in Australia. Burt used sampling and digital editing to create a recording of an orchestra playing as no human ensemble ever could. His Samples III is, in a sense, the obverse of a traditional recording. Instead of using recording technology to preserve a performance, he used it to create a computerized artwork, the source materials for which were recorded orchestral sounds he had composed. The finished work is not an orchestral performance but a recording based on sampling and manipulation-including altering pitch and/or speed, making canons, microtonal detuning, loops, improvised overlays, etc.
A couple of years ago, I had a personal experience with digital editing that was most instructive. The London Philharmonic recorded one of my orchestral pieces.13 This is a wonderful orchestra, but the music was very difficult, and a three-hour rehearsal plus a two-hour recording session were not sufficient to produce a usable tape. With digital editing, my engineer was able to clear up ragged attacks, change balances, correct accentuation, create sudden splices and gradual cross-fades, and even remove the noises of page turns.
Although splicing technology has become extremely sophisticated in the digital era, its creative potential has been known for generations. Even before the invention of tape recording, the film medium used splicing for many years. Montage techniques originated in Russian and American films in the second decade of this century. By 1922 Soviet filmmaker Lev Kuleshov was conducting careful experiments into the rhythmic effects of film splicing.14 He studied the potentials of discontinuity and implied continuity in both fast cutting (influenced by the American films of D. W. Griffith and others) and slow cutting (with which Russian filmmakers had been working). Kuleshov's experiments and theories had a direct impact on Sergei Eisenstein, whose first film-Strike (1924)-contains many splices. As further technologies and new aesthetic extremes developed in subsequent decades, newer degrees and types of discontinuity became available, not only in film and music but also in drama, literature, and popular culture.
Discontinuity has affected the temporal texture of every Westerner's life. Consider one example: broadcasting. Radio stations present montages of advertisements, announcements, news, weather, sports, features, traffic reports, and music. Television can be equally discontinuous. In a flash viewers are transported from an animated fantasy world to on-the-spot coverage of a real war in a distant land, or from the artificial (but does that word mean anything today?) world of a quiz game to the laundry room of the Typical American Housewife. And think of children who grow up watching 15,000 hours of television between the ages of two and eleven.15 Consider the program "Sesame Street," a major formative influence on young children in the United States: extreme discontinuities, as one short scene leads without transition or logic to a totally different short scene. Watching "Sesame Street" is not unlike listening to the most heavily spliced tape music.
The discontinuities of the media can be extreme. Yet they are readily accepted, to the extent that we have become all but immune to their power. Cultural critic O. B. Hardison, Jr., invokes MTV to exemplify the recent increase of discontinuity:
When Jean Cocteau used abrupt discontinuities in his surrealist film Orpheus, the art world was enchanted. How advanced, how outrageous! The discontinuities of Orpheus are trivial compared to the discontinuities accepted as the normal mode of television by TV aficionados of the developed world. The psychoanalytic surrealism of The Cabinet of Dr. Caligari or of Ingmar Bergman's Wild Strawberries is timid compared to the surrealism that teenagers ingest as a daily diet from musical videos, to say nothing of the spectacular happenings that have become standard fare at concerts by popular entertainers like Michael Jackson or Kiss or Madonna.16
Literary critic N. Katherine Hayles, like Hardison, sees the discontinuities of music videos as essentially postmodern. Her ideas can with few changes be adapted to postmodern music, whether technologically composed or not.
Turn it on. What do you see? Perhaps demon-like creatures dancing; then a cut to cows grazing in a meadow, in the midst of which a singer with blue hair suddenly appears; then another cut to cars engulfed in flames. In such videos, the images and medium collaborate to create a technological demonstration that any text can be embedded in any context. What are these videos telling us, if not that the disappearance of a stable, universal context is the context for postmodern culture?17
Because of the splice, electronic music has confronted in direct and unambiguous ways some of the challenges laid down by twentieth-century aesthetics. The precision implied by so-called "rhythm serialism," for example, becomes real when duration is produced by computer calculation or by measuring lengths of tape rather than by instructing performers about tempos and numbers of beats. Electronic technology has made duration an absolute in a far more precise way than serialism ever could. In the 1950s composers realized that recording technology spatializes time in a literal way: 7-1/2 inches of tape equals one second of sound. It does not matter how much or little activity that second contains, nor does it matter whether it seems to be a long or short second. Its literal duration is measurable along a spatial dimension. Thus splicing techniques not only affect continuity but also allow for the composition of absolute durations independent of the music that fills them. Even in the absence of splices, technology favors certain absolute durations. Familiar to composers of tape music is the time interval created by tape head echo-the amount of time it takes for a tape to move from the record head to the playback head of a tape recorder. This timespan is an integral part of Terry Riley's Poppy Nogood and the Phantom Band of 1970, for example.18
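The head-echo interval follows directly from tape geometry: the delay is simply the record-to-playback head spacing divided by tape speed. A quick sketch, assuming a hypothetical 2-inch head spacing (actual spacings vary from machine to machine):

```python
def head_echo_delay(gap_inches, ips=7.5):
    """Delay in seconds between the record and playback heads of an
    analog tape machine running at `ips` inches per second."""
    return gap_inches / ips

# With an assumed 2-inch head spacing:
print(round(head_echo_delay(2.0), 3))             # 0.267 seconds at 7.5 ips
print(round(head_echo_delay(2.0, ips=15.0), 3))   # 0.133 - halved at 15 ips
```

The delay is thus a fixed property of the equipment, not a choice of the composer, which is exactly why it tends to stamp a characteristic pulse on music made with it.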
A similar effect, but with a longer delay, comes from the use of tape loops, as in Steve Reich's 1966 musique concrète composition Come Out.
The emphasis on absolute rather than experiential time in electronic music might strike a traditional musician as odd or even dehumanized. But music born of technology demands its own vocabulary and syntax. It demands methods and results appropriate to its equipment. Writing near the beginning of the era of electronic music, Pierre Boulez foresaw the potential of the electronic medium to control absolute durations with superhuman precision:
Compared with the capacity of the performer, the machine can, at once, do very little and very much; a calculable precision is opposed to an imprecision which cannot be absolutely notated. . . . The composer can avail himself of any duration, whether or not it is playable by human interpreters, merely by cutting the tape length which corresponds to the duration.19
Some time later, Charles Wuorinen echoed these ideas when he wrote about his 1969 electronic work Time's Encomium.
In performed music rhythm is largely a qualitative, or accentual, matter. Lengths of events are not the only determinants of their significance; the cultivated performer interprets the structure to find out its significance; then he stresses events he judges important. Thus, for good or ill, every performance involves qualitative additions to what the composer has specified; and all composers, aware or unaware, assume these inflections as a resource for making their works sound coherent. But in a purely electronic work like Time's Encomium, these resources are absent. What could take their place? In my view, only the precise temporal control that, perhaps beyond anything else, characterizes the electronic medium. By composing with a view to the proportions among absolute lengths of events-be they small (note-to-note distances) or large (overall form)-rather than to their relative "weights," one's attitude toward the meaning of musical events alters and (I believe) begins to conform to the basic nature of a medium in which sound is always reproduced, never performed. This is what I mean by the "absolute, not the seeming, length of events"!20
Wuorinen and Boulez were right in calling attention to a temporality which is peculiar to the electronic medium, in terms not only of formal proportions but also of surface rhythms. Today, splicing by cutting tape has become less prevalent. Rhythms now are either played on a synthesizer or computer keyboard or they are programmed by means of a sequencer or computer subroutine. Sequencers and sequencer-like software have understandably given rise to clichés, but imaginative composers working with powerful systems have created rhythmic patterns of great complexity and beauty.
A composer most of whose output depends on technology is Conlon Nancarrow. His works for player piano are full of rhythms of a sort and complexity that could never be rendered accurately by performers. But the music sounds exciting and unproblematic, because of the precision of the player pianos. Nancarrow has, for example, written music in two or more simultaneous tempos, in which the tempos are related by such complex ratios as 2 to the square root of 2. His Study Number 19 is a three-voice canon in which the voices proceed at independent tempos, related by the ratio 12:15:20. 21
Nancarrow's multiple tempos can result in large-scale polyrhythms. Global polyrhythms are found in other twentieth-century music, not composed technologically but surely influenced by technological aesthetics. In the first of Stravinsky's Three Pieces for String Quartet, for example, there are repeating cycles of different durations, producing a polyrhythm of 23:21. Similarly, Elliott Carter's Night Fantasies uses one complete cycle of a 133:77 polyrhythm throughout its twenty-minute length. Such polyrhythms depend on the subdivision of large timespans into a large number of equal parts. Similar procedures can produce surface rhythms of daunting complexity, as in some of the early Stockhausen piano pieces or many of the pieces of Brian Ferneyhough.
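The arithmetic underlying such polyrhythms and tempo canons is elementary number theory: how often streams coincide is governed by the greatest common divisor of their ratios, and how long until they realign by the least common multiple. A short sketch of both calculations:

```python
from math import gcd

def coincidences(a, b):
    """In an a:b polyrhythm (a beats against b across one shared span),
    the two streams attack together gcd(a, b) times per span."""
    return gcd(a, b)

print(coincidences(23, 21))   # 1: the cycles align only at the span boundary
print(coincidences(133, 77))  # 7: Carter's polyrhythm reduces to 19:11

def lcm(a, b):
    return a * b // gcd(a, b)

# Nancarrow's canon: tempos in ratio 12:15:20 mean beat lengths in
# ratio 5:4:3. All three voices attack together every lcm(5, 4, 3)
# time units, i.e. after 12, 15, and 20 beats respectively.
period = lcm(lcm(5, 4), 3)
print(period)  # 60
```

Nearly coprime ratios like 133:77 (reduced, 19:11) thus guarantee that the streams drift against one another for almost the entire duration of the work before realigning.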
People often wonder how accurately such rhythms can be performed. Before we can begin to answer this question, we should know how accurately simple, traditional rhythms are normally produced by human performers. This question, it turns out, has led to some fascinating ongoing research into performer timing mechanisms, research that is possible only with the help of powerful computers.
Several experiments have demonstrated the nature and extent of the rhythmic irregularities that musicians naturally, indeed unavoidably, introduce into performance. These nuances are foreign to electronically generated rhythms. Performers do not render even the simplest of rhythms exactly as notated. For example, we would expect a series of half notes each followed by a quarter note to be played in the ratio 2:1 (measuring durations from the onset of one tone to the onset of the next). But, in fact, the 2:1 ratio is virtually never heard, except when electronically produced. Psychologists Ingmar Bengtsson and Alf Gabrielsson found that, in 38 performances of a Swedish folk song in 3/4 time with most measures containing the half/quarter rhythm, the actual ratio averaged about 7:4. They discovered different types of systematic variations in different performers, but not one musician came close to mechanical regularity. This explains why it is easy to distinguish the rhythms of an electronic realization from those of an electronic performance of the same music.22
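The measurement itself is simple to model. The following sketch computes the mean long-to-short ratio from a handful of inter-onset intervals; the numbers are invented for illustration, not drawn from Bengtsson and Gabrielsson's data, but they show how a notated 2:1 rhythm can come out nearer the 7:4 (that is, 1.75) the studies report.

```python
# Hypothetical inter-onset intervals in seconds (long note, short note)
# for four measures of a notated half/quarter (2:1) rhythm.
measures = [(0.70, 0.40), (0.72, 0.41), (0.68, 0.39), (0.71, 0.42)]

ratios = [long_ioi / short_ioi for long_ioi, short_ioi in measures]
mean_ratio = sum(ratios) / len(ratios)

# A mechanical rendition would give exactly 2.0; measured human playing
# of this figure tends to hover near 7:4 = 1.75 instead.
assert 1.7 < mean_ratio < 1.8
```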
Compare, for example, two electronic versions of Holst's The Planets, one by Isao Tomita,23 the other by Patrick Gleeson.24 Tomita performs on the keyboard of a Moog synthesizer, while Gleeson often uses several sequencer-like memory units of an E-mu synthesizer. The difference is instructive. Tomita's work, despite its electronic medium, has the stamp of human interpretation (I am not claiming that Tomita's version is particularly musical, but it is a performance). Parts of Gleeson's realization, however, are utterly precise, utterly cold. Holst's music demands to be performed, but Gleeson often bypasses the performer. Setting aside the intriguing question of the artistic worth of an electronic realization of a dazzling orchestral score, we can appreciate the difference between rhythms performed by a human and rhythms generated by machinery. This difference is subtle, but the implications are enormous. The simple rhythmic ratios of an electronic realization, though faithful to the score, are something we simply never hear in performances by humans.
Surely, it may be argued, the precision of electronically produced rhythms cannot be totally foreign to our listening experience. A performer can choose to play in a mechanically regular fashion, if a particular kind of music demands it. In fact, the evidence is strong that a performer cannot play utterly regular rhythms! Over fifty years ago Carl Seashore demonstrated as much by asking a pianist to produce a metronomic performance.25 Seashore found that the pianist's rhythmic variations were smaller than when he was asked to play expressively, but that they were nonetheless present. Furthermore, the deviations in the mechanical performance were a scaled-down version of those in the expressive performance. Seashore's conclusions have been verified by Bengtsson and Gabrielsson, working with pianist Lorin Hollander.26
Bengtsson and Gabrielsson (and several others) have continued this fascinating research by constructing synthesized performances into which various systematic deviations from mechanical exactitude were introduced. They reasoned that, if they could come up with a computer program that produced what sounded like a human performance, they would have a reasonable model of how humans perform music rhythmically. They added small systematic time variations not only to note durations but also to timespans on deeper hierarchic levels. Bengtsson and Gabrielsson concluded that "one actually has to 'shape' each single tone in all . . . respects (which is what the performer does!) in order to give the synthesis a 'live impression.'"27 In fact, a good performer instinctively shapes timespans on many levels: not only individual notes but also motives, phrases, phrase groups, sections, etc. Furthermore, the performer shifts emphasis in order to focus the listener's attention on different levels. This research allows us to glimpse the incredible complexity of a performer's timing.
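A toy version of such a synthesis experiment can be sketched in a few lines. This is emphatically not Bengtsson and Gabrielsson's actual algorithm, only an illustration of the principle they describe: deviations are applied at more than one hierarchic level, here a small per-note jitter plus a stretch or compression applied to each phrase as a whole.

```python
import random

def humanize(durations, phrase_len=4, note_jitter=0.01,
             phrase_stretch=0.03, seed=1):
    """Toy multi-level expressive-timing model: each note receives a
    small random deviation, and each phrase of phrase_len notes is
    additionally warped as a unit, mimicking shaping on two levels."""
    rng = random.Random(seed)
    out = []
    warp = 1.0
    for i, d in enumerate(durations):
        if i % phrase_len == 0:  # new phrase: choose a phrase-level tempo warp
            warp = 1 + rng.uniform(-phrase_stretch, phrase_stretch)
        jitter = 1 + rng.uniform(-note_jitter, note_jitter)
        out.append(d * warp * jitter)
    return out

nominal = [0.5] * 8        # eight notated notes at one mechanical tempo
played = humanize(nominal)
assert played != nominal   # no rendition comes out exactly mechanical
```

A real model, as the quoted passage insists, would have to shape every tone in every respect (duration, amplitude, envelope), not merely the onset times sketched here.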
I find this research fascinating. It is leading to an in-depth understanding of a performer's rhythmic nuances and sense of pacing. It is also forcing us to ask where the threshold lies between a human performance of a simple rhythm and a mechanical performance of a complex rhythm. Computer programmers who have devised software that translates a performance into musical notation have had to deal with this question in a concrete way. Those of you who used the earliest MIDI-compatible notation software realize how critical the problem is. Programs have improved at approximating performances with simple notation, but they still must be told how finely to discriminate rhythms.
Bengtsson and Gabrielsson have written about the reciprocal relationship between performance complexity and musical complexity.
Performances which we judge as "good," "typical," "natural," etc., are often extremely complex when we describe them in terms of physical variables such as durations, amplitudes, envelopes, and so on. On the other hand, physically "simple" sound sequences (mechanical duration relations, constant amplitudes, constant envelopes, spectra, etc.) are usually experientially awkward. One is almost tempted to think of an inverse relation between physical and psychological "simplicity": the physically "simple" is psychologically "unnatural/not simple," and the psychologically "natural/simple" is physically complex.28
Rather than trying to incorporate the complexity of performance nuance artificially into electronically generated rhythms, composers should look to the capabilities of machines to produce intrinsically complex rhythms. The electronic medium provides a context in which composers may trade in the physical complexity of performed rhythms for the conceptual complexity of composed rhythms. Flexible computer software or a versatile sequencer can perform rhythms of great complexity with no greater effort than might be expended on the simple surface rhythms of Bach or Holst. The "precise temporal control" that Wuorinen calls for to replace the lost nuance of performance is readily realizable in rhythms too complicated to be performed (although not too complex to be conceived) by a person. Such rhythms can live and sing, although their song is not of human performers. These are rhythms born of and appropriate only to electronic technology. They are rhythms that celebrate the total uniformity of the sequencer and the precision of the computer. They produce a music that is a true expression of the electronic age, a music born of technology, a music that is more than a pale imitation of performance practice.
Thus, I have begun to look to complex-ratio rhythms as one possible source of a rhythmic language that belongs to the computer. I am seeking rhythms that may be too complex to be performed accurately but that are nonetheless readily processed in the listener's mind. I am taking an approach in some ways opposite to those of Nancarrow, Carter, and Stockhausen, cited above. Instead of subdividing large timespans into equal parts, I am intrigued by grouping extremely fast pulses together in various ways. What results is an extreme of additive rhythm.
I have made a number of computer-generated tunes in which the literal pulse, which is never actually sounded, is so fast that it could never be heard as a rhythm. For example, I have generated a tune with the metronome set to beat at 7200 beats per minute. If this pulse were sounded, it would be heard as a pitch, approximately the B on the second line of the bass staff. But the pulse does not sound. Rather, the shortest duration consists of 24 of those pulses, which add up to a fifth of a second. There are four different durations in the tune, related by the ratio 3:2. The musical notation would be extraordinarily cumbersome, but the music does not sound all that complex rhythmically. If one used the same tune with a slightly more complex ratio, 5:3, the notation would again be extraordinarily cumbersome, but the sounding rhythms would still seem comfortable. On the other hand, the same tune with durations in the ratio of 7:5 sounds more irregular, and to my ear more interesting. In order to get the 7:5 ratio at this tempo, the pulse had to beat at a rate of 37,000 per minute. Perhaps this is the first music you have heard with a metronome marking of 37,000!29
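The arithmetic in this example can be checked directly. In the sketch below (the variable names are mine, not drawn from the compositions themselves), the 7200-per-minute pulse works out to 120 per second, in the neighborhood of B2 at about 123.47 Hz; and chaining four durations in the ratio 7:5 with integer pulse counts, while keeping the shortest note at a fifth of a second, forces a pulse rate of 37,500 per minute, consistent with the roughly 37,000 cited.

```python
# Pulse of the 3:2 tune: 7200 beats per minute.
pulse_rate = 7200 / 60          # 120 pulses per second; heard as a
                                # frequency, 120 Hz is close to B2
shortest = 24 / pulse_rate      # 24 pulses span a fifth of a second
assert abs(shortest - 0.2) < 1e-9

# Four durations chained in the ratio 7:5 need pulse counts that stay
# integral through three multiplications by 7/5, i.e. multiples of 5**3.
counts = [125 * 7**k // 5**k for k in range(4)]
assert counts == [125, 175, 245, 343]

# Keeping the shortest note at 0.2 s then fixes the pulse rate:
rate_per_minute = (counts[0] / 0.2) * 60
assert rate_per_minute == 37500.0   # in the vicinity of the cited 37,000
```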
* * *
There is one remaining area of musical technology I would like to mention briefly. It is in a sense the most important. For the next three days the Association for Technology in Music Instruction will hold a series of lectures, demonstrations, and workshops covering the varied uses to which computers are being put in the service of education. In addition to familiar ear-training and pedagogical programs, recently developed software is serving to teach students how to listen. Using Robert Winter's Voyager programs on Beethoven's Ninth Symphony and Stravinsky's Rite of Spring, students are able to hear instruments in isolation and then in orchestral context, to compare expositions and recapitulations directly, to compare variations with their underlying themes, to compare similar phrases, to juxtapose conflicts with their resolutions, to hear imitation demonstrated, to learn the timbres of instruments, etc. This software, recently acclaimed for its sophistication, is already being superseded by other CD-ROM programs.
I can barely do justice to the enormous proliferation of educational and research tools now available on computer. These programs represent just the beginning of interactive music education by computer. Interactive software is currently under development that will teach students critical listening skills, analysis, theory, world music, orchestration, Schenkerian analysis, and even composition. Although some software is naïve, I am nonetheless excited by the possibilities, particularly of those programs aimed at young, musically inexperienced users. Many of the problems musical institutions are facing today (lack of funding, lack of appreciation, lack of audiences, lack of interest) are traceable to the shrinking of music education in schools a generation ago. Today's young adults often had little exposure to music making in their formative years. We should not be surprised if they do not have much interest in music now. Funding for music education of the young is not likely to improve dramatically in the foreseeable future, but perhaps the computer can be a partial substitute. Many schools do have computers. Indeed, many families do. And software that is both educational and enjoyable could go a long way toward educating the next generation in the wonders and joys of the art of music.
There is one way in which the use of technology has already become widespread, possibly to the detriment of education. Several young composers now routinely make sampled performances of their works in progress. While this process is undoubtedly helpful in the short run, it does little to improve a student's powers of aural imaging. Does a student composer truly learn to imagine what he/she is composing when a flawless, indeed too perfect, realization in sound is at hand? I know student composers who have been confused when their new work has come before live performers: the rhythms are different, the balances are different, the timbres are different, and the whole mood is different from what the synthesizer produced.
Since it is now possible to play music into a machine and receive a printed score of it as output, it is possible for a composer never to learn notation at all. I have mixed feelings about this development. Anyone who wishes to be a composer in the traditional sense of the term ought to understand the written language by which a composer's wishes are communicated to performers. But why should a composer working directly with sound, producing not scores but recordings, learn an irrelevant symbolic system? Furthermore, the availability of the appropriate hardware and software has opened the compositional experience to a wide clientele. It may not be possible to learn the art of instrumental or vocal composition thoroughly without mastering notation, but computers make it possible for anyone to experience the thrill of creating music. I am hoping that this thrill will lead many young people to a lifelong involvement with music. If it does, then the computer will have succeeded in replacing to some extent the programs of music education that are disappearing from our schools. It remains to be seen whether a generation raised making music with computers will become subscribers to concert series and purchasers of recordings of traditional music. But I am predicting at least some degree of success.
* * *
Where does all of this leave the traditional musician? As sampled orchestras begin to replace live pit orchestras, musicians have a right to feel threatened. But the art of music will not die. It will continue to be profoundly changed, as technology becomes ever more pervasive and sophisticated. A number of people have been calling for appropriate responses in the programs that train tomorrow's performers, composers, theorists, and historians. Change has been slow, but things are beginning to happen.
At the very least, our conservatories and universities must train their music students to understand and respect technology, not to fear it. A young violinist may still spend countless hours alone in a practice room, improving his/her sound. But how often will that sound be heard without the intervention of recording, broadcasting, or acoustic-reinforcement technology? That violinist need not become a technological expert, but must at least learn what technology is capable of doing and how to communicate with engineers. Any musician who does not know the meaning of words like equalization, digital editing, sampling, reverberation, mixing, etc., is out of touch with his/her art and is, in a real sense, illiterate.
Technology has become an integral part of most aspects of our lives, including the ways we hear, compose, and perform music. It used to be fashionable to speak of our era as one of transition. Today we can be fooled into believing that the transition is ending, as postmodernist aesthetics have produced superficial (and more apparent than real) returns to earlier styles. I believe, on the contrary, that the transition in the arts will end only when people, artists as well as audiences, confront the full impact of the technological revolution. Whether our music is to be tonal or atonal, chaotic or ordered, harsh or gentle: these are not the important questions. What our music (the music we perform, hear, and produce) tells us about our technological culture is a far deeper indication of our society's temperament.
Portions of this article appear in Jonathan D. Kramer, The Time of Music (New York: Schirmer Books, 1988).
1John Zorn, Forbidden Fruit, Elektra/Asylum/Nonesuch Records 9 79172-2 (1987).
2Hal Freedman, Ring Precis, Opus One Records 58 (1982).
3Jean Baronnet and François Dufrêne, U 47, Mercury Records SR-2-9123 (1960).
4Walter Everett, "Phantastic Remembrance in John Lennon's 'Strawberry Fields Forever' and 'Julia,'" 72 (1986): 377.
5George Gershwin, Rhapsody in Blue, Columbia Records M-34105.
6Queen, Another One Bites the Dust, Hollywood Records HR 61265-2 (1992).
7David Sanjek, writing in the Newsletter of the Institute for Studies in American Music 20 (May 1992).
8John Oswald, Plunderphonics, Mystery Lab (1989), reissued on Blast First DISCXO01CD (1994).
9Walter Benjamin, "The Work of Art in the Age of Mechanical Reproduction," trans. Harry Zohn, in Illuminations (New York: Schocken, 1969), p. 24.
10Sanjek, op. cit.
11R. Murray Schafer, The Tuning of the World (New York: Knopf, 1977), p. 114.
12In a private conversation, San Francisco, 1967.
13Jonathan D. Kramer, Musica Pro Musica, Leonarda Productions LE-322 (1990).
14Jay Leyda, Kino: A History of the Russian and Soviet Film (New York: Collier, 1960), pp. 170-74.
15Marie Winn, The Plug-In Drug (New York: Viking, 1977), pp. 3-11.
16O.B. Hardison, Jr., Disappearing through the Skylight: Culture and Technology in the Twentieth Century (New York: Viking, 1989), pp. 178-79.
17N. Katherine Hayles, Chaos Bound: Orderly Disorder in Contemporary Literature and Science (Ithaca: Cornell University Press, 1990), p. 272.
18Terry Riley, Poppy Nogood and the Phantom Band, Columbia Records MS 7315 (1969).
19Pierre Boulez, "At the Ends of Fruitful Land," Die Reihe 1 (1958): 21, 23.
20Charles Wuorinen, notes to the recording of Time's Encomium, Nonesuch H-71225 (1969).
21Conlon Nancarrow, Study Number 19 for Player Piano, Wergo Records 6169-2 (1991).
22The research supporting this surprising conclusion is reported in many sources. See, for example, Alf Gabrielsson, "Perception and Performance of Musical Rhythm," in Manfred Clynes, ed., Music, Mind, and Brain: The Neuropsychology of Music (New York: Plenum, 1982), pp. 163-68. Also Alf Gabrielsson, "Interplay between Analysis and Synthesis in Studies of Music Performance and Music Experience," Music Perception 3 (1985): 59-86.
23Gustav Holst (arr. Isao Tomita), The Planets, RCA Records ARL 1-1919 (1976).
24Patrick Gleeson, Beyond the Sun, Mercury Records SRI-80000 (1976).
25Carl Seashore, Psychology of Music (New York: McGraw-Hill, 1938), pp. 247-48.
26Ingmar Bengtsson and Alf Gabrielsson, "Analysis and Synthesis of Musical Rhythm," in Studies of Musical Performance, ed. Johan Sundberg (Stockholm: Royal Swedish Academy of Music, 1983), p. 37.
27Ibid., pp. 38-42, 46.
28Ibid., p. 58.
29For a more detailed report on this research, see Jonathan D. Kramer, "Durations from Nested Ratios and Summation Series: Toward an Approach to Rhythm Appropriate to Computer Composition," in Proceedings of the ACMA 1995 Conference (Melbourne: Australian Computer Music Association, 1995).