Wintergatan — Technical Builds, Technological Skills, and a 2000-marble machine

by Mark D. Williams

While discussion of the history of computer music is often decidedly America-centric, technology and music have been combined in influential, innovative, and, well, instrumental ways throughout history around the world. One such non-American example I believe deserves a place in the pantheon of computer music is the Swedish group Wintergatan. An instrumental “folktronica” group formed in 2012 in Gothenburg, Sweden, they describe the project as a “mix of Music, Engineering and Innovation”. They utilize and repurpose unconventional instruments and technology to create original sounds, and they often cross genres while keeping their own unmistakable, vocal-free sound. Whether through live instruments, technology, or somewhere in between, Wintergatan’s worldwide impact on the combination of music and technology in this modern era of computer music is undeniable.

While many of their pieces involve live instruments (often unconventional ones), it’s often their use of odd technology, or how they twist the usage of existing technology, that makes for the unexpectedly unique combinations defining their musical profile. They play odd or obscure live instruments (theremins, dulcimers, saws, electric autoharps), repurpose forgotten technology as modern instruments (including typewriters and slide projectors), or invent new electronic instruments themselves (like the “Modulin”, a DIY modular synthesizer played like a violin). In the track “Valentine”, for instance, they combine chiptune-like synths of the ’90s, modern electronic risers of the 2010s, and a melody on their distinctive signature vibraphone. (Additionally, the song’s staccato percussion is actually played on both a drumset and a typewriter, as can be seen in a live Steadicam one-take performance.)

 

However, Wintergatan are perhaps most known for their innovative instrument and complex contraption called the “Marble Machine”. The machine is an astounding feat of engineering and programming, an innovative piece of technology that took 14 to 16 months to make (from 2014 to 2016, built by band member Martin Molin). The Marble Machine contains over 2,000 marbles in a 3,000-part Rube Goldberg-esque network—that, when powered up, is able to play music. The group published an official music video in March 2016, with an original song aptly titled “Marble Machine”:

 

Hand-cranked and semi-automated, this organ-sized birchwood behemoth was humorously described by MusicRadar.com as looking like “a cross between a xylophone, an antique printing press and a spinning machine” (couldn’t have said it better myself!). Despite its antique look, it utilizes both live instruments—a vibraphone, bass guitar, and cymbal—and electronic ones, with digital contact microphones hooked up to replicate the sounds of a kick drum, hi-hat, and snare.
The thousands of marbles are dropped onto the instruments by what Molin calls the “programming wheel”, an automated conveyor belt contraption with an inner system of wooden gears, and a music-box-like outer ring built out of Lego Technic pieces.
It’s partially hand-powered—with a hand-crank, user-controlled bass strings, and a manual brake (punderfully labeled “BRAKEdown”)—and partially automated—with the contact microphones hooked up to a digital sound system, and the complex conveyor belt automating the precise process of dropping thousands upon thousands of marbles.

Even though the music is performed live, it’s still undoubtedly a work of computer music in its own right, through the computing used to create it: the programming and engineering, and the months spent meticulously computing the right gear ratios to make this technological wonder play.
More recently, in 2017 (and functioning as of 2019), they began work on a more-mammoth Martin-Molin-made metallic musical marvel: the Marble Machine X (even more advanced, complex, and steampunk-looking!).
Overall, Wintergatan’s Marble Machine blurs the line between live instrument and technology, person-powered and pre-programmed, contraption and computer.

 

While it was built through computing, programming, and engineering, one could argue that the Marble Machine is a rudimentary computer itself. It seems an absurd notion at first, given that the machine has no hard drive, screen, or monitor, but it is in actuality a sound argument. The definition of a “computer” is simply a programmable machine or device: an automated complex system able to run a set program and store, retrieve, and process data. What is the Marble Machine if not a form of that? (What is a calculator if not an advanced abacus?…)
The dropping of thousands of marbles in the Marble Machine—each part effectively registering one of two binary states (“marble on” or “marble off”)—is, in a way, similar to the detection of millions of “on”s and “off”s, the 1s and 0s, in the physical systems that run every modern-day computer (including the ones used to program the Machine itself!).
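That on/off analogy can be sketched as a tiny step sequencer. This is a hedged illustration of the idea only, not Wintergatan’s actual design; the pattern and instrument names are invented:

```python
# A minimal binary step sequencer, loosely analogous to the Marble
# Machine's programming wheel: each row is an instrument, each column
# a time step, and each cell is "marble on" (1) or "marble off" (0).
PATTERN = {
    "kick":  [1, 0, 0, 0, 1, 0, 0, 0],
    "snare": [0, 0, 1, 0, 0, 0, 1, 0],
    "vibe":  [1, 0, 1, 1, 0, 1, 0, 1],
}

def crank(pattern, revolutions=1):
    """Turn the 'wheel': yield the instruments triggered at each step."""
    steps = len(next(iter(pattern.values())))
    for _ in range(revolutions):
        for step in range(steps):
            yield [name for name, row in pattern.items() if row[step]]

for hits in crank(PATTERN):
    print(hits)
```

One full turn of the “wheel” plays the whole pattern; crank it for more revolutions and the same program repeats, just like the machine’s looping conveyor.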
This train of thought challenges our perceptions of technological labels: similarly, one might call an abacus a form of calculator, a pinball machine a form of gaming console, or a typewriter a form of word processor (or, to Wintergatan, a percussive instrument).

 

Wintergatan uses technology not only in their music, performance, instrumentation, and engineering of new instruments, but also to reach a generation of new musicians. Their Marble Machine music video has over 152 million views on YouTube, and millions more on other sites. They’ve since inspired a whole new generation of crafty “musiciengineers” (side note: genuinely proud of that one; I can’t believe no one’s coined it!) around the globe, including a fan-created miniature Marble Machine in 2020, which creator Love Hultén lovingly dubbed the “Marble Machine XS”.

Before it was eventually returned to them, the Marble Machine was held at the Museum Speelklok in Utrecht, the Netherlands, a musical museum specializing in self-playing instruments, including a 100-year-old “self-playing violin”. After showcasing this specific “orchestrion” from 1907 (the Hupfeld Phonoliszt-Violina) in a video that has since garnered millions of views, Wintergatan introduced this obscure contraption to a wider audience and rejuvenated interest in it. They used modern technology to bring a piece of century-old technology into the public eye in the modern age.

 

Wintergatan’s use of technology in music — be it in combination with live instruments, in computing the creation of new instrumental technology, or in sharing their creations globally and inspiring new technological musicians — is innovative and undeniable.

…It’s truly marbleous.

 

SOURCES:

Rhodes, Margaret.  (March 10, 2016).  Insanely Complex Machine Makes Music With 2,000 Marbles. 
In Wired.  Retrieved from https://www.wired.com/2016/03/insanely-complex-machine-makes-music-2000-marbles/

Rogerson, Ben.  (March 3, 2016).  This marble-powered music-making machine is insane but amazing.
In MusicRadar. Retrieved from http://www.musicradar.com/news/tech/this-marble-powered-music-making-machine-is-insane-but-amazing-635336

Rundle, Michael.  (March 8, 2016).  This incredible music machine is powered by 2,000 marbles. 
In Wired, Wired UK.  Retrieved from Web Archive:  http://web.archive.org/web/20160602011021if_/http://www.wired.co.uk/article/marble-machine-video

Rundle, Michael & Woollaston, Victoria.  (March 16, 2017).  16 months to build, two hours to demolish: watch the Marble Machine being taken apart.  In Wired, Wired UK.  Retrieved from https://www.wired.co.uk/article/marble-machine-video

Coetzee, Gerrit.  (March 3, 2016).  Incredible Marble Music Machine.  In Hackaday.  Retrieved from https://hackaday.com/2016/03/03/incredible-marble-music-machine/

Lillywhite, James.  (March 3, 2016).  Wintergatan Marble Machine: Amazing video shows music box powered by 2,000 marbles.  In IBTimes. Retrieved from https://www.ibtimes.co.uk/wintergatan-marble-machine-amazing-video-shows-music-box-powered-by-2000-marbles-1547374

Merriam-Webster. (n.d.). Computer. In Merriam-Webster.com dictionary. Retrieved September 12, 2020, from https://www.merriam-webster.com/dictionary/computer

Bennett, James II.  (August 20, 2017).  This Self-Playing Violin Is a Musical Marvel.  In WXQR, WQXR Editorial. Retrieved from https://www.wqxr.org/story/self-playing-violin-musical-marvel/

Waters, Michael.  (June 14, 2017).  The Self-Playing Violins That Mastered Chopin.  In Atlas Obscura.  Retrieved from https://www.atlasobscura.com/articles/phonoliszt-violin-self-playing-instruments-player-piano-ludwig-hupfeld

Arblaster, Scott.  (August 5, 2019).  Wintergatan return with the Marble Machine X, quite possibly the most beautiful groovebox ever.  In MusicRadar.  Retrieved from https://www.musicradar.com/news/wintergatan-return-with-the-marble-machine-x-quite-possibly-the-most-beautiful-groovebox-ever

Neira, Juliana.  (August 17, 2020).  Love Hultén’s Marble Machine XS makes music as mini marbles drop.  In Designboom. Retrieved from https://www.designboom.com/design/love-hulten-marble-machine-xs-08-17-2020/

 

Videos by Wintergatan:

Wintergatan – Marble Machine (music instrument using 2000 marbles): https://www.youtube.com/watch?v=IvUU8joBb1Q

How It Works – Part 1 (Wintergatan Marble Machine): https://www.youtube.com/watch?v=uog48viZUbM
How It Works – Part 2 (Wintergatan Marble Machine): https://www.youtube.com/watch?v=p0Guq7vZb_E

“Valentine” Live Steadicam One-take performance: https://youtu.be/FHjdI7l3WDc?t=321

Modulin (How does THE MODULIN work? – DIY Music Instrument): https://www.youtube.com/watch?v=MUdWeBYe3GY

100 Year Old Self-Playing Violin: https://www.youtube.com/watch?v=xs0mP2cOmJs

Boys ‘n’ Bells: IRCAM, Jonathan Harvey, and “Mortuos Plango, Vivos Voco”

“ThE comPuteR cAn dO beTter thAn tHis!”

Thus spake Bell Labs scientist John Pierce and “father of computer music” Max Mathews after a piano concert, feeling rather lofty and ambitious — not to mention disdainful of the centuries-old Western Art Music tradition (“INART 55 IRCAM”). But these quirky scientists were on a mission to make computers extend and exceed human musical capabilities.

Max Mathews, from https://upload.wikimedia.org/wikipedia/commons/thumb/1/1c/Mathews260.jpg/220px-Mathews260.jpg

The pioneering research on sound, music, and computers in the 20th century embodied the spirit of bringing “traditional” human and acoustic music into conversation with the limitless capabilities of computers. Composers and musicians collaborated with technical experts at institutions such as IRCAM in France. One such composer, Jonathan Harvey, composed “Mortuos Plango, Vivos Voco,” a haunting tour de force that melds the sounds of a boy’s voice and the bells at Winchester Cathedral (“Mortuos Plango, Vivos Voco by Jonathan Harvey”). Despite IRCAM’s reputation as “an esoteric research programme,” the piece was hailed as an effort of IRCAM that actually yielded “music capable of appealing to a wider audience” (Downes 22). And to bring things full circle, the composition was coded in MUSIC V, an innovation of the aforementioned Max Mathews (1926–2011), as well as in CHANT, an invention of IRCAM.

IRCAM: Institut de Recherche et Coordination Acoustique/Musique (Institute for Research and Coordination in Acoustics/Music)

IRCAM, from https://www.ircam.fr/static/src/assets/img/ircam_card_facebook.jpg

In 1970, French president Georges Pompidou invited eminent French conductor and composer Pierre Boulez to head IRCAM, a brand new institute for musical research and creation in three subterranean floors of the Centre Pompidou in Paris (“WWW Ircam: History”). Deep down in the steel building, technicians, designers, and composers labor at their computers in acoustic caves in gray corridors (NPR.org).

Since its opening in 1977, IRCAM has hosted composers such as John Cage, Karlheinz Stockhausen, and Terry Riley (NPR.org), as well as resident groups such as the Ensemble Intercontemporain (Ensemble intercontemporain).

Boulez, long interested in electronic music, worked at IRCAM to produce a piece that featured real-time interaction between musicians and computers, resulting in a coherent and unified sound. His piece “Répons” (1981), a breakthrough in real-time digital audio processing, fed the sounds produced by six soloists spaced around the concert hall into a computer that recombined them with the sound of the group of 21 musicians on stage. It used IRCAM physicist Giuseppe Di Giugno’s 4X synthesizer, which “abstracted the idea of oscillators and interconnection to objects and algorithms that could be linked” and served as a universal machine for signal processing.

The machine room (1989), from https://upload.wikimedia.org/wikipedia/commons/thumb/b/b3/IRCAM_machine_room_in_1989.jpg/220px-IRCAM_machine_room_in_1989.jpg

IRCAM researchers created several software programs, including Modalys, for synthesis via physical modeling; Max, for real-time processing of interactions between computer and performer; the Spatialisateur, used for concert hall acoustics; and OpenMusic, a visual programming environment significant in computer-assisted composition (“WWW Ircam: History”); as well as CHANT, which simulated the vocal tract to synthesize the human voice, based on the formant frequencies of vocalists and extremely intensive computations (“INART 55 IRCAM”).
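CHANT’s source-filter idea can be sketched in miniature: excite simple resonant filters centered on formant frequencies with a pitched pulse train. This is a generic illustration of formant synthesis, not IRCAM’s actual algorithm, and the formant frequencies and bandwidths below are rough, invented figures for an “ah”-like vowel:

```python
import math

def resonator(signal, freq, bandwidth, sr):
    """Two-pole resonant filter centered on one formant frequency."""
    r = math.exp(-math.pi * bandwidth / sr)       # pole radius from bandwidth
    a1 = 2 * r * math.cos(2 * math.pi * freq / sr)
    a2 = -r * r
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = x + a1 * y1 + a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def vowel(f0=110, formants=((800, 80), (1150, 90), (2900, 120)),
          sr=16000, seconds=0.5):
    """Impulse train at pitch f0, shaped by resonators at each formant."""
    n = int(sr * seconds)
    period = int(sr / f0)
    source = [1.0 if i % period == 0 else 0.0 for i in range(n)]  # glottal-pulse stand-in
    mixed = [0.0] * n
    for freq, bw in formants:
        for i, y in enumerate(resonator(source, freq, bw, sr)):
            mixed[i] += y
    return mixed
```

Shifting the formant tuples while holding `f0` fixed changes the vowel without changing the pitch, which is the core of the vocal-tract model the paragraph above describes.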

IRCAM is still going strong. IRCAM sound engineer Olivier Warusfel lists two projects pursued in the 2010s: augmented instruments that transform the sound of live human playing in real time, and wave field synthesis (WFS), which uses carefully placed loudspeakers to remedy the problem of too-quiet “dead spots” in concert halls (NPR.org).

Jonathan Harvey: the man

Jonathan Harvey, the man himself. From BBC https://www.bbc.co.uk/staticarchive/b26315ce71a298d90bf316d9e0538045ff485b2c.jpg

Jonathan Harvey (1939-2012) was a frequent IRCAM collaborator. He largely shared Boulez’s belief that musical culture had become a conservative “museum” culture in desperate need of development beyond the instruments of the late nineteenth century. Harvey, an Englishman, contrasted the tepid pursuit of electroacoustic music in the UK with the “overdue liberation” he found in Boulez’s government-sanctioned institution (Downes 21).

Perhaps the appeal of Harvey’s computer music lay in his spiritual and humanistic inclinations. He grew up a chorister at St. Michael’s College, Tenbury, where he “came to love the Anglican liturgy and its musical tradition,” and he later found through his reading and meditation on Hindu and Buddhist sacred texts that “ancient prayers and visions [were] completely consonant with electric sound.” He drew inspiration from Britten, Schoenberg, Messiaen, and Stockhausen, and spent an academic year at Princeton developing a notion of harmony and modality that evoked unique and non-Western atmospheres. His revelation as a composer came from working at IRCAM and delving into spectral music and computer techniques (Griffiths).

The bell ringers’ chamber with 14 bells, tower tour, Winchester Cathedral

The winchester cathedral bells, from https://www.flickr.com/photos/hilofoz/6281233835/lightbox/

Mortuos Plango, Vivos Voco: boys ‘n’ bells

Harvey’s masterwork, “Mortuos Plango, Vivos Voco” (1980), features eight sections based on the eight lowest partials in the inharmonic series of the Winchester Cathedral bells, which are developed and intermingled with the voice of Harvey’s son, a chorister at the cathedral. Chords were built from the thirty-three partials of the bells. Although the fundamental was C, the note F played prominently in the bells’ inharmonic series, creating an atypical and otherworldly sonority. Between sections, eerie glissandi transition from one area of the spectrum to the next (Manning 200).
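The idea of building material by summing a bell’s inharmonic partials can be sketched with simple additive synthesis. The partial ratios, amplitudes, and decay rates below are invented for illustration; they are not Harvey’s measured Winchester bell spectrum:

```python
import math

# Illustrative inharmonic partial ratios and amplitudes for a generic
# bell-like tone (invented values, not the Winchester bell's spectrum).
PARTIALS = [(0.5, 0.9), (1.0, 1.0), (1.19, 0.6),
            (1.56, 0.5), (2.0, 0.4), (2.66, 0.3)]

def bell(f0=130.8, sr=16000, seconds=1.0, partials=PARTIALS):
    """Sum decaying sinusoids at inharmonic multiples of f0 (~C3)."""
    n = int(sr * seconds)
    out = []
    for i in range(n):
        t = i / sr
        # Each partial decays at its own rate; higher partials die faster,
        # which is what makes synthesized bells sound "bell-like".
        s = sum(amp * math.exp(-4.0 * ratio * t)
                * math.sin(2 * math.pi * f0 * ratio * t)
                for ratio, amp in partials)
        out.append(s)
    return out
```

Because the ratios are not whole-number multiples, the result clashes with ordinary harmonic expectations, which is exactly the “atypical and otherworldly sonority” the inharmonic series produces.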

Jonathan Harvey – Mortuos Plango Vivos Voco (1980) from Andrey Smirnov on Vimeo.

Harvey describes the inspiration for the piece:

On this huge black bell is inscribed in beautiful lettering the following text: HORAS AVOLANTES NUMERO, MORTUOS PLANGO, VIVOS AD PRECES VOCO (I count the fleeing hours, I lament the dead, I call the living to prayer). The bell counts time (each section has a differently pitched bell stroke at its beginning): it is itself a ‘dead’ sound for all its richness of sonority: the boy represents the living element. The bell surrounds the audience; they are, as it were, inside it: the boy ‘flies’ around like a free spirit. (“Mortuos Plango, Vivos Voco by Jonathan Harvey”)

Cover artwork of Schiller’s “Song of the Bell,” from https://upload.wikimedia.org/wikipedia/commons/thumb/6/69/Liezen_Prachteinband_Schillers_Glocke_01.jpg/338px-Liezen_Prachteinband_Schillers_Glocke_01.jpg

Aiming to enhance the effect of the deadness of the bells and the sprightliness of the boy, Harvey designed the work for an “ideal cube of eight channels” where the listener is immersed within the sound of the bell as the boy’s voice flies around (Emmerson 157-8). At IRCAM, Harvey’s recordings of the bells and of his son were manipulated and “cross-bred with synthetic manipulations of the same sounds.” This digital manipulation allowed for a shift between the bell spectrum and the boy’s voice, and for a harmonic structure based entirely on the bells’ inharmonic series (“Mortuos Plango, Vivos Voco by Jonathan Harvey”). This approach aligned with Harvey’s aesthetic desire to create an ambiguous musical nether zone: one fitting neither in live-player nor loudspeaker music, but merging the two (Downes 23).

Jonathan Harvey’s analysis of the bell spectra, from https://upload.wikimedia.org/wikipedia/commons/0/0a/Jonathan_Harvey_-_Winchester_Cathedral_bell_spectrum.png

I will close with Harvey’s inspiring reflection on the deeply humanistic potential of computer music. In short, the power of computer music is that it is both limited and enabled by the imaginations of us humans:

In entering the rather intimidating world of the machine I was determined not to produce a dehumanised work if I could help it, and so kept fairly close to the world of the original sounds. The territory that the new computer technology opens up is unprecedentedly vast: one is humbly aware that it will only be conquered by penetration of the human spirit, however beguiling the exhibits of technical wizardry; and that penetration will neither be rapid or easy. (“Mortuos Plango, Vivos Voco by Jonathan Harvey”)

 

Bibliography

Downes, Michael. Jonathan Harvey: Song Offerings and White as Jasmine. Ashgate Publishing, Ltd., 2009.

Griffiths, Paul. “Jonathan Harvey, Modernist Composer, Dies at 73.” The New York Times, December 6, 2012, sec. Arts. https://www.nytimes.com/2012/12/07/arts/music/jonathan-harvey-modernist-composer-is-dead-at-73.html.

Manning, Peter. Electronic and Computer Music. Oxford University Press, 2004.

Emmerson, Simon. Living Electronic Music. Ashgate Publishing, Ltd., 2007.

Ensemble intercontemporain. “A Soloists Ensemble.” Accessed September 11, 2020. https://www.ensembleintercontemporain.com/en/a-soloists-ensemble/.

“INART 55 IRCAM.” Accessed September 8, 2020. http://www.personal.psu.edu/meb26/INART55/IRCAM.html#.

NPR.org. “IRCAM: The Quiet House Of Sound.” Accessed September 8, 2020. https://www.npr.org/templates/story/story.php?storyId=97002999.

Vimeo. “Jonathan Harvey – Mortuos Plango Vivos Voco (1980).” Accessed September 11, 2020. https://vimeo.com/262625848.

“Mortuos Plango, Vivos Voco by Jonathan Harvey.” Accessed September 11, 2020. https://www.bbc.co.uk/radio3/cutandsplice/mortuos.shtml.

“WWW Ircam: History.” Accessed September 8, 2020. http://web4.ircam.fr/62.html?&L=1.

“WWW Ircam: Research.” Accessed September 8, 2020. http://web4.ircam.fr/recherche.html?L=1.

The Extraordinary Evolution of Video Game Music (and the Brilliance of Nobuo Uematsu)

Just like most kids of my generation, I grew up playing video games, whether it be on the PlayStation or Nintendo DS. Strangely, one of my favorite parts of playing was always the music. Whether it be the simple, catchy earworm of the Super Mario Bros. theme, or the epic, sweeping score of Halo, I’ve always been fascinated by all that video game music achieves, both technologically and compositionally. To me, that is why video game music, including groundbreaking composers like Nobuo Uematsu, has played an integral role in computer music history—not just because of all the technological innovation to overcome the limitations of early video game hardware and software, but also the compositional creativity that arose as a result of these limitations.

 

Video game music, while perhaps most closely related to film scores, has inherent limitations in terms of its medium, beyond all technical restrictions. For one, the music needs to actively involve and interact with the player, and move dynamically without clear beginnings or ends. It needs to smoothly transition between themes when a player changes areas, or when the tone of the game shifts, with continuous looping often being employed to this effect.

This dynamic quality first came in the form of straight-up sound effects, beginning with the game Pong in the early 1970s, which relied on distinctive, onomatopoeic blips to punctuate gameplay. Up to this point, most video games were silent, and sound was often a forgotten element of game design. But soon, music became a subtle but powerful tool to manipulate a player’s mood and overall gaming experience. This could be seen in Space Invaders, which had music that gradually sped up as aliens got closer to increase tension and the players’ heart rates, or in Tetris, which took inspiration from a Russian folk song to create a looping theme that soon became inseparable from the game itself.

Despite this, video game music was still significantly limited by technical restrictions. Every single note of the music had to be transcribed directly into the computer code, requiring close coordination between programmers and composers. Limitations in memory meant that composers had to find ways to loop the music without making it too repetitive, often putting the same theme into new keys and registers. Limitations in tonal range led to an avoidance of harmony, or the use of less common intervals such as minor seconds.
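The looping trick described above, restating a theme in new keys to stretch limited memory, can be sketched in a few lines. The motif and transpositions here are invented for illustration:

```python
# Stretch a short theme by restating it transposed, instead of storing
# more material -- the memory-saving trick described above.
# Notes are MIDI numbers; the six-note motif is invented.
THEME = [60, 62, 64, 67, 64, 62]

def varied_loop(theme, transpositions=(0, 5, -2, 0)):
    """One pass of the loop: the same motif restated in several keys."""
    out = []
    for shift in transpositions:
        out.extend(note + shift for note in theme)
    return out
```

One six-note motif yields a 24-note pass; stored data stays tiny while the music avoids exact repetition, which is precisely the constraint-driven craft the paragraph describes.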

 

But the true transformation of video game music from mere sound effects to, well, music came in 1985, with the Nintendo Entertainment System (NES). It included a custom sound chip equipped with five channels, and Japanese composers immediately seized this opportunity to define the sound of video game music forever. Koji Kondo created a simple but instantly iconic melody for Super Mario, while Koichi Sugiyama experimented with more orchestral harmonies for Dragon Quest. And finally, in 1987, Nobuo Uematsu composed the music for Final Fantasy, instantly revolutionizing video game music for decades to come.

Kondo’s music was completely melodic, with no harmony. Sugiyama sacrificed melodic movement for richer, but more stagnant harmonies. Uematsu found a way to put melody and harmony together, channeling powerful emotion and creating themes that instantly stuck in your head. His style was eclectic, drawing on everything from John Williams’ cinematic flair, to the classical undertones of Bach, to the 70s rock stylings of Led Zeppelin and Pink Floyd, to a random smattering of jazz and even Celtic Music. 

And this was all under the limitations of the early five channels: two pulse waves for melody, a triangle wave for simple bass and percussion, white noise for more metallic percussion, and a digital sample channel for other assorted effects. Uematsu’s creativity could be seen in the “Prelude” of Final Fantasy I, which was composed at the last minute to fill in an additional scene. He uses audio tricks to simulate the feeling of background chords, and also uses an ⅛-second delay in one of the channels to create the illusion of texture and multiple voices.

https://www.youtube.com/watch?v=8wZoJQABWI8
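The ⅛-second delay trick can be sketched as mixing a channel with a delayed, quieter copy of itself. This is a generic illustration of the effect, not Uematsu’s actual channel programming; the sample rate and gain are invented for the sketch:

```python
SR = 8000            # sample rate for this sketch (invented)
DELAY = SR // 8      # an eighth of a second, as in the "Prelude" trick

def with_echo_channel(melody, delay=DELAY, gain=0.5):
    """Mix a channel with a delayed, quieter copy of itself, faking a
    second voice out of a single melody line."""
    out = list(melody) + [0.0] * delay   # room for the delayed tail
    for i, x in enumerate(melody):
        out[i + delay] += gain * x
    return out

melody = [1.0] + [0.0] * (2 * DELAY)     # a single "pluck" for the demo
echoed = with_echo_channel(melody)
```

Because the echo arrives a fixed ⅛ second late, the ear hears it as a separate answering voice rather than reverb, creating texture from just one stored melody.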

With the 1991 release of the Super Famicom and its eight sound channels, Uematsu took full advantage. With increased memory and more volume control, he began to demonstrate more versatility in Final Fantasy IV. He began to incorporate distinct motifs for characters and even abstract themes, using live orchestral instruments such as the harp (with a slight Celtic flair). From the elegant, delicate “Theme of Love,” to the thumping, chaotic “Battle with the Four Fiends,” Uematsu had begun to build an entire world out of his music alone.

In 1994, the PlayStation launched with a CD-ROM drive and a whopping 24 sound channels. With this new CD audio, along with other improved technologies of 3D graphics and Full Motion Video (FMV) in the corresponding game, Uematsu released his magnum opus with Final Fantasy VII. This is one of the most famous entries in the series, with “One-Winged Angel” resembling a rock opera with its booming percussion, newfound textures in the strings and horns, and an epic chorus behind it all. It’s epic, dramatic, and outstanding as a technological feat and a work of art, outside of any video game.

https://www.youtube.com/watch?v=t7wJ8pE2qKU

But even with all of this pomp and fancy technology, Uematsu never lost track of what made his music compelling in the first place. Because once all the drums and choirs are stripped away, at the core of his music, despite being completely electronic, lies a sense of human emotion, which is best displayed in “Aerith’s Theme,” a beautiful blend of classical opera, cinematic flair, and Celtic influences. It’s sparse and deceptively simple, evoking feelings of nostalgia and wistfulness, as well as an uneasy tension between light optimism and dark foreboding. 

https://www.youtube.com/watch?v=fIqKWLkm2-g

Despite having world-famous pieces, performed live all over the world for enormous crowds by groups like the LA Philharmonic, Nobuo Uematsu extraordinarily achieves, time and time again, the original purpose of video game music, stemming all the way back to Pong and Space Invaders: to make players, sitting alone in their room in the middle of the night, feel something. 

https://www.youtube.com/watch?v=a3LkKviuGKU

Sources:

Fritsch, Melanie. “History of Video Game Music.” Music and Game, 2012, pp. 11–40., doi:10.1007/978-3-531-18913-0_1.

McDonald, Glenn. “A History of Video Game Music.” GameSpot, 28 Mar. 2005, 4:44 PST, www.gamespot.com/articles/a-history-of-video-game-music/1100-6092391/.

O’Bannon, Ricky. “The Musical DNA of Video Game Music.” Baltimore Symphony Orchestra, 2017, www.bsomusic.org/stories/the-musical-dna-of-video-game-music/.

Seabrook, Andrea. “The Evolution of Video Game Music.” NPR All Things Considered, 13 Apr. 2008, www.npr.org/templates/story/story.php?storyId=89565567. Accessed 11 Sept. 2020.

Williamson, Jason. “The Lasting Impact of Nobuo Uematsu and the Music of Final Fantasy.” The Line of Best Fit, 6 July 2020, www.thelineofbestfit.com/features/articles/nobuo-uematsu-music-final-fantasy.

Pierre Schaeffer and Musique Concrète: The Origins of Sampling in Modern Music

Nothing is more ubiquitous in the music we listen to today than the practice of sampling—the reuse of a portion of a song or other audio clip in a recording. From the image-invoking cash register and gun sounds in M.I.A.’s “Paper Planes,” to Nicki Minaj’s allusion to Sir Mix-a-Lot’s “Baby Got Back” in her “Anaconda,” to the instrument libraries which back nearly every song produced today, sampling defines the sound of the music around us.

The ability to splice together different audio recordings is a direct consequence of the development of computer music in the twentieth century. No other person deserves more credit for this advancement than Pierre Schaeffer, often considered the grandfather of sampling technology (Ankeny).

Pierre Schaeffer in his audio lab (c. 1948) (source)

Born in 1910 in France, Schaeffer wasn’t a trained musician or composer, but a radio engineer (Ankeny). Around 1948, while working at the Studio d’Essai (“Experimental Studio”) of the French broadcaster Radiodiffusion-Télévision Française, he invented a composition technique called musique concrète, or “concrete music,” in which tape recordings of natural, everyday sounds would be altered, cut, and spliced together to form novel music (Britannica Musique, Britannica Pierre). A talented interdisciplinary scholar, Schaeffer created musique concrète as a marriage of musical creativity and cutting-edge engineering in order to challenge traditional conceptions of composition and to advance electronic music technology (Patrick).

On the creative side of things, Schaeffer sought to break down traditional conceptions of music, rather than producing notation for an assumed set of instruments and tonal system. Instead of relying on musicians to actualize his creative ideas, he pioneered a “backwards” approach to music, allowing the sounds around him to inspire and define his compositions (Henshall, Patrick). He was particularly fascinated by a concept he dubbed “reduced listening”, in which he would extract the rhythmic, musical aspects of familiar sounds from their common associations (Patrick). Instead of hearing a train roll into a station, he wanted to highlight the syncopation of the engine and the steam whistle, and the tonal dissonance of the crowd. This idea of “hearing without seeing” and “listening without reference” was first realized in Schaeffer’s piece “Etude aux chemins de fer,” where altered locomotive recordings create a jarring, unfamiliar soundscape (Patrick). In this first draft of musique concrète, Schaeffer struggles with his vision, at times evoking interesting progressions, while at others, seemingly playing back odd assortments of still-familiar sounds.

(alt link: https://youtu.be/N9pOq8u6-bA)

In order to dissociate sounds from their typical associations, Schaeffer pioneered many techniques to profoundly alter, rearrange, and assemble tape recordings. His approach consisted of three main components: acquisition of samples, sound manipulation, and tape aggregation (Britannica Musique). First, a large assemblage of sounds would be recorded from the natural world. With samples ranging from kitchen utensils to fruits, these initial recordings would inspire the eventual tone of the piece (Howell). Then, the sounds would be altered by any number of complex means. Samples would be reversed, sped up, cut or extended, pitch-altered, filtered, or even played and re-recorded in an echo chamber to add reverb (Britannica Musique). In addition to the standard tape manipulation tools of the time, including mixers, a direct disc cutting lathe, and physical cutting and rejoining, Schaeffer invented several novel machines of his own to achieve these distortions (Patrick):

  • the morphophone, a delay-and-loop machine with ten tape heads
  • the phonogène, a device that could play a sample at multiple pitches via a keyboard
  • amplification systems for experimentation with spatial sound
  • a triple-track tape recorder
The morphophone (source)
The phonogene (source)

Once the sounds had been altered, Schaeffer would physically splice segments of tape to form a single track. Tapes would be joined at different angles to create different “crossfades” between sounds, and the resultant long reels would be routed around improvised objects in order to feed them into the reader (Howell). These techniques involved a great amount of technical skill and precision.

Diagram of an oversized tape loop (Howell)
Crossfading, achieved by splicing segments of tapes at different angles (Howell)
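These tape manipulations map naturally onto operations on lists of samples. The following is a digital sketch of the analog techniques described above, with invented helper names; the linear crossfade stands in for the angled splice:

```python
# Schaeffer's tape manipulations, re-imagined as operations on sample
# lists -- a digital sketch of the analog techniques, not a recreation
# of his actual equipment.
def reverse(tape):
    """Play the tape backwards."""
    return tape[::-1]

def double_speed(tape):
    """Tape at double speed: half the samples, pitch up an octave."""
    return tape[::2]

def splice(a, b, overlap):
    """Join two tapes with a linear crossfade over `overlap` samples,
    the digital analogue of cutting tape ends at an angle."""
    head, tail = a[:len(a) - overlap], a[len(a) - overlap:]
    faded = [t * (1 - i / overlap) + b[i] * (i / overlap)
             for i, t in enumerate(tail)]
    return head + faded + b[overlap:]
```

A steeper splice angle corresponds to a larger `overlap`: the longer the diagonal cut, the more gradual the handover between the two sounds.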

In 1949, Schaeffer joined forces with Pierre Henry, a classically trained composer, to create the first great work of musique concrète, “Symphonie pour un homme seul” (Britannica Musique). Recorded entirely from sounds produced by the human body, this ten-movement symphony exemplifies Schaeffer’s “reduced listening” at the peak of his technological advances.

(alt link: https://youtu.be/MOYNFu45khQ)

While Pierre Schaeffer’s music is certainly a far cry from modern pop and rap, the technological implications of musique concrète are lasting. As Jean-Michel Jarre, a former student of Schaeffer’s, said in 2007: “Back in the ‘40s, Schaeffer invented the sample, the locked groove — in other words, the loop […] It was Schaeffer who experimented with distorting sounds, playing them backwards, speeding them up and slowing them down. He was the one who invented the entire way music is made these days” (Patrick).

Works Cited

Ankeny, Jason. “Pierre Schaeffer: Biography & History.” AllMusic, www.allmusic.com/artist/pierre-schaeffer-mn0000679092/biography.

The Editors of Encyclopaedia Britannica. “Musique Concrète.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 27 Apr. 2018, www.britannica.com/art/musique-concrete.

The Editors of Encyclopaedia Britannica. “Pierre Schaeffer.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 15 Aug. 2020, www.britannica.com/biography/Pierre-Schaeffer.

Howell, Steve. “The Lost Art Of Sampling: Part 1.” Sound on Sound, SOS Publications Group, 1 Aug. 2020, www.soundonsound.com/techniques/lost-art-sampling-part-1.

Patrick, Jonathan. “A Guide to the Godfather of Sampling, Pierre Schaeffer.” The Vinyl Factory, The FACT Team, 10 June 2019, thevinylfactory.com/features/introduction-to-pierre-schaeffer/.

 

Splicing Tradition and Innovation: A Case For the Inclusion of Japan in the History of Computer Music

Japan in recent history, beyond being typecast for acclaimed culinary exports and kawaii culture proliferation, has generally found itself widely associated with sprawling conurbations and images of ultra-efficient metropolises, fast becoming the archetype of a country at the forefront of technological innovation. Be it the standardization of the ingenious “washlet” commode (Image Source) or the advent of the Walkman, the country of 127 million has lived up to its billing.

Japanese innovation at its very finest

Indeed, the Japanese Computer Music landscape is no different; Japanese creatives have played a crucial role in the history of computer music, making great strides within the electroacoustic genre and drawing on distinctly Japanese classical influences to create a period of growth analogous to the efforts of those at Bell Labs or IRCAM around the same time.

We start in the 1950s, when European influences began to pervade Japanese music, most notably through the arrival of musique concrète, a compositional technique pioneered by Pierre Schaeffer in Paris in the late 1940s. It employs tape manipulation, splicing together discrete, “concrete” segments of recorded audio to create new sonic textures.

In 1953, 24-year-old Toshiro Mayuzumi (Image Source) arrived with “Les Œuvres pour musique concrète X, Y, Z”, a stirring piece that combines wartime influences with more traditional-sounding instrumentation in a three-part composition. I believe that this paved the way for Japanese electronic music at large by immediately entering the cultural mainstream via public broadcasting and radio coverage. The NHK studio, where he composed his early musique concrète compositions, became a home to avant-garde Japanese collectives like the Jikken Kōbō (Experimental Workshop), who wholly avoided traditional Japanese techniques in their compositions, rubbing shoulders with creatives like John Cage across the Pacific. Though the initial approach was to reject tradition, I find computer music in the region, and by extension the history of computer music in general, incomplete without the integration of Japan’s rich cultural heritage, which started to happen later in the decade.

While Japanese creatives were not directly responsible for developments like the advent of tape recording, multi-tracking and transistor radios as seen in the presentation, such Western developments catalyzed further growth in the Japanese electronic music scene and formed the foundation of classical fusions coming out of NHK studios. Composers and instrument-makers started to shift towards integrating Japanese musical heritage into computer music – cases in point include the building of a koto with an electronic amplifier, and an electronic shamisen. It is worth mentioning that neither of these stringed instruments follow 12-tone equal temperament, making their advent particularly significant given that we tend to consider computer music primarily within a Western context. This introduced a new angle to how we approach tonality in computer music.

Traditional Japanese-influenced computer music reached something of an apex at the Tokyo Olympic Games in 1964, when Mayuzumi’s Campanology, an overture that employed the use of recorded bonsho bell sounds, was played at the opening ceremony. The piece married products of European influences and musique concrète with traditionalism, a big statement for experimental music in East Asia.

Equally significant to Japanese composition, however, is what was going on on the ‘computer’ side of things. Toshiba engineers in the early 1960s sought to experiment with the TOSBAC computer, birthing the TOSBAC Suite (a less-than-subtle homage to the Illiac Suite discussed in the presentation). Meanwhile, growing institutional focus on the experimental use of computers in music composition brought more R&D to light. By the time the ‘80s rolled around, Yamaha and Roland were developing various new and improved synthesizers out of Japan; the significance of inventions like Yamaha’s DX7 began to transcend the bounds of computer music, pouring into popular music and starting to influence music production as a whole.

The Yamaha DX7, 1983, image source

From Jikken Kōbō to Mayuzumi to the digitization of traditional instruments, Japan has brought both technological innovation and cultural relevance into the computer music narrative, very much warranting a comfortable place in electroacoustic folklore. Computer music is experimental at its core; it’s about challenging boundaries and harnessing technology to create things that musical convention tells us we should not be able to create – the Japanese have exemplified this throughout the last 70 years, all with a healthy respect for tradition.

 

Works Consulted:

Takehito Shimazu. (1994). The history of electronic and computer music in Japan: Significant composers and their works. Project MUSE. https://muse.jhu.edu/article/585311/pdf

Tate. (n.d.). Jikken Kobo (Experimental Workshop) – Art term. https://www.tate.org.uk/art/art-terms/j/jikken-kobo-experimental-workshop

Corran. Le Japon et la musique concrète : les stratégies proposées par Mayuzumi Toshirō et Shibata Minao pour s’affranchir de la pensée de Pierre Schaeffer au début des années 1950. (n.d.). Archive ouverte HAL. https://hal.archives-ouvertes.fr/hal-01588787/document

Toshiro Mayuzumi. (n.d.). Encyclopedia Britannica. https://www.britannica.com/biography/Mayuzumi-Toshiro

https://www.youtube.com/watch?v=KxQ7LSw7BWA&feature=youtu.be

Max Mathews, Mortuos Plango, and IRCAM

Max Mathews

Trailblazer in electronic music

The background of the pioneer of sound synthesis

Mathews contributed greatly to the advancement of electronic music with groundbreaking creations such as the MUSIC-N programs (MUSIC I through V) and GROOVE. How in the world was he able to take such large strides in electronic music production?

One would likely assume that a lifelong familiarity with musical instruments lay behind Mathews’ work, but that is not the case. Mathews learned to play the violin in high school, but he called it an “inefficient music instrument” because it required far more practice than other instruments, and he said he would rather learn the computer, or the instruments he created himself.

Max Mathews with Radio Baton
Mathews with the Radio Baton and the computer. Image Credit: Computer History Museum

His education was in electrical engineering: he earned his bachelor’s degree at Caltech and received his PhD from MIT, both in that field. Following his studies, he went on to serve as director of the Acoustical and Behavioral Research Center at Bell Laboratories beginning in 1965. In 1987, he became a professor at the Center for Computer Research in Music and Acoustics at Stanford University.

Max Mathews’s research contributed to the development of several programming languages and programs with musical applications beyond MUSIC-N, such as GROOVE (a system that stored and edited musical inputs from an external instrument), Graphic 1 (used for graphical sound synthesis via a light pen), and more.

Mortuos Plango, Vivos Voco

Translated as “I mourn the dead, I call the living”, this work by Jonathan Harvey used Max Mathews’ MUSIC V program to manipulate recorded sound, which allowed Harvey to bridge the two main sound sources in the piece: his son’s voice and the tolling of bells. This manipulation produces a surreal and haunting effect, especially as the bells and the vocals transform back and forth into each other. The processing possibilities were impressive: after the source sounds were recorded, the system had to be able to “move seamlessly from a vowel sung by the boy to the complex bell spectrum consisting of 33 partials”, as a BBC feature on electronic music put it.

An octophonic piece

Example of octophonic arrangement
Example of octophonic speaker arrangement

Harvey’s work achieves its eerie atmosphere through effective use of octophonic sound, in which audio is reproduced over eight channels, one per speaker. You may be familiar with “8D audio” edits on YouTube; those rely on the same idea of giving the listener a sense of the direction a sound is coming from. Harvey uses this spatialization heavily, and it is largely responsible for the work’s eerie atmosphere, with the bells tolling across the listener’s ears and his son’s voice rapidly moving around the perceived soundstage.
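A common way to place a single sound among eight speakers is amplitude panning between the two nearest channels. The sketch below is a generic illustration of that technique, not Harvey’s actual IRCAM configuration: it computes a constant-power gain for each of the eight speakers given the source’s angle around the listener.

```python
import math

def octophonic_gains(angle_deg, n_speakers=8):
    """Per-speaker amplitude gains for one sound source at angle_deg,
    panned between the two nearest speakers on a ring (constant power)."""
    spacing = 360.0 / n_speakers
    pos = (angle_deg % 360.0) / spacing      # position in "speaker units"
    lo = int(pos) % n_speakers               # one nearest speaker
    hi = (lo + 1) % n_speakers               # the other nearest speaker
    frac = pos - int(pos)                    # how far toward `hi`
    gains = [0.0] * n_speakers
    gains[lo] = math.cos(frac * math.pi / 2) # constant-power crossfade:
    gains[hi] = math.sin(frac * math.pi / 2) # gains always sum to 1 in power
    return gains

# Sweep the source around the listener, as the bell seems to in the piece:
for step in range(0, 360, 45):
    print(step, [round(g, 2) for g in octophonic_gains(step)])
```

Moving `angle_deg` smoothly over time is what produces the sensation of a sound circling the room.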

 

 

Institut de Recherche et Coordination Acoustique/Musique

IRCAM played a major role in advancing electronic music; in fact, it was the institution that commissioned Jonathan Harvey to create Mortuos Plango. IRCAM encouraged collaboration among engineers and composers alike.

A Musical Powerhouse

“Composers seeking new sound structures with new instruments know that they can find the most powerful equipment, and freshest ideas, at the institute.”

IRCAM Musical Workstation

Prior to the invention of the IRCAM Musical Workstation (IMW), the synthesis of music was often split between two machines: one designated the orchestra, responsible for generating the sounds themselves, and the other the score, responsible for controlling the sound generators. The IMW aimed to bridge this dichotomy and enable real-time production of music.

4X System
4X System using several subsystems
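The orchestra/score split described above can be illustrated with a toy sketch. This is an assumed structure in the spirit of that dichotomy, not the IMW’s or MUSIC-N’s actual software: the “score” is pure data, and the “orchestra” turns score events into samples.

```python
import math

SR = 8000  # sample rate in Hz (chosen arbitrarily for the sketch)

def orchestra(freq, dur):
    """One 'instrument': a sine oscillator rendering a single note."""
    n = int(SR * dur)
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

# The score only says *what* to play (frequency in Hz, duration in seconds);
# the orchestra decides *how* it sounds.
score = [(440.0, 0.25), (550.0, 0.25), (660.0, 0.5)]

samples = []
for freq, dur in score:
    samples.extend(orchestra(freq, dur))
```

In a two-machine setup these two halves ran on separate hardware; the IMW’s contribution was letting both run together, fast enough for real time.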

Audio Signal Analysis

In addition to its hardware advances, IRCAM also developed methods for audio signal analysis, whose purpose is to extract musical parameters, such as pitch, from an instrument’s signal. In a 1991 paper, Cort Lippe, Zack Settel, Miller Puckette, and Eric Lindemann describe tracking a wide variety of performance elements, including portamento, glissando, trill, tremolo, flutter-tongue, staccato, legato, sforzando, and crescendo.
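As a rough illustration of what pitch tracking involves, here is a textbook autocorrelation estimator (a generic technique, not IRCAM’s actual algorithm): the lag at which a signal best matches a shifted copy of itself is one period of its fundamental.

```python
import math

def track_pitch(samples, sr, f_lo=50.0, f_hi=1000.0):
    """Estimate the fundamental frequency by autocorrelation:
    search lags between sr/f_hi and sr/f_lo for the strongest
    self-similarity, then convert the winning lag to Hz."""
    best_lag, best_score = 0, 0.0
    for lag in range(int(sr / f_hi), int(sr / f_lo)):
        score = sum(samples[i] * samples[i + lag]
                    for i in range(len(samples) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sr / best_lag if best_lag else 0.0

# A 200 Hz test tone: the estimator should recover its frequency.
sr = 8000
tone = [math.sin(2 * math.pi * 200.0 * i / sr) for i in range(1600)]
print(round(track_pitch(tone, sr)))  # prints 200
```

Real instrument signals are far messier than a sine wave, which is why robust tracking of trills, tremolo, and the rest was a genuine research problem.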

IRCAM is widely known as a place where engineers and musicians exchange and produce ideas, and it has undeniably been critical to the advancement of electronic music into the 21st century.

 

Sources

Roads, C., and Max Mathews. “Interview with Max Mathews.” Computer Music Journal, vol. 4, no. 4, 1980, pp. 15–22. JSTOR. Accessed 9 Sept. 2020.

Schreiber, Barbara. “Max Vernon Mathews.” Encyclopædia Britannica, https://www.britannica.com/biography/Max-Mathews

Harvey, Jonathan. “‘Mortuos Plango, Vivos Voco’: A Realization at IRCAM.” Computer Music Journal, vol. 5, no. 4, 1981, pp. 22–24. JSTOR, www.jstor.org/stable/3679502. Accessed 10 Sept. 2020.

BBC Radio, Cut And Splice 2005. http://www.bbc.co.uk/radio3/cutandsplice/mortuos.shtml

Machover, Tod. (1984) A view of music at IRCAM, Contemporary Music Review, 1:1, 1-10, DOI: 10.1080/07494468400640021

Lippe, Cort, Zack Settel, Miller Puckette, and Eric Lindemann. “The IRCAM Musical Workstation: A Prototyping and Production Tool for Real-Time Computer Music.”

CSIRAC: The First Computer to Play Music

When talking about computer music, where better to begin than with the very first computer to play it? The CSIRAC (Commonwealth Scientific and Industrial Research Automatic Computer) was Australia’s first digital computer and the world’s fifth stored-program computer. It was constructed at the CSIR (Council for Scientific and Industrial Research) by a team under the supervision of scientists Trevor Pearcey and Maston Beard. It ran its first program in 1949, and less than two years later it played music for the first time.

Picture of Trevor Pearcey next to the CSIRAC (1952)

Initially known as the CSIR Mark 1, the CSIRAC was, in its time, at the forefront of technology: fast and efficient for its day. Today it pales in comparison to modern machines. It filled an entire room, ran at what is now a glacial 1,000 cycles per second, and held only about 3 KB of memory. To input information, a form of punched paper tape was used; the holes could later be converted back into text on another machine. The scientists added an output speaker that sounded once the computer had finished running a program, letting them know the task was complete. These output sounds are indeed what was later used to create music, but making music with them would prove a difficult challenge.

 

Close up of valves, part of CSIRAC Computer

The CSIRAC used mercury acoustic delay lines for memory: a pulse (originally entered into the machine via the punched paper tape) was sent into a memory tube, where it travelled back and forth, allowing a number of bits and digital words to be stored in a single tube (about 20 memory tubes could function at any time). Musically, this posed a problem. Not only did the computer process information slowly, but each mercury tube could be accessed only at particular moments in its cycle, making it extremely difficult to perform any time-critical task such as playing music.

The first to put the CSIRAC to the musical test was Geoff Hill, a mathematician with a strong musical background: his mother was a music teacher, his sister a performer, and he himself had the rare gift of perfect pitch. This would prove useful when attempting to create music with the CSIRAC, a machine that produced sound by sending raw pulses through the computer to the output speaker. The pulses had to be carefully programmed to avoid random jabbers of sound; Hill realized that to create music, he would need to send the pulses in a regular structure to produce a steady pitch. Naturally, this was easier said than done, considering that each memory access took a different amount of time.
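Hill’s basic insight, that evenly spaced pulses fuse into a steady pitch, can be sketched in a few lines. This is a modern illustration with assumed numbers, not CSIRAC code: we emit a click every `period` samples, and it is the click rate, not the click itself, that sets the note.

```python
SR = 44100  # output sample rate in Hz (a modern value, chosen for the sketch)

def pulse_train(freq, dur):
    """Samples of a click train that is heard as a tone at roughly `freq` Hz.
    Irregularly spaced clicks would instead be heard as noise -- the
    'random jabbers' the pulses had to be programmed to avoid."""
    period = round(SR / freq)   # samples between successive pulses
    n = int(SR * dur)
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

a4 = pulse_train(440.0, 0.5)    # half a second of (approximately) A4
```

On CSIRAC the hard part was timing: each memory access took a different amount of time, so keeping the pulse spacing regular required careful programming rather than a simple modulo.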

After a long period of trial and error, Hill was finally able to use the CSIRAC to play tunes from existing songs, such as Colonel Bogey and The Girl with the Flaxen Hair, making it the first computer to play music.

CSIRAC’s rendition of the Colonel Bogey tune

Unfortunately, none of the tunes played were recorded at the time. CSIRAC was soon redirected to other areas of science and was no longer used to play music. However, in 2018 the University of Melbourne used hardware similar to the CSIRAC’s (the same mercury-tube system) to run the old programs. Scientists read all of the punched paper tape and input all of the data from Hill’s work, which did in fact reconstruct the pulses accurately and play the reproduced tunes.

The work of the CSIR and Geoff Hill helped discover a whole new way of making organized sounds and tunes. Although Hill did not create an extensive library of music or compose new pieces with the CSIRAC, he created something that had not been done before, paving the way for composers such as Lejaren Hiller (Illiac Suite, 1957) and Max Mathews, who began treating the computer as a true musical instrument.

Video with additional information about CSIRAC and some footage of the machine and the scientists involved in the work at CSIR

Information Sources:

Doornbusch, Paul. “Pearcey Interview about CSIRAC Music.” YouTube, uploaded by pauldoornbusch, 28 September 2009. Accessed 9 September 2020. https://www.youtube.com/watch?time_continue=261&v=slr75sLhOCs&feature=emb_logo

“CSIRAC: Computing in Australia begins with CSIRO.” CSIROscope, Accessed September 9 2020. https://blog.csiro.au/csirac-computing-australia-begins-csiro/.

“CSIRAC.” Wikipedia, Wikimedia Foundation, Accessed 9 Sept. 2020. https://en.wikipedia.org/wiki/CSIRAC.

“History of Internal Memory Technologies.” Memory and Database, November 18 2006, Accessed September 9 2020. http://memoryanddatabase.blogspot.com/2006/11/history-of-internal-memory.html

ABC Science. “CSIRAC – The First Computer to Play Music.” YouTube, uploaded by ABC Science, 6 May 2015. Accessed 9 September 2020. https://www.youtube.com/watch?time_continue=261&v=slr75sLhOCs&feature=emb_logo.

“CSIRAC, Australia’s first computer” The Science of Everything: COSMOS, August 3 2018, Accessed September 9 2020. https://cosmosmagazine.com/technology/csirac-australia-s-first-computer/.

“CSIRAC – Colonel Bogey.” YouTube, uploaded by TortoiseWrath, 1 February 2016. Accessed 9 September 2020. https://www.youtube.com/watch?time_continue=261&v=slr75sLhOCs&feature=emb_logo.

Alan Turing: The ‘father’ of a pre-modern digital music renaissance

Alan Turing is undoubtedly a voice that continues to echo innovation and inspiration in the worlds of invention and mathematics. Benedict Cumberbatch’s portrayal of Turing in “The Imitation Game” allowed the layman to understand the workings behind one of Turing’s renowned inventions, the Bombe, through an accessible narrative. While Turing is hailed as a mathematician, logician, computer scientist, cryptanalyst, philosopher, and theoretical biologist, his contributions to the world of computer music have been largely overlooked.

Alan Turing, 1930s. Found in the collection of Science Museum London. Artist Anonymous. (Photo by Fine Art Images/Heritage Images/Getty Images)

Turing’s contributions to the world of computer music took rise in Manchester in 1948. By then, Turing had already conceived of a ‘type of machine which had a central mechanism and an infinite memory which was contained on an infinite tape’. Working side by side with Tom Kilburn in the Manchester Computing Machine Laboratory, Turing helped build out the programming of the Manchester Mark 1, a large-scale computer. At this point, Turing had yet to stumble upon the concepts of computer music.

Manchester Mark 1

The Manchester Mark 1 had a loudspeaker which Turing called the ‘hooter’. The loudspeaker was initially installed to alert the computer’s operator in case something went wrong. Soon Turing realized that, if correctly programmed, the speaker could produce musical notes.

When the ‘hoot’ sound was produced by the computer, it lasted for merely a second. However, as Turing tried executing the ‘hoot’ sound repeatedly, he noticed that a sequence going ‘tick, tick, tick, click’ had been composed. Experimenting further with the ‘hoot’ sound, Turing realized that if the sound was produced in different patterns, different musical notes could be produced. Simply put, Turing had come across the 21st-century equivalent of a music pad: play around with the notes for a while, and he could easily compose a melody.
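The relationship between the hoot pattern and the resulting note can be sketched numerically. The instruction time below is a made-up figure for illustration, not the Mark 1’s real timing: the point is that each pass through a tight loop takes a fixed number of instruction times and emits one click, so the click rate, and hence the pitch, is set by the loop length.

```python
T_INSTR = 0.00024  # assumed instruction time in seconds (hypothetical figure)

def loop_pitch(loop_len):
    """Frequency in Hz produced by a hoot loop of `loop_len` instructions:
    one click per pass, each pass taking loop_len * T_INSTR seconds."""
    return 1.0 / (loop_len * T_INSTR)

# Only a discrete ladder of notes is reachable; the machine cannot hit
# arbitrary frequencies, one reason tuning melodies took real effort.
for n in range(4, 12):
    print(n, "instructions ->", round(loop_pitch(n), 1), "Hz")
```

Changing the pattern of hoot instructions within the loop is what let Turing coax different notes, and later loudness and timbre variations, out of the speaker.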

Inspired by Turing’s work, Christopher Strachey, a colleague of Turing, showed up at the Computing Machine Laboratory in 1951. Strachey wanted to use the musical notes Turing had discovered in a complete composition. Turing tutored Strachey in programming the machine for musical notes, giving him a ‘typical high-speed, high-pitched description of how to use the machine’. Through continuous debugging and programming, Strachey was able to make the machine play the British national anthem. It was these events that put the wheels of Strachey’s career in motion, allowing him to become Oxford’s first professor of computing. Had Turing not played around with the ‘hoots’, Strachey would not have been able to make computer-generated melodies, one of the earliest advances in the field of computer music.

Turing also managed to produce ‘note-loops’, which he compiled into his note-playing subroutines and used to play around with loudness and timbre. To make sure that melodies did not end abruptly, Turing invented what he called a ‘hoot-stop’: lines of code placed at the end of each musical routine so that, once the routine finished, only the ‘note-loop’ installed in the last few lines continued to play. As a result, Turing managed to overcome a shortcoming of Strachey’s melodies, which had abrupt pauses in between.

While the aforementioned examples depict Turing’s hands-on experience with musical composition, they do not include the application of the Turing Test to musical composition. Turing, who worked in the field of artificial intelligence, had devised the test to determine whether a computer could behave indistinguishably from a human. In 1964, Havass used the Turing Test at a conference to check whether listeners could differentiate between computer-produced melodies and traditional ones. The goal behind the test was to collect feedback and improve computer-based production methods to suit human tastes. So while Turing had created his test mainly to delve deeper into the world of AI, it ended up becoming an important distinguisher when it came to musical melodies.

The Turing Test

It is commonly believed that the first computer-made musical notes were composed at Bell Labs in 1957. However, by the evidence provided above, it can clearly be seen that Turing had begun such composition nine years earlier. While Max Mathews invented a program to compose his notes, Turing built on the whole machine. All these facts beg one question: isn’t Turing the ‘father’ of a pre-modern digital music renaissance? The way I see it, the name does have a “ring” to it (pun intended).

Sources: 

Copeland, B Jack, and Jason Long. “Turing and The History of Computer Music.” Essay. In Philosophical Explorations of the Legacy of Alan Turing 324, 324:189–218. Boston Studies in the Philosophy and History of Science. Boston, MA: Springer, 2017.

Doornbusch, Paul. “Early Computer Music Experiments in Australia and England.” Essay. In Alternative Histories of Electroacoustic Music, 22:297–307. Special Issue 2. Cambridge University Press, 2017. https://www.cambridge.org/core/journals/organised-sound/article/early-computer-music-experiments-in-australia-and-england/A62F586CE2A1096E4DDFE5FC6555D669.

Ariza, Christopher. “The Interrogator as Critic: The Turing Test and the Evaluation of Generative Music Systems.” Essay. In Computer Music Journal, 48–70. MIT, 2009. https://www.mitpressjournals.org/doi/pdf/10.1162/comj.2009.33.2.48.

Waugh, Rob. “Listen to the First-Ever Electronic Music Track – Made by Pioneer Alan Turing.” Essay, n.d. https://metro.co.uk/2016/09/26/listen-to-the-first-ever-electronic-music-track-made-by-pioneer-alan-turing-6153871/

Dubois, Luke. “The First Computer Musician.” Essay, n.d. https://opinionator.blogs.nytimes.com/2011/06/08/the-first-computer-musician/

 

 

Bell Labs, Miller Puckette, and the Illiac Suite

Where: Bell Labs

Nokia’s Bell Labs is a research institute dedicated to using “telecommunications and information technology” to help “people connect, collaborate, compute, and communicate.” Its lineage traces back to Alexander Graham Bell’s laboratory, funded with the Volta Prize money Bell received from the French government, and it was formally established as Bell Telephone Laboratories in 1925. Numerous revolutionary products have come out of its labs:

  • In the early 1930s, Karl Jansky accidentally discovered radio waves coming from the Milky Way at Bell Laboratories while investigating interference in transatlantic voice transmissions.
  • In 1947, John Bardeen and Walter Brattain invented the transistor after observing that output power exceeded input power when gold contacts were applied to germanium.
  • In 1957–58, Charles Townes and Arthur Schawlow built upon earlier microwave-amplifier (maser) work to set out the principles of the laser and file a patent. The patent was disputed for nearly three decades, with the courtroom victory ultimately going to the Columbia University side; even so, the invention of the laser is still widely credited to Bell Labs.
  • In 1957, Max Mathews wrote MUSIC, the first of the MUSIC-N family and the first widely used program for sound generation on a computer. (The later Graphic 1 system added a graphical interface on which users could draw with a light pen to create sounds for computer music.)
  • In 1975, Bell Labs sold its first Unix source license to Donald Gillies, a professor in the computer science department at the University of Illinois at Urbana-Champaign.

 

With the COVID-19 pandemic keeping the world alert to future pandemics, one of Bell Labs’ current projects uses lasers to detect changes in a person’s biophysical condition before traditional symptoms appear.

Who: Miller Puckette

Miller Puckette is a professor of music known for creating the programming environment Max, which aimed to provide a more user-friendly environment for artists producing electronic music. After graduating from MIT in 1980 and earning a PhD in mathematics from Harvard in 1986, Puckette relocated to the Institute for Research and Coordination in Acoustics/Music (IRCAM) in France, where he created Max for the Macintosh. In its initial form Max had to rely on external hardware for sound; it wasn’t until 1989, when Puckette developed Max/FTS (“Faster Than Sound”), that real-time audio processing within the environment finally became possible. With IRCAM licensing this updated software to Opcode Systems shortly after, Puckette moved on to create another digital synthesis program, Pure Data, by 1996: a program that still runs on modern operating systems including Linux, macOS, Android, and Windows.

Puckette then joined the Global Visual Music project, sponsored by Intel, to continue developing Pure Data into a friendlier real-time audio environment through the use of graphics. Due to the computational limits of the time, however, the goal of rendering three-dimensional graphics in real time was considered nearly impossible. Puckette has been a professor of music at UCSD since 1994.

 

What: Illiac Suite for String Quartet by Lejaren Hiller

Commonly regarded as the first piece of music composed by a computer, Hiller’s Illiac Suite for String Quartet was the result of an experiment conducted with Leonard Isaacson. Originally a chemist who frequently used computers in his work, Hiller noticed through his analyses of music that there exist “laws of organization” governing music, such as chord progressions, scales, and triads, and that the “organizational choices” composers make could be made by a computer as well.

The creation of the Illiac Suite involved three steps: initialization, generation, and verification. During initialization, Hiller defined the conditions under which the computer had creative freedom to create its own music, such as the “prohibition of consecutive fifth and octave parallels.” During generation, the computer was given freedom over a number of musical parameters, such as dynamics (crescendo and decrescendo), playing technique (arco and pizzicato), and rhythm, with notes generated according to the Monte Carlo method at a rate of about 1,000 per second. In the verification step, each generated note was compared against the set of rules before the next note was considered; invalid notes that broke the guidelines were discarded and generated anew. Notably, the first two steps proceeded purely mathematically: the rules of composition were encoded as numerical patterns, and generating notes really meant generating random numbers in accordance with the Monte Carlo method. Each verified note was stored in memory, and at the end all notes were translated into musical notation to be performed on real-life instruments.
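The generate-and-verify loop can be sketched as follows. The rules here are invented stand-ins for illustration, far simpler than Hiller and Isaacson’s actual counterpoint rules, but the structure is the same: draw a random candidate, test it against the rules, keep it only if it passes.

```python
import random

SCALE = list(range(15))  # toy pitch set: scale degrees spanning two octaves

def valid(melody, note):
    """Toy rule set (invented for illustration): stay within a sixth of
    the previous note, and never repeat a note three times in a row."""
    if not melody:
        return True
    if abs(note - melody[-1]) > 5:
        return False
    if len(melody) >= 2 and melody[-1] == melody[-2] == note:
        return False
    return True

def compose(length, seed=0):
    """Generate-and-test composition in the Monte Carlo spirit."""
    random.seed(seed)                  # fixed seed for reproducibility
    melody = []
    while len(melody) < length:
        note = random.choice(SCALE)    # generation: a random candidate
        if valid(melody, note):        # verification: test against the rules
            melody.append(note)        # accepted notes are stored in "memory"
        # rejected notes are simply discarded and regenerated
    return melody

print(compose(16))
```

Every melody this produces obeys the stated rules by construction, which is exactly the property Hiller exploited: the computer never needs taste, only random numbers and a filter.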

 

Sources:

Di Nunzio, Alex. “Illiac Suite.” Musica Informatica, 2011, www.musicainformatica.org/topics/illiac-suite.php.

H. Viswanathan and P. E. Mogensen, “Communications in the 6G Era,” in IEEE Access, vol. 8, pp. 57063-57074, 2020, doi: 10.1109/ACCESS.2020.2981745.

Hiller, Lejaren. “Experimental Music; Composition with an Electronic Computer : Hiller, Lejaren, 1924-1994 : Free Download, Borrow, and Streaming.” Internet Archive, New York, McGraw-Hill, 1 Jan. 1970, archive.org/details/experimentalmusi00hill.

Ionescu, Maria. “Nokia Bell Labs: AI Is Being Fitted for a Diving Suit: How Reinforcement Learning Will Power the Transoceanic Cables of the Future.” Bell Labs, 15 May 2020, www.bell-labs.com/var/articles/ai-being-fitted-diving-suit-how-reinforcement-learning-will-power-transoceanic-cables-future/. 

Mandelbaum, Ryan. “Miller Puckette: The Man Behind the Max and Pd Languages (and a Lot of Crazy Music).” IEEE Spectrum: Technology, Engineering, and Science News, 21 Feb. 2017, 20:00, spectrum.ieee.org/geek-life/profiles/miller-puckette-the-man-behind-the-max-and-pd-languages-and-a-lot-of-crazy-music. 

Puckette, Miller. Miller Puckette, msp.ucsd.edu/.

 

The Creation of the Illiac Suite

Named after the computer that created it, the Illiac Suite was a revolutionary piece in the history of music as the first piece of music to be generated by a computer. 

Lejaren Hiller – A creator of the Illiac Suite

Lejaren Hiller was born in New York City on February 23, 1924, and had an early interest in music, learning to play the piano, oboe, clarinet, and saxophone as a child; by high school he was already writing orchestral music. Throughout his academic career, Hiller maintained his love of music and composition while pursuing the sciences simultaneously. He attended Princeton University to study chemistry starting in 1941 and graduated with a PhD in 1947. During his time at Princeton, he also studied musical composition with Roger Sessions and Milton Babbitt, both accomplished composers across a wide variety of musical genres, from orchestral music to more experimental work.

Hiller, in 1959

Hiller’s professional career was filled with work in both music and chemistry. After obtaining his PhD, he landed a job at DuPont (the chemical giant behind products such as nylon and Kevlar), where he worked for five years as a research scientist. While there, he continued composing, even finding the time to run a small concert series. In 1952, Hiller began teaching chemistry at the University of Illinois, where he also worked towards an M.M. (Master of Music) in composition. This series of events set Hiller up to use the ILLIAC I, together with Leonard Isaacson, to compose the Illiac Suite in 1956, a pivotal point in his musical career.

Later in life, Hiller moved away from chemistry to further his work in computer music, creating more experimental pieces such as Computer Cantata (1963) and Algorithms I-III (1968-72). He helped found the Electronic Music Studio at the University of Illinois and later taught composition at the State University of New York.

Hiller led a remarkably varied life, finding connections between the methodical rigor of chemistry and a systematic approach to creating music. He developed encephalitis in 1987 and passed away in 1994. Hiller mentored many students throughout his career, and pioneering work like his MUSICOMP composition language inspired generations after him to pursue computer-assisted composition.

 

The University of Illinois at Urbana-Champaign: Birthplace of the Illiac Suite

The University of Illinois at Urbana-Champaign, founded in 1867, is the founding campus of the University of Illinois system. In 1952, the university built the ILLIAC I (Illinois Automatic Computer), the first computer built and owned entirely by a US educational institution. It was here that Lejaren Hiller and Leonard Isaacson led the programming of the ILLIAC I to create the Illiac Suite, also known as String Quartet No. 4.

A little more about the creation of the ILLIAC I: its architecture was taken from a report published by the Institute for Advanced Study at Princeton. The University of Illinois actually built two computers of the same design at the same time; the ILLIAC I’s twin, the ORDVAC, was delivered to the US Army in exchange for funding for the ILLIAC. The ILLIAC I was extremely bulky, containing 2,800 vacuum tubes and weighing around five tons. Besides generating computer music, it was used to calculate orbital trajectories, and it was retired when the ILLIAC II came online in 1963. The university would go on to build five more ILLIAC computers, the last being the Trusted ILLIAC, completed in 2006.

ILLIAC I

ILLIAC I memory drum
The ILLIAC I
The ILLIAC II

Today, the university is a premier research institution: its library system is the second largest in the US, and it hosts the fastest supercomputer on any university campus. The university counts 30 Nobel laureates, 27 Pulitzer Prize winners, 2 Turing Award winners, and 1 Fields Medalist among its faculty and alumni. From the National Center for Supercomputing Applications to countless discoveries in the computer and applied sciences, the University of Illinois at Urbana-Champaign is one of the top locations for technological innovation.

The Musical Piece

In 1957, the Illiac Suite was generated by Lejaren Hiller and Leonard Isaacson for string quartet. When deciding which instruments would play the composed music, technical limits ruled out electronic instruments and the piano. A string quartet was chosen instead for its ability to carry four distinct voices of similar timbre.

First Movement

The first movement of the Illiac Suite was designed to be polyphonic, using strict counterpoint to connect the melodies. The rules set for this movement included melodic continuity, harmonic unison, and various constraints on the transitions between voices. Listening to the piece, the tempo shifts from fast to slow, then speeds up again near the end, with all four voices present at the close.
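Hiller and Isaacson’s rule-based approach can be sketched as a generate-and-test loop: draw random notes and keep only those that pass the rules. The scale encoding and the specific rules below are invented for illustration; the actual Illiac experiments used their own note encodings and a much richer counterpoint rule set.

```python
import random

random.seed(42)  # reproducible output for this sketch

# Diatonic scale degrees (an illustrative assumption, not the Illiac encoding)
SCALE = list(range(8))  # 0 = tonic ... 7 = tonic an octave up

def melodic_continuity_ok(melody, candidate):
    """Toy stand-in for the counterpoint rules: no leaps larger
    than a fourth, and no immediate repetition of the same note."""
    if not melody:
        return True
    prev = melody[-1]
    return candidate != prev and abs(candidate - prev) <= 3

def generate_melody(length):
    """Generate-and-test: draw random notes, keeping only those
    that satisfy every rule (retrying until one passes)."""
    melody = []
    while len(melody) < length:
        candidate = random.choice(SCALE)
        if melodic_continuity_ok(melody, candidate):
            melody.append(candidate)
    return melody

print(generate_melody(16))
```

The key idea is that the computer never “composes” directly: it proposes notes at random, and the rules act purely as a filter, which is why adding or removing a rule reshapes the character of the output.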

Second Movement

The second movement of the Illiac Suite was created to show that musical notation could be encoded. It again features four-voice polyphony, but this time all four voices are present throughout the piece, forming a soothing harmonization made possible by the more sophisticated programming.

Third Movement

The third movement contrasts sharply with the previous two, not only in being played Allegro con brio but also in taking a more chromatic approach that mirrors the contemporary music of its time. Because the initial generated output was far too dissonant, the researchers added more traditional rules to smooth out some of the jarring sections. This movement is also distinctive in its heavy use of pizzicato and a wider range of string techniques.

Fourth Movement

The fourth movement is fundamentally different from its predecessors: where the earlier movements were generated from traditional rule sets, this one employed Markov chains, in which the choice of the next note depends only on the previous note and a table of transition probabilities. Despite using no standard compositional techniques, the resulting music closely resembles the contemporary music of the era.
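A first-order Markov chain of this kind is simple to sketch. The transition table below is invented purely for illustration; Hiller and Isaacson derived their probabilities experimentally on the ILLIAC I.

```python
import random

random.seed(7)  # reproducible output for this sketch

# Hypothetical transition table over scale degrees:
# each note maps to (next_note, probability) pairs.
TRANSITIONS = {
    0: [(1, 0.4), (2, 0.3), (4, 0.3)],
    1: [(0, 0.3), (2, 0.5), (3, 0.2)],
    2: [(1, 0.3), (3, 0.4), (4, 0.3)],
    3: [(2, 0.5), (4, 0.3), (5, 0.2)],
    4: [(0, 0.2), (3, 0.4), (5, 0.4)],
    5: [(4, 0.6), (3, 0.4)],
}

def next_note(current):
    """Pick the next note using only the current note's row of
    transition probabilities (the Markov property)."""
    notes, weights = zip(*TRANSITIONS[current])
    return random.choices(notes, weights=weights)[0]

def markov_melody(start, length):
    """Walk the chain to produce a melody of the given length."""
    melody = [start]
    for _ in range(length - 1):
        melody.append(next_note(melody[-1]))
    return melody

print(markov_melody(0, 12))
```

Unlike the rule filter of the first movement, nothing here is ever rejected: every note is drawn directly from a probability distribution conditioned on its predecessor, which is why the output can wander without any sense of larger-scale structure.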

Pros and Cons

The creation of the Illiac Suite drew considerable criticism: composers argued that generating music strips away the joy and pleasure of composing by hand and devalues the time and effort that goes into composition. Criticism aside, it was fascinating that a machine could produce convincing, pleasant-sounding music. The Illiac Suite did have shortcomings, however. Even though its chords and melodic connections were close to flawless, the piece lacked a sense of direction and purpose; the rules gave it no intuition for building toward a climax or developing themes across the piece. Even so, the ability to generate music from simple rules remains astounding.

Sources:

The Editors of Encyclopaedia Britannica. “Lejaren Hiller.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 19 Feb. 2020, www.britannica.com/biography/Lejaren-Hiller.

Ewer, Gary. “What the ‘Illiac Suite’ Taught Us About Music.” The Essential Secrets of Songwriting, 27 May 2013, www.secretsofsongwriting.com/2013/05/27/what-the-illiac-suite-taught-us-about-music/.

“Illiac Suite.” Musica Informatica, www.musicainformatica.org/topics/illiac-suite.php.

“ILLIAC.” Wikipedia, Wikimedia Foundation, 5 Sept. 2020, en.wikipedia.org/wiki/ILLIAC.

Nunzio, Alex Di. “Lejaren Hiller.” Musica Informatica, 2017, www.musicainformatica.org/topics/lejaren-hiller.php.

“University of Illinois at Urbana–Champaign.” Wikipedia, Wikimedia Foundation, 8 Sept. 2020, en.wikipedia.org/wiki/University_of_Illinois_at_Urbana–Champaign.

“EMS – History – Lejaren Hiller.” Music at Illinois, University of Illinois School of Music, University of Illinois at Urbana-Champaign, music.illinois.edu/ems-history-lejaren-hiller.