Final Waveform Project: Bronze

 

I hope you enjoyed that mp3 of my track. For my final 035 project, I integrated SuperCollider with Waveform to produce some electronic music. I achieved this by generating random(ish) midi data in SuperCollider and feeding it into Waveform for one of my lead synths.

Screenshot of Final Session (a combination of both of these)

Sample Hunting

As per usual, I began by searching through my sample libraries for interesting samples. I was looking both for intriguing atmospheric samples that would inspire my music and for ones that would provide some nice background noise. I had never experimented with randomness in my music before, so I was a bit unsure of myself. Pretty early on, however, I found that the samples that worked best were ones that establish a musical key without being restrictive (they work harmonically with every note within that key). I also selected a set of drum and percussion samples, as well as a melodic bell sample I made earlier this year in Logic.

Intro:

After opening Waveform, I dragged my atmospheric samples into the track and began developing ideas about their relative placements in the timeline. Before too long, I decided on a key and matched the BPMs of my samples. Some of the samples I didn’t alter at all, other than adding effects. A couple of plugins I used frequently were Tracktion’s Bit Glitter and Melt.

As I mentioned in my previous two waveform projects, I really love these two plug-ins — when used together, they can warm up and almost “break down” a sample. Other samples I put into a multi-sampler patch, chopped up, and used to play melodic motifs using my midi keyboard.

Example of Sample I mapped in Multi-Sampler

I also added some perc loops, some of which I chopped up. The point of this section was to introduce the material with which I’d be working for the rest of the track, as well as set the mood. Thus, towards the end of the intro section, I introduced a pitched down and warmed up version of the main lead melody of the drop — a trimmed vocal sample I tossed in a sampler and played using my midi keyboard. Another notable aspect of this section is a flute line, which I played using a sampler. 

Moving over to SuperCollider

At this point in the process, I moved to SuperCollider to generate semi-random midi. This process was composed of two parts — writing the actual random code, and setting up a line of communication between SuperCollider and Waveform. Professor Petersen shared with me instructions on both of these steps.

In terms of generating random data, I wanted SuperCollider to pick notes within the scale I was working in, rather than notes within a given interval, a slight variation on the code that Professor Petersen shared with us. I achieved this by creating an array containing the notes in my preferred scale. Then I used inf.do and .choose to pick midi notes from the array at random. I attached a picture of the code below.
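For anyone curious, the gist of the scale-constrained approach can be sketched in Python (my actual code was SuperCollider, and the scale below is just a stand-in, not necessarily the one from the track):

```python
import random

# The same idea as the SuperCollider code: restrict the random choice
# to an array of scale tones instead of a raw note range.
scale = [60, 63, 65, 67, 70, 72, 75, 77, 79, 82]  # C minor pentatonic, two octaves

def random_phrase(length):
    """Pick `length` midi notes, each chosen at random from the scale."""
    return [random.choice(scale) for _ in range(length)]

phrase = random_phrase(8)   # always in key, never a "wrong" note
```

Because every candidate note comes from the array, the output can be as random as you like while still sitting inside the key.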

Setting up a line of communication between SuperCollider and Waveform was rather painless, actually. I sent the output of my SuperCollider code through the IAC driver and then set the driver as the input for a Subtractive patch. That way, I could run the code in SuperCollider and immediately hear the generated midi in Waveform.

The plucky Subtractive patch you hear at the beginning and end is the midi I ended up using. I recorded about two minutes’ worth of midi and picked a section I thought would work well.

First Drop 

By far, I spent more time on this section than any other. To start, I pulled up my bell percussion sample. I had a rough idea of what I wanted the section to sound like, drawing inspiration from Sam Gellaitry and Galimatias. The sample was already set to 130 BPM, the tempo I often like to work at, so I didn’t have to do much manipulation other than occasionally chopping it to create a stutter effect. In terms of effect plugins, I added a bit crusher to make the sample more percussive, so it wouldn’t clash as much with the other melodic aspects of the track.

Next, I experimented with some vocal chopping and developed the chop countermelody that exists now. I knew this wasn’t the lead, but rather a complementary element.

Following this, I organized my percussive elements. The drum groove itself is pretty simple: I placed a clap on beats 2 and 4, and the hi-hats, which I recorded at the end, are sparse. Later on, I wanted more width in the track, so I automated panning the hats left and right. To highlight the bass hits, I coupled my 808s with kicks. I knew I wanted my 808s to be a focal point of the piece, so I initially opted for a sample that was roundish and had a good amount of higher-frequency presence. I noticed in my previous tracks that my subs had unnecessary low-end, so I threw on a high-pass filter to do some light correction. A couple of days later, however, I ended up trimming some of the mid and high frequencies of the bass.

Aside from these melodic and drum elements, I spent a decent amount of time on little embellishments. These included chimes, sub-bass slides, risers, and impacts, which helped fill out the section.

 

Main Slide Lead

Surprisingly, I didn’t actually write the main slide lead until the very end of working on this section. I tried a lot of different sample chops and leads for the main melody, but none of them stuck until I came across the one you hear now. Initially, the melody was actually an octave down. It became apparent, however, that there was too much mid-frequency clutter, so I moved the melody up an octave. Because of the strange timbre of the lead, moving it up an octave led to some mixing issues. What ended up doing the trick was a high-pass filter paired with an EQ that took out some mid frequencies and accentuated the highs.


B Section

In this section of the piece, I tried to depart from the choppiness of the first drop by incorporating two sustained pads alongside a traditional lead. For my pads, I edited two instances of Subtractive and played a chord progression in the relative minor. I spent some time messing with the filters in Subtractive to place the pads correctly in the mix. I also used a variety of effect plugins, including chorus, phaser, and reverb. I used some of these effects purely for their aesthetic, while I used others for their utility in the mix. Stereo FX and widener, for example, made room for key elements in the section (the lead and the bell loop).

Main Lead

I had a lot of fun with the main lead in this section — I was able to use my Arturia KeyLab, which has pitch and mod wheels. Editing the pitch-wheel data post-recording turned out to be a bit unintuitive, but I’m happy I incorporated it. I found my patch was a bit boring, so I added a phaser with a very slow rate and very little feedback, which only slightly altered the sound yet brought it to life. Other than that, the fx rack is pretty standard.

Main Lead

Bass

I decided to switch up the bass in this B section for the sake of variety. Not only did I change the bass pattern, but I also used a different sample. I chose a sub-bass that sat lower in the frequency range than the previous one, so it wouldn’t compete with my pads.

Other Elements

I brought back the bell loop in this section; this time I boosted some mid frequencies using an EQ to make the loop even more percussive. Other than that, I kept the same basic drum pattern, replacing the clap with a snare for variation. I also reintroduced a sample chop from the beginning to relate the intro to the B section.

Final Section

Final Pads

I wanted to wrap things up nicely in the last drop by combining elements from both the A and B sections. I kept the same structure and feel as the A section — choppy and fx-focused — while also reintroducing the chord progression from the B section using two new Subtractive patches. The first patch was a high-frequency phaser pad, which has a weird but cool envelope. The second, a standard Juno-ish pad, I added for strength, since the first pad had a weak tail. At the very end of the track, I reverted to a mysterious and dark variation of the intro, which I primarily achieved by using reverb and Melt to alter the samples.

Risers: pitch shifting and filter automation

One thing to note is that I had a really fun time experimenting with risers and section transitions. In most cases, I used traditional white-noise risers, but I also developed other techniques to complement and even replace this effect. For example, before the last drop, I automated a high-pass filter on a synth pad to build anticipation. This can be heard at minute 3:00. Another technique I developed was pitch shifting. In some instances, I put a pitch shifter over my pad or sample and drew in a linear upward automation right before the drop, which achieved a makeshift riser of sorts. The last method I used applied the “Redux” filter in Subtractive. I found that by opening a Subtractive lead patch and inputting a single midi note, I could assign and automate a redux filter cutoff, resulting in a grainy, distorted sound that makes for an interesting riser. This can also be heard at minute 3:00. Oftentimes, I combined two or more of these methods for my section transitions.
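The pitch-shift riser really boils down to a straight line of automation points drawn right before the drop. Here’s a tiny Python sketch of that idea (the one-octave semitone range is an assumption for illustration, not the exact values I used):

```python
# A linear automation ramp: evenly spaced points from `start` to `end`.
def linear_ramp(start, end, steps):
    """Return `steps` automation points from `start` to `end` inclusive."""
    if steps == 1:
        return [end]
    return [start + (end - start) * i / (steps - 1) for i in range(steps)]

ramp = linear_ramp(0, 12, 16)  # sweep the pitch shifter up an octave
```

Drawing the same straight line over a filter cutoff instead of pitch gives the Redux variant of the trick.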

Redux Automation
Pitch Automation

Some Other Things I Did

  • Sidechain compression. I used sidechain on a variety of tracks in this project. I did, however, break the habit of putting sidechain on my bass. As Kenny Beats (and others) has reminded us many times: don’t sidechain the 808s.

Final Mix of the Master

I did a lot of mixing alongside my writing process, which paid off in the end. The only manipulations I did to the master were to add a limiter and an EQ, accentuating the high-end and cutting off some low-end. I’ve noticed that my headphones lean towards the brighter side, so I often end up making these tweaks.

I enjoyed producing this track. I was a bit hesitant at first, because the genre of music that I usually attempt to produce doesn’t seem, at first glance, to be conducive to the type of random midi generation that I know Flume, for example, incorporates in his music. Giving SuperCollider a scale to choose from did the trick, however, and I’m quite happy with how things turned out!

Waveform 2

I hope you enjoyed my track! In this project, I was able to continue to develop some of the techniques that I talked about last time (automating distortion cutoff, using melt plug-in to create warmer, richer sounds, etc.), as well as discover different techniques and experiment with new processes.

INTRO

Opening Chords

The track begins with a simple chord progression, which I recorded on my midi keyboard. For this key/synth instrument I used Subtractive with instances of Phaser and a touch of “Melt.” To make the intro a bit more interesting, I intermittently opened up the pad by automating the “FM” parameter of the frequency filter within my Subtractive patch. After an 8-bar intro of the chord progression, I wanted to introduce a motif, which I accomplished with my midi keyboard along with another Subtractive patch.

I also developed the instrumentation sustaining the chord progression. Instead of reusing the pad patch from the introduction, I created two more Subtractive patches. The first, a low-mid-heavy, grimier, wetter sound, which I achieved using “Bit Crusher” and “Melt,” is juxtaposed with the second, a clean, dry, higher-energy key stab, which I achieved with a phaser effect. These two patches hand off the chord progression to each other every two bars. To emphasize the development from the first pad to the two new pads, I kept the intro pad pretty much in the middle of the mix and added an intensive “Stereo FX” widener to the two new patches. This method of devising the chord progression was my attempt at transforming a simple key progression into something fun and surprising. Overall, I’m pretty happy with how it turned out.

Other elements in the first section of the project include panned, pitch-shifted hi-hats, snares panned hard left and right, and a round sub-bass, which hits every two bars.

Snare Submix

Second Section

In the second section, I wanted to preserve some elements established in the introduction, while also giving the listener a fresh sound. There are a variety of things going on here, so I’ll provide a broader description of the overall composition process followed by some remarks on some of the sounds one hears.

Composition:

I wanted this section to be high energy; I was striving for a busy, yet uncluttered listening experience. To achieve this, I tried incorporating audio samples that provided different textures, chopping midi sections, and sprinkling melodic embellishments throughout the section. In comparison to the last project, I programmed all of my drums at the end this time. My drums included individual hi-hat, kick, and snare samples, which I processed lightly and arranged. While the kicks and snares are somewhat unexpected at times, I programmed fairly uniform hi-hats to provide some consistency.

Distorted Lead:

The section opens after the impact with a distorted lead, which is actually another Subtractive patch. Although the actual midi information that I programmed is quite simple, the lead sounds a bit intricate because I parameterized a redux filter on the patch, linked it to a wheel on my midi keyboard, and recorded extensive automation to vary the pitch and timbre. The makeshift riser that one hears after the impact at 0:42 is the lead with a very low redux cutoff. As I increase the cutoff, the sound morphs from an atonal, gravelly buzz into a distorted, tonal lead. Throughout the entire section, I use the cutoff filter to modulate between the gravelly, atonal version of the patch and the tonal version.

Automation on Distorted Lead

Chord Progression:

With regard to structure, the chord progression is similar to the introduction. I did, however, vary a couple of things. Notably, I replaced the melted pad with an audio sample, which I then pitch-shifted. I applied widener, EQ, and compression to this sample, as I did to all the other pads. The stab synth patch in this section is identical to that in the previous, although I altered a couple of midi notes to better match the audio sample.

Synth Melody:

In this section, I also introduce a new melody to complement the distorted lead. The patch is practically a saw wave – I wanted to keep it very simple. I recorded the midi information with my keyboard and decided to purposefully leave the input unquantized. I quantized information earlier in the track, but something about the simple sound design of the patch made it feel like leaving the line unquantized was appropriate — at least it sounded nice to me.

Bass:

There isn’t too much going on with the bass, sound-design-wise. I imported a separate 808 patch and used my midi keyboard and Waveform’s multi-sampler to create the line. I EQ’d out some intense lows and distorted the sound a bit as well, to make it a bit brighter. In most cases, I aligned my kicks and 808 hits to emphasize the bass hits, but this resulted in the problem of bass and kick frequencies clashing, which I discuss later on.

Post Drop and Outro:

The third section of this piece could be considered a reflection of the first section with call-backs to the second. I practically repeated the chord progression, bass line, snares, and hi-hats from the first section, as well as reintroduced the motif, mildly tweaking some parameters and effects as needed. Some elements of the second section make appearances, such as the saw melody, which takes form in a new Subtractive patch for the sake of variation.

Outro

I actually wrote the outro before the middle sections. Inspired by producers such as Sam Gellaitry and Galimatias, I decided to end the track by taking it somewhere completely new (which in general isn’t always a great idea). The harmonic elements of the outro, therefore, are similar to the other sections, yet the genre/feeling is completely altered. I intended to create some jazz-electronic fusion, which took the form of jazzy drums, a piano patch, and a soft, gliding synth melody. I added heavy reverberation to capture a fleeting feeling, as if the music is fading away, or distant.

Mixing and Master

Since there are a lot of elements at play in this project, I took Professor Petersen’s advice of “mixing as I went.” In general, I subscribed to the philosophy of positioning lead melodies, kicks, snares, and bass in the center of the mix, and placing all other elements around them. I added the Stereo FX plugin to all of my pad elements to push them further to the sides, and panned the hi-hats. I had a really tough time mixing my bass in the track. I wanted the bass to be present and have a punch without sacrificing clarity, which proved to be difficult. To remedy this, I experimented with distorting the bass and taking out excessive low-end frequencies.

I didn’t do much processing on the master, aside from adding a bit of high end with an EQ, subtracting a bit of low end, and adding a touch of compression.

Some Take-Aways

  • 808’s are really hard to get right
  • Addition isn’t always a net positive: sometimes I get carried away when it comes to adding elements, since I sometimes strive for a choppy, busy track. This often results in a muddy mix. Instead, perhaps I should focus on making sure that what I already have sounds good before adding more.
  • Compress Less.
  • Play your track for a friend: sometimes I incorporate complicated rhythms, which sound fine to me while listening because I project what I think it sounds like, or what it should sound like, onto what it actually sounds like.

 

 

Waveform Project 1

The Track:

 

Although there isn’t a particular “style” or “genre” of music with which I identify, there are definitely similarities between all of my compositions. To that extent, whether consciously or unconsciously, I had a general idea of what I wanted the track to involve: some kind of fusion of acoustic and digital sounds, swung drum beats, murky atmospheres, heavy bass, and distortion. From there, I began the crate-digging process:

The Samples:

At the outset of this project, I spent an hour or so scrolling through different sound libraries. I really enjoy foley and atmospheric samples, so I downloaded a bunch from the SSLIB. I took a variety of samples from the “Household” pack, the Adobe “foley” and “ambiance” packs, as well as from the “Liquid” pack. For this project, I wrote all the melodic samples in midi — one melody, one chord progression, and one sub-bass line — and rendered them to audio. I wrote these three samples together at 110 BPM, so I could use them together with ease, though I ended up chopping the individual samples anyway. On top of various percussion samples that I imported, I threw in some snaps, which I recorded in my dorm room. I tried semi-treating my room by hanging my blankets on the walls; the recordings turned out better than I thought they would! For the snaps, I used a lossless recording app on my phone and sent the file to my computer. Since it was a .wav file, I could just drag it into my project without any issues.

The Composition:

I started working on this track by programming the two drum grooves. I knew I wanted the first to be swung and have a “pulling” effect, while the second would be a more basic drum pattern. To produce the actual loops, I took the individual kick, clap, snap, and hi-hat samples and placed them in their respective tracks. The claps were fairly easy to do (2 and 4), but getting the kicks right was a bit more laborious. The process involved me looping eight bars, first adding in the claps, and then adjusting each kick as needed.

The first groove (seen below) didn’t have too much processing. I compressed all the drum samples gently to make the beat a bit tighter and slapped a widener on the snaps/claps (just because I like the way it sounds). I found a jazz drum-break sample in one of my sample libraries and put on an Apple high-pass filter to get rid of the lower kick frequencies, which were conflicting with the kicks that I had already programmed. I left some of the higher kick frequencies in because they added to the rhythm without conflicting with other frequencies (they sound similar to a tom).

HP Filter on Drum Loop
Drum Groove 1

 

The second groove is much simpler (seen below). Again, I put a clap on 2 and 4 and programmed in kicks to complement the sub-bass. (Usually, I put a kick on each sub-bass hit, except for passing sub-bass notes.) In the second groove, I added hi-hats, which I didn’t include previously. I have a love-hate relationship with hi-hats, which originates from the fact that it’s very hard for me to make them sound convincing. In this part of the track, I also reversed some open hats, hard-panned them, and added reverb to play off of the synth lead.

Drum Groove 2

In terms of the melodic elements, I didn’t alter my sub-bass, piano, or bell samples too much, other than chopping them here and there. I learned this trick by listening to other artists — there’s something about actually chopping up the rendered audio, rather than just repeating the notes in midi, which I like. (See below.)

Example of Chopped Melody

For the bass in the first half and outro of the track, I took a sample of a bass guitar playing a low C and arranged and pitch-shifted the sample to make an artificial bass line. Although I used a lot of audio effects on the melodic aspects of the project, 90% of them were EQs and compressors (I tried side-chaining). That being said, I did end up experimenting with some really cool Waveform plug-ins, such as “Melt,” a modulation effect that made my piano riff a lot muddier (in a good way), as well as “Bit Glitter,” which I discuss later on.

Melt Plugin Interface

I also did A LOT of automation. Most of the automation was volume control (fading clips in and out), panning, or reverb wetness levels. During the post-drop section, however, I created an artificial tremolo by automating the volume on the piano. In addition, I automated a bit-crusher in the outro (which I discuss later). 
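The artificial tremolo is really just periodic volume automation. In Python terms, the gain curve I was drawing by hand looks roughly like this (the rate and depth values are invented for illustration, not the ones from the track):

```python
import math

# Gain envelope for an "artificial tremolo": the volume oscillates
# between (1 - depth) and 1.0, `rate_hz` times per second.
def tremolo_gain(t, rate_hz=5.0, depth=0.5):
    """Volume multiplier at time t (in seconds)."""
    return 1.0 - depth * 0.5 * (1.0 + math.sin(2 * math.pi * rate_hz * t))

# Sampling the curve gives the automation points you would draw by hand.
points = [tremolo_gain(t / 100.0) for t in range(100)]
```

A dedicated tremolo plugin does the same thing, but drawing the automation yourself lets you warp the shape however you like.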

Mixing Atmospheric Samples with Rhythmic Elements

I spent a lot of time messing around with atmospheric samples in this track — from what I can tell, when used tastefully, they can make any track a lot more interesting. In the first groove, for example, I added a random SSLIB recording of someone starting and driving a car to provide some background noise. I found that the percussive elements of the atmospheric sample complemented the rhythm of the track by adding unusual ghost hits and mid-frequency warmth. I was actually quite happy with my background noise samples being a bit rhythmically out of sync with the beat at times, as they made my track sound more “real,” and less like a DAW project. I didn’t just use atmospheric samples for background noise, though. For the first time, I experimented with using foley/atmospheric samples in the build-up and drop, instead of using classic risers and impacts. For example, instead of using a crash cymbal on the drop, I used a combination of a scream sample, which I previously owned, and water-droplet and door-shutting samples from the SSLIB. I really like the way it turned out, and I think I’m going to continue doing this going forward.

Messy In A Good Way

In pursuit of developing a new sound, I tried to experiment with “imperfection” in various ways throughout the track. For me, the drum groove is the focal point of the first half of the project, but when I initially programmed in my drum rhythm, the gridded perfection of the beat felt a bit boring and didn’t capture the “loose” groove, which I was attempting to accomplish. One way I remedied this was by moving the claps forward, off-grid. It was interesting to see how I could artificially mimic the slight imperfections of a drummer, as well as to see how purposefully moving bits of the drum beats a bit off-grid animated the track.
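The “pulling” trick amounts to subtracting a small constant from each clap’s grid position. A quick Python sketch (onsets are in beats, and the shift amount is a made-up value for illustration):

```python
# Nudge every clap slightly ahead of the grid, mimicking a drummer's pull.
def humanize(onsets_beats, shift_beats=-0.02):
    """Shift every onset by `shift_beats` (negative = early), clamped
    so nothing lands before the start of the timeline."""
    return [max(0.0, t + shift_beats) for t in onsets_beats]

claps = [1.0, 3.0, 5.0, 7.0]   # beats 2 and 4 of two 4/4 bars (0-indexed)
early = humanize(claps)        # every clap now hits just before the grid
```

Randomizing the shift per hit, instead of using one constant, gets even closer to a human feel.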

Example of Claps Hitting “Early”

Grimy Tones:

It turns out that I LOVE bit-crushing. If you love distortion and haven’t checked out the Waveform plug-in “Bit Glitter,” I strongly encourage you to do so. I used the plug-in a couple of times — notably, on my melody during the drop, and once on my piano riff during the outro. In the first case, I wanted to add something to the melody to make it a bit more interesting and messy (for lack of a better term). Although I don’t fully understand what the “glitter” parameter of the plug-in specifically does, applying the effect made my lead fuzzy and obnoxious. The second case was similar in that I wanted to make the outro more interesting. This time, however, I tried something new. I discovered that adding the bit crusher to the piano sample introduced a new tone, or harmonic element. I experimented with automating the bit crusher to create a sub-melody out of the resulting tone.

Bit-Glitter Interface

It was really fun learning to navigate Waveform, and I look forward to the next project!

Recording with the iPhone 11

The iPhone 11 that I used for this experiment has three microphones — one for phone calls (bottom), one for videos (next to the camera), and one for Siri (next to the ear speaker). Overall, reviews of the recording capabilities of the iPhone 11 are decent. The phone seems to have a pretty flat frequency response curve, excluding very high ranges. As with most devices, though, midrange resonances can mess up a recording quite easily. To my knowledge, these microphones can record at a maximum sample rate of 48 kHz at 32-bit depth and have both mono and stereo recording abilities. For a more in-depth review of the iPhone 11 and its cousins, follow this link: https://www.dxomark.com/apple-iphone-11-audio-review/

The Application:

My search for a “lossless” recording application that also allowed disabling auto gain control was not easy. Previously, I had regularly used Voice Memos for recordings, because I wasn’t too concerned with audio quality at the time. Voice Memos automatically compresses audio recordings for better storage, but by navigating to the audio quality settings in General settings, one can switch on “lossless.” That being said, Voice Memos records in mono, and there is no convenient way to regulate the auto gain control. Other applications, such as Dolby On, allow lossless, uncompressed 16-bit recordings, but seem to apply various gain-balancing and tone-balancing technologies. I finally chose Auphonic, an application that allows users to select between the three iPhone microphones, as well as choose the sample rate, bit depth, and input gain. By default, Auphonic disables all iOS preprocessing.

Recording:

I did all my recordings with the bottom microphone at a sample rate of 48 kHz with 24-bit (the maximum) precision. I used my computer speakers to project both the sine and white noise files.

My experiment included six recordings total — three for the sine sweep and three for white noise. Throughout my six trials, I experimented with both room size and microphone distance from the output source. For the first two recordings of each sound, I recorded in my room at two distances — one foot away from the source and four feet away. The final trial for each sound was recorded in my common room, with the microphone 15 ft away.

FREQUENCY RESPONSE GRAPHS:

Sine Sweep

WHITE NOISE

Results:

Although there are many differences between the three white noise recordings, some patterns established themselves. In all three recordings, the microphone didn’t really pick up any frequencies in the 200–300 Hz range. The extremity of this trough seems to have been exaggerated as I moved the microphone further away from the output source. All three recordings also show a trough in the 3,000–6,000 Hz range and a peak at 7,000 Hz. One can also see that my iPhone doesn’t capture frequencies above 20,000 Hz well. The good news is that the dB difference between the loudest and softest frequencies remains about the same.

There are also some recognizable patterns among the three sine-sweep recordings. Like the white noise recordings, there is a loss of frequencies in the 200–300 Hz range. There is another trough around 700 Hz for all three, and the peaks all occur at the same frequency. On top of that, the dB difference between the loudest and softest frequencies remained similar at different distances.

From my observations, I am convinced that my iPhone 11 has a fairly flat frequency response. I suspect that although my microphone may exaggerate the dip in the 200–300 Hz range, the problem originates from my poor computer speakers rather than from my iPhone mic. It is also generally the case that my frequency response curves were flatter the closer I was to the source.

 

Illiac Suite, Puckette, and IRCAM

To me, the two most interesting aspects of Illiac Suite, the first computer-generated composition for a group of instrumentalists, are the intricacies of the algorithm Hiller and Isaacson implemented (which we briefly touched on in class), and the philosophy behind the production of each of the composition’s four movements.

Illiac Suite is a product of an algorithm consisting of three overarching elements: initialization, generation, and verification. The first phase of the algorithm is concerned with establishing a set of rules, which primarily consists of axioms of traditional music theory and rules generated by analyzing other musical compositions. The second phase concerns itself with generating random data points for a set of parameters such as rhythm, pitch frequency, etc. Verification, the process of validating whether or not the generated data fit the initial rules, follows in what is a cyclical, self-verifying program (Di Nunzio).
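The initialize/generate/verify cycle described above is essentially a generate-and-test loop. Here’s a toy Python version of that structure (the two rules are invented stand-ins, far simpler than Hiller and Isaacson’s counterpoint rules):

```python
import random

# Initialization phase: establish a set of rules the music must obey.
RULES = [
    lambda m: all(abs(a - b) <= 7 for a, b in zip(m, m[1:])),  # no leap larger than a fifth
    lambda m: m[0] % 12 == 0,                                  # start on the tonic (C)
]

def generate(length):
    # Generation phase: purely random note numbers in a two-octave range.
    return [random.randrange(48, 72) for _ in range(length)]

def verify(melody):
    # Verification phase: the candidate must satisfy every rule.
    return all(rule(melody) for rule in RULES)

def compose(length):
    # The cycle: generate candidates and discard them until one verifies.
    while True:
        melody = generate(length)
        if verify(melody):
            return melody
```

The real algorithm was considerably more sophisticated, but the cyclical, self-verifying character is the same: random generation filtered through a rule set.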

The Illiac Suite is the amalgamation of four distinct experiments. In the first experiment, the rules of the algorithm are defined by traditional counterpoint, which includes three main categories of rules: melodic, harmonic, and mixed rules, which did not fit into the other two categories. The first experiment has three sections — in the first, there exists one voice, in the second, two, and in the third, four.

The second experiment is quite similar to the first, following strict laws of counterpoint, but consists of a greater variety of rules derived from western music theory. Additionally, instead of three sections, it is just one.

Among various other differences, tempo and dynamics, which had previously been predetermined by the programmer, are the focus of the third experiment.

The final experiment implements a stochastic process, in which the probability of certain outcomes is influenced by previous outcomes (Blomberg). This experiment is a complete departure from the previous three, for its rules revolve around non-musical principles.
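A minimal sketch of such a stochastic process is a Markov chain, where the probability distribution for each note depends on the note before it (the notes and weights below are invented for illustration, not drawn from the fourth experiment):

```python
import random

# Each note's possible successors and their probabilities.
transitions = {
    "C": {"D": 0.5, "E": 0.5},
    "D": {"C": 0.3, "E": 0.7},
    "E": {"C": 0.6, "D": 0.4},
}

def next_note(current):
    # Weighted random choice among the notes reachable from `current`.
    options = list(transitions[current])
    weights = [transitions[current][n] for n in options]
    return random.choices(options, weights=weights)[0]

def walk(start, length):
    # Build a melody where each step depends only on the previous note.
    notes = [start]
    while len(notes) < length:
        notes.append(next_note(notes[-1]))
    return notes

melody = walk("C", 8)
```

Unlike the pure generate-and-test approach of the earlier experiments, here the previous outcome directly shapes the probabilities of the next one.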

Although rule-based composition has since been improved, Illiac Suite set an impressive precedent.

PUCKETTE

Miller Puckette is an American computer programmer, best known for designing Max and Pure Data. Puckette developed Max, a visual programming language for music production, in 1988 while working at IRCAM. Interestingly enough, since Puckette developed Max under an IRCAM license, he didn’t actually have control over the program. He later developed Pure Data, a program that improves upon Max.

Puckette on Pure Data

Though similar, Pure Data is different from Max in myriad ways, and is generally considered a faster and more stable version of the latter. (I know what you’re thinking: “PD isn’t integrated into Ableton Live, so who cares?”) In Puckette’s 1996 paper, “Pure Data: another integrated computer music environment,” he outlines some of the problems that he attempted to solve through Pure Data: “the most important weakness of Max is the difficulty of maintaining compound data structures of the type that might arise when analyzing and resynthesizing sounds or when recording and modifying sequences of events of many different types…Pd’s [Pure Data’s] working prototype attempts to simplify the data structures in Max to make them more readily combined into novel user-defined data structures” (Puckette). Since 1996, Pure Data has made tremendous strides, and Puckette is still very active in the community — musical souls don’t grow old.

IRCAM

IRCAM, the Institut de Recherche et Coordination Acoustique/Musique, where Puckette wrote Max, was established in the 1970s as an institution focused on the production and research of music under the Centre Pompidou, a cultural center in Paris. The prominent French composer and conductor Pierre Boulez was invited by President Pompidou to lead the institute. At its founding, the institute’s role was to support classical musicians in realizing their abstract visions with the use of developing computer technologies. Oftentimes, however, the creativity of the composers informed the technicians by providing ideas for new technologies. This symbiosis (which we also touched on in class) was much of the appeal of institutions such as IRCAM.

L'Ircam | IRCAM
IRCAM in Centre Pompidou

According to a 2008 NPR segment on IRCAM, the goal of the institution is very similar today: IRCAM works with composers to realize their ideas through technology. Lately, IRCAM has also worked independently with a variety of organizations. On their website, you can read about the many projects with which they’re currently involved. Some of the projects that stood out to me are Chanter, real-time controlled digital singing, and Animaglott, which is focused on reproducing the different forms of human speech, i.e., whispering, talking, and shouting. They’re not just working on voice-related synthesis technology, however. Gesture analysis and recognition is a current project focused on building interactive musical systems controlled by physical gesture. For more information on IRCAM’s current projects, follow the link: https://www.ircam.fr/recherche/projets/

Works Consulted and Cited:

Blomberg, C. (2008). Stochastic Processes. In Physics of life: The physicist’s road to biology. Amsterdam: Elsevier.

Di Nunzio, A. (n.d.). Illiac Suite. Retrieved September 08, 2020, from http://www.musicainformatica.org/topics/illiac-suite.php

Di Nunzio, A. (n.d.). Miller Puckette. Retrieved September 08, 2020, from https://www.musicainformatica.org/topics/miller-puckette.php

Hiller, L. (2018, February 21). Computer music. Retrieved September 08, 2020, from https://www.britannica.com/art/electronic-music/Computer-music

IRCAM: The Quiet House Of Sound. (2008, November 16). Weekend Edition Sunday. National Public Radio. Retrieved September 08, 2020, from https://www.npr.org/templates/story/story.php?storyId=97002999

IRCAM. (n.d.). Retrieved September 08, 2020, from https://www.ircam.fr/

Mandelbaum, R. F. (2017, February 21). Miller Puckette: The Man Behind the Max and Pd Languages (and a Lot of Crazy Music). Retrieved September 08, 2020, from https://spectrum.ieee.org/geek-life/profiles/miller-puckette-the-man-behind-the-max-and-pd-languages-and-a-lot-of-crazy-music

Media Art Net. (n.d.). Hiller, Lejaren; Isaacson, Leonard: Illiac Suite: Quartet No. 4 for strings. Retrieved September 08, 2020, from http://www.medienkunstnetz.de/works/illiac-suite/

Miller Puckette. (n.d.). Retrieved September 08, 2020, from https://music-cms.ucsd.edu/people/faculty/regular_faculty/miller-puckette/index.html

Puckette, M. (n.d.). Pure Data: Another Integrated Computer Music Environment. 

Kozinn, A. (1994, February 01). Lejaren Hiller, 69, First Composer To Write Music With a Computer. Retrieved September 08, 2020, from https://www.nytimes.com/1994/02/01/obituaries/lejaren-hiller-69-first-composer-to-write-music-with-a-computer.html