Final Project: Greensleeves with 8-part harmony, epic organ, and bona fide church bells??

ChristmasEve1878.jpg

“Christmas Eve” painting by J. Hoover & Son, 1878

This project began, like all my projects, with a random voice memo. In winter 2019, I was playing the tune of “Greensleeves” on the piano with some minor 13th chords. It’s hard for me to articulate, but the recording (and the “Greensleeves” folk song in general, which was written anonymously around 1580) inhabits a liminal space between wintery, nighttime, enchanted vibes and a classic, elegant, timeless English feel. The sense of timelessness and spaciousness really gets me.

Voice memo:

I thought of doing “Greensleeves” because my bigger goal with this project was to create an arrangement of a *classic song* (classic in this case just meaning old and well-known) in Waveform. I gave myself the following guidelines:

  1. includes me singing complex stacks of harmony like Jacob Collier, but obviously in my own style
  2. pushes me to craft my own synths in Subtractive and 4OSC
  3. pushes me to avoid existing samples and record found objects, manipulating unconventional sounds to create beats, synths, etc

Once I decided on the “Greensleeves” tune, I set my priorities as (1) first, (2) second, and (3) third most important. This was going to be 1) an a cappella arrangement, 2) undergirded by churchy instrumental sounds, and 3) supported by samples, with vocal recording obviously at the center.

Then I had to decide which lyrics to use. I wasn’t going for medieval love song vibes, and the original words of “Greensleeves” are kinda weird. There were a surprising number of alternative lyrics. I really wanted to go for something universal/secular-ish, but “The bells, the bells, from the steeple above, / Telling us it’s the season of love” just wasn’t cutting it (sorry, Frank Sinatra). I settled on the original Christian adaptation “What Child Is This?” by William Chatterton Dix (1865). (I opted to change “Good Christians, fear, for sinners here / The silent Word is pleading” to the alternate “The end of fear for all who hear / The silent Word is speaking.”)

For me the Christian Christmas story definitely inhabits that mysterious, shadowy, timeless feeling that I was talking about earlier (the wise men following the star, the shepherds out at night). I liked imagining reverbed, dark organ and choir sounds fitting into that space.

Above all, I love using harmony to color things. I sang the vocally-harmonized equivalent of my voice memo above. The Waveform part of this was easy: stack a bunch of audio tracks.

Unmixed voice sketch of piano improv:

But the musical part was hard. I was so focused on intonation, rhythm, and line that first of all I sacrificed the synchronization of consonants (even in the final recording they’re often out of sync) and second, I couldn’t improvise. I don’t have enough vocal skill yet for that kind of fluidity or flexibility. Improvisation and imprecision did become an important generative tool, however, in the routine that I fell into for each verse and chorus:

  1. Brainstorm a list of adjectives to describe the feel of this verse and its dramatic role in the song.
  2. Improvise a harmonization on the piano over and over until I like it, and put the midi in Waveform.
  3. Quickly sing lines over the harmony that feel natural (e.g., changing the chord inversions to create a smoother bassline, since the bottom notes on the piano version probably jumped all over the place).
  4. Change my piano midi accordingly, and take a screenshot of the piano roll as a memory aid for the contour of each vocal part.
  5. Sing vocal parts reasonably well. This was actually the easiest part. Confidence is key. I’m not an amazing singer by any means, but everything sounds at least decent with some brave intentionality. Also, I found that doubling or tripling the bass and melody compensated for minor issues with tuning, breath control, or volume.

Vox:

Piano roll (hard to read, but a good memory aid. Notice that the lines in this one are NOT singable yet lol.. like look at the top line):

Meanwhile, I explored samples and synth sounds that I could use. I tried to record my keychain to get a sprinkley/shimmery sound, but after twiddling for an hour with delay effects and feedback I just couldn’t get the sound I wanted, and used a preexisting sample. Other attempts were more successful: I tapped my stainless steel water bottle with my finger, producing a warm tenor-range tone reminiscent of bells.

I added light reverb, high-shelf-EQ’ed out the high frequencies to reduce the potential for dissonance, and automated pitch-shifting to tune this with the wordless “doodah” thing you heard me sing above (fun fact, my water bottle is exactly a quarter-tone sharp from 440Hz tuning). This effect was very spooky and breathy and therefore I made this section my refrain.
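For the curious, the quarter-tone math works out neatly. Here's a quick sketch in Python (the exact bottle frequency is my own back-calculation from the 50-cent offset, not a measurement I saved):

```python
import math

A4 = 440.0
bottle_hz = A4 * 2 ** (50 / 1200)  # 50 cents (a quarter tone) above A440

def cents_between(f1: float, f2: float) -> float:
    """Interval from f2 up to f1, in cents (1200 cents per octave)."""
    return 1200 * math.log2(f1 / f2)

# The automated pitch shift needs to pull the sample down ~50 cents:
shift_cents = -cents_between(bottle_hz, A4)
print(round(bottle_hz, 1), round(shift_cents, 1))  # 452.9 -50.0
```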

Those bell harmonics were SO COOL, dude (listen close: it’s a major ninth chord in root position, I kid you not).

Original sample:

Pitched, reverbed:

Then there was the organ. Ahhhhh, the organ. I tried to make it myself, I researched crazy additive synthesis stuff with sawtooth waves, and then I discovered that 4OSC had an organ patch that blew my attempts out of the water. Its Achilles’ heel, however, was so darn annoying. The organ patch emanated a high-frequency (2-10kHz) crackle that MOVED AROUND depending on what notes you were playing and got really bad when I played huge organ polychords (which, if you haven’t noticed, is kinda my musical M.O.). My best solution was to automate notch filters that attacked the crackle optimally for each chunk of music. I spent a lot of time deep inside that organ dragging automation lines controlling the cutoff to the elusive median frequency that best subdued the dEmoNic crAcKLe (band name list, Mark?). It also helped in certain sections to automate the Q and gain of various notches and curves, not just for the crackle but also for the overall brightness/darkness that I wanted.
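If you've never built a notch filter outside a DAW, here's what a single static notch looks like in scipy. The center frequency and Q below are hypothetical stand-ins; in Waveform I was automating these per section, and the real crackle moved around with the notes.

```python
import numpy as np
from scipy import signal

fs = 44100
# Hypothetical crackle center and width; Q = center / bandwidth.
f_notch, Q = 4000.0, 8.0
b, a = signal.iirnotch(f_notch, Q, fs=fs)

# Demo: a 4 kHz "crackle" riding on a 220 Hz organ tone gets carved out,
# while the tone itself passes through essentially untouched.
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * f_notch * t)
y = signal.filtfilt(b, a, x)  # zero-phase filtering
```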

The original organ sound on the final chorus, without EQ (or light reverb/compression):

EQ automation:

Two more very interesting sounds I found. Firstly, in Subtractive, there’s a patch called “Heaven Pipe Organ” that sounds very little like a pipe organ but clearly had the essence of a spooky, ghostly, high-frequency vibe to add to my vocals. There was even this weird artifact created by an LFO attached to pitch modulation that caused random flurries of lower-frequency notes to beep and boop. I mostly got rid of it but not entirely, because I was almost going for wind-like chaos. The main things I did were:

  1. Gentle compression
  2. Reverb with bigger size (to stay back in the mix) but quicker decay (I didn’t want the harmonic equivalent of mud 🤷🏼‍♂️)
  3. Quicker attack on the envelopes (I wanted it to whoosh in, but not like 20 seconds behind the vocal because that would be dumb)
  4. EQ. I wanted those high freqs to really shine: 

Honestly, that wasn’t a whole lot of change to produce a dramatically altered sound.

“Heaven Pipe Organ” before:

After:

Finally, bells. Christmas = bells. You might recall my post called “Boys ‘n’ Bells” (or something, idr) about Jonathan Harvey’s Mortuos Plango, Vivos Voco. I was seriously inspired by the way he has a million dope bell sounds surround you octophonically. I couldn’t quite do that, but I found some wonderful church bell samples from Konstanz, Germany, (credit: ChurchBellKonstanz.wav by edsward) that I hard-panned left and right during the epic final chorus and mainly tuned to the bassline:

I also found a nice mini bells sound (the 4OSC “Warm bells” patch), which had good bell harmonics preloaded and just needed slight EQ to reduce some nasty 10kHz buzz. I had it accompany the melody of the final chorus, but my favorite use was a sort of “musical thread” that bounces back between a countermelody and a repeated note (I swear to gosh that there’s a specific term for this technique that I learned about in Counterpoint class… oh well):

As I started recording the verses and choruses and filling in the synths/samples, I focused more on the overall flow of the piece. Much of the structural work was already done for me: there’s a musical antecedent/consequent (call/response) structure within each verse and chorus, clearly reflected in the melody and harmony. They start out feeling like a question and end on a solid cadence. As the arranger, I played around a lot with this structure. Often I flipped the harmonic structure of verses so that they started out stable and then went on extended forays into instability/modulation in order to create a build into the next section. Here’s an example where the second chorus actually builds into the third verse, which surprises you because, well, you’ll see:

What you just heard is probably my favorite part. The second chorus is embarking on an elaborate quest to escape the key of F# minor. A dramatic descending circle-of-fifths modulation ends on a classic II-V that we have been *culturally indoctrinated* to expect to go to the I (in this case E major). Going to the vi would be equally unsurprising (vi, C#min, is closely related to E major… this is a standard deceptive cadence). BUT NO. In a moment of serendipitous experimentation, I accidentally dragged an organ midi recording of Verse 3 right after the Chorus 2 modulation I described. The Verse 3 organ midi happened to be in B minor, and the transition sounded BEAUTIFUL. It has those haunting English vibes, and it works harmonically because the voice leading from B major (V in E) to B minor works, even though functionally this is nonstandard.

Here’s what the transition from Chorus 2 to Verse 3 would have sounded like if it was predictable (Bmaj to C#min):

Here, again, is the colorful shift I chose from Bmaj to Bmin:

I’m pretty convinced about the second one, but please let me know which one you think is more dramatic in the comments!
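If you want to sanity-check the voice-leading claim, here's a toy sketch: Bmaj to Bmin moves exactly one pitch class (D# slides down to D), while the "expected" Bmaj to C#min moves all three voices.

```python
# Pitch classes as semitones above C; a quick voice-leading check.
B_MAJ = {"B": 11, "D#": 3, "F#": 6}
B_MIN = {"B": 11, "D": 2, "F#": 6}
CS_MIN = {"C#": 1, "E": 4, "G#": 8}

def moved(src, dst):
    """Pitch classes of src that don't survive into dst."""
    return set(src.values()) - set(dst.values())

print(moved(B_MAJ, B_MIN))   # only the D# (pitch class 3) has to move
print(moved(B_MAJ, CS_MIN))  # all three voices move
```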

The last element of this project I wanna talk a bit about is mixing. Mixing was hard. Because this is centrally an a cappella project, I put the voice in the center (spatially, spectrally, volume-wise). My first main focus was getting the balance of the vocal stacks right. I spent hours arranging the tracks by color (red=melody, blue=bass, in-between = related/supporting parts), putting each separately-recorded chunk into folders within my vox submix, and getting the levels/panning right in Mixer:

For a while, I wanted to emphasize reverb in the vox, but inevitably that causes the vox to sit back in the mix. It needed to be front & center. So instead, I generally tried to compress the vocals for a close/direct sound and made a valiant effort to keep all the other sounds out of the way spectrally. But sometimes I liked the effect of blending a high-freq pad with the vox (see the Chorus 2 demo above) or giving a kick in the bass (see the use of bells). Speaking of bass: I showed my mix to my friend Liam and he suggested boosting the bass to make the vox richer. I did, and it helped. Shoutout to Liam.

So yeah. I think that covers a lot of my decision making! Here is the final product. Enjoy, and HAPPY HOLIDAYS!!

Waveform 2: Sci-fi beat -> morbid falsetto -> ambitious crossover??

Looking for inspiration, I found this piano thing deep in the cobwebs of my voice memos. It felt very loopable and I liked both the rolling short-short-looong-looong-looong feel and the harmony (on a macro level I think you see a Bb dominant natural 11 chord “resolve” via voice leading to a Bbm9). So here’s the kernel (has some nice authentic background distractions):

Then I input the loop into Waveform using the Subtractive oscillator, and I really liked the Africa Horns (like, the Toto song) synth as a basic piano sound. The punch was great, but the pitch was a bit too grating and squishy, sorta. Lowering the frequency cutoff of the lowpass filter solved almost everything. To make it more piano-like, I also made the attack of the filter envelope aggressive and shortened the amplitude release so that the sound didn’t accumulate and get soupy.

Africa Horns before and after (it’s subtle!):

I also added quick 808 drums (drum sampler) and bass (I chopped up and rearranged the Brass Brigade Sub Bass loop). Lastly, I added in some eerie quartal harmony in a pad that I made in Subtractive. I really liked the sound of a lead synth called “sub bass,” so I turned off the monophonic setting and made just a few adjustments (faster attack in general, and less detuning). Reverb and stereo widening especially helped the surreal effect.

At this point I figured out that we’re in 6/4 and we could repeat four cycles of the groove like so:

As you can hear, I also sped up the tempo at this point from 100 bpm to 140 bpm. Instead of feeling just sluggish, the groove suddenly felt both like it was plowing ahead and like it still had a heavy, behemoth quality. Especially once I added some dramatic strings, it became, as my floormates put it, “very sci-fi.”

Let’s talk about the strings. I wasn’t going for cliché or anything (not tryna make generic clip art), but the track just needed some film-score dramatic strings. I settled on Orchestra Loop #1 because the chords worked, and pitch-shifted it way down. I spent a lot of time *fiddling* with the string sounds in Elastique Pro, first of all because the chord changes needed to align with the beat (I didn’t realize how hilariously out-of-sync it was until I showed my non-overthinking-musician friends, who noticed immediately). It was also frustratingly weird that the transposition control seemed inverted. It’s not that I have perfect pitch, which I only ever use as a party trick. It’s that, when you input a higher key, the program literally transposes the audio to a lower key. Same goes for bpm, in fact (higher bpm = slower speed). Ah well. Let me know if I’m missing something.
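For reference, here's the standard key-to-ratio math a transposition control should apply. The sign flip below just illustrates the inverted behavior I observed, not Elastique's actual internals:

```python
def semitone_ratio(n: float) -> float:
    """Playback-rate ratio for transposing by n semitones in 12-TET."""
    return 2 ** (n / 12)

# What entering +2 (up a whole tone) should do:
print(round(semitone_ratio(2), 4))   # 1.1225
# What it seemed to do instead (sign flipped, so the audio went DOWN):
print(round(semitone_ratio(-2), 4))  # 0.8909
```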

A nice and slow sound that is definitely not 400 bpm and definitely not in E.

I followed the exact same process with the high strings, splicing up, transposing, and time-stretching the “Nightingale Drive String Line” loop. This is where it became very movie score.

Meanwhile, I ordered the entrances of the players (piano, drum, bass and pad, strings). We started out with just the original piano groove, with some claps that fade in. My non-overthinking-musician suitemate thought that the claps were too confusing, and I agreed. Honestly, I feel like clapping on the 2 and 4 is way cooler, but here everything is just way too fast for it to lock in. Compare the offbeat claps:

With downbeat (what I went with):

For the drums I used a notch filter to reduce the annoying pitchiness of the ride cymbal. I relied on a low pass filter for now to make them tolerable before I found something better, and I really liked automating the frequency cutoff of the lowpass to rise so that the drums intensified until a mini drop, where I said “hey.”  For the “hey” vocal, I phased it, panned it hard-left, and added a wonky “energy” filter. Then I notched out a bunch of extraneous frequencies, creating some nice stalactites:

The drum build sounds like this:

I didn’t know where to go next, so I asked myself: what’s the opposite of this? I listed adjectives like suspended, glassy, high-pitch, falsetto, human, hovering, slow, fragile. Subsequently, I had an existential moment during a sorta boring music theory lecture. I saw myself in my Zoom window, appearing suspended in the sky with tree leaves above me. Feeling poetic, I imagined falling from the sky and not knowing who I was. Then I pulled out a rhyme dictionary and got this:

Falling from the sky

Who am I?

Gonna die, say goodbye

Who am I?

Time goes by.

Not exactly Robert Frost, but extremely laconic and morbid, which I liked because it felt so fragile and open-ended. I set the text to a melody and took like seven takes trying to ensure that my falsetto was 90% rather than 100% unbearable to listen to. Afterwards I automated pitch shift on the vox to fix some wonky notes (so much for perfect pitch lol).

I also made a pad sound to go along with the piano. Going for a warm but not too heavy sine sound, I made a sound in 4OSC with four sine waves with a steep (24dB slope) lowpass filter. I tuned one oscillator to an octave above pitch and another to 2 octaves above, to make sure the sound wasn’t too flat/low/heavy.
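Here's roughly what that four-sine stack looks like if you sketch it in numpy. The oscillator gains below are my guesses for illustration, not the actual 4OSC settings:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs  # one second of samples

def sine_pad(f0: float) -> np.ndarray:
    """Four sine oscillators: two at the fundamental, one an octave up, one two octaves up."""
    partials = [f0, f0, 2 * f0, 4 * f0]
    gains = [0.4, 0.4, 0.25, 0.15]  # hypothetical mix, keeping the upper octaves light
    out = sum(g * np.sin(2 * np.pi * f * t) for g, f in zip(gains, partials))
    return out / np.max(np.abs(out))  # normalize peak to 1.0
```

(With pure sines, a steep lowpass mostly just trims the top oscillator; in 4OSC it also shapes any detune beating.)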

One of my favorite discoveries was adding in a mellow wineglass hit (with emphasized partials in the mids and reverb) when the first words enter, as a subtle bell tolling effect:

To transition to the second theme, I knew I wanted to drop out all the instruments in the initial groove until the high strings were suspended… suspensefully. Showing this to my non-overthinking-musician (are you sensing a theme??) suitemate, he came up with an incredible transition into the second section. “The lyrics are ‘falling from the sky.’ What if you have the high strings fall down in pitch?” Genius. I added some beautiful wind chimes and wind from Freesound and got this:

After the second theme, I simply tried doing the reverse of this transition (the chimes, chords, and wind pitch-shifted upwards by a whole tone). This gave me the idea that it would be really cool to have the original theme enter back, in full force, in a new, brighter key (2 degrees sharper on the circle of fifths – think C modulating to D). I sent this reprised first theme to a bus with an automated Low Pass filter where the frequency cutoff gets higher (mirroring the original entrance, but this time with everybody involved).
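The circle-of-fifths arithmetic checks out: each sharpward step is a perfect fifth (7 semitones), so two steps is 14 semitones, which lands a whole tone up (mod the octave). A trivial sketch:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def sharper(key: str, steps: int) -> str:
    """Move `steps` keys sharpward on the circle of fifths (each step = +7 semitones)."""
    return NOTES[(NOTES.index(key) + 7 * steps) % 12]

print(sharper("C", 2))  # D -- two steps sharper is a whole tone up
```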

At the end, I wanted the vocals from the second theme to join in with the first theme, with so much cathartic distortion that they felt almost electric-guitar like. I got a bit more of a “third grader trying to do mouth trumpet” sound, because I didn’t really understand what the distortion plugins were doing, but at least I tried.

Finally, I really enjoyed the end part, where I brought some of the sound effects (wine glass, chimes, nice chords from the 4OSC track) together on a V-I cadence that was both jumbled (lots of elements) and resolved (harmonically pleasing). It was actually really hard to keep them all in tune with each other, and I automated the pitch shifter at tiny microtonal levels on many many tracks throughout the piece!! I hope that that felt appropriately in spirit with a piece that basically puts two contrasting themes together and attempts an ambitious crossover of the two with a sorta gross-sounding screamy vocal.

Here’s the final product. Hope you enjoy!

(HW3) The hike: a track about nature, but it’s a 17/16 groove

It’s always interesting for me to reflect on my creative process and discover that my subconscious was working on something for weeks before deciding to fully reveal it. Let me start with an example! Now I know that the music I was making was inspired by fall all along, but I didn’t actually realize it until the very end, when I was searching for some ambient sounds (to fill the requirement). Almost automatically, I searched up water and leaf sounds. I quickly realized that I could bookend my off-center 17/16 groove with some leaf crunches and peaceful water whooshes. It just felt so right. 

Autumn leaves fall around a babbling brook and waterfall
from iStock (https://www.istockphoto.com/photo/autumn-leaves-fall-around-a-babbling-brook-and-waterfall-gm1192602618-338903234)

So at the start, I played the water sound and the clip of someone walking on leaves at the same time. I panned the water to the right and automated the walking pan from left to right so that it sounded like you were walking towards the water (and automated the water volume to increase as you approached):

Spoiler alert: at the end of the piece, the opposite happens, so it’s like you’re walking away from the brook. (Although I think the water sorta overpowers the footsteps, in the future I would want to at least reduce the gain.)

I really liked the effect of introducing the groove modestly, with just the piano in the treble, as if it was emerging out of the leaf and water sounds. To enhance the “emerging” effect, I turned a nice chord from a somewhat out-of-tune Steinway upright (more on that later) into a reverby ambience that rises out from the rocks. I reversed the clip of the chord (removing the harsh attack at the beginning and creating a cool crescendo) and then automated a pitch shift from -12 semitones to original pitch. This ambient sound takes center stage at the same time as some aggressively reverbed piano arpeggios.

I used different reverb plugins (just ones I had lying around) on two different recordings of the piano arpeggio (one with the mute pedal, one without). I played with the plugins, just twiddling until I liked it. I enjoyed the TSAR plugin’s dark effect. I ended up combining the two differently treated clips because it gave a prominent/full/surreal vibe that really stuck out later at the end of each groove cycle (4 repetitions of a 17/16 pattern… yes, I’m getting to that!).

Lighter use of TSAR reverb:

Gooey use of AUReverb:

I kept the water sound going in the background as a sort of sweep that crescendoed into the end of each groove cycle:

So you’re probably wondering about my groove. Yes, it’s in 17/16. I suppose I could have called it 17/8 (friendlier?) but what I really had in mind was that the clave sums to a 4/4 bar plus a single sixteenth beat. As for the actual grouping, it was 4+4+4+5 (sixteenths), which feels like shortish + shortish + shortish + a bit longer. On the 5-beat part, you get the effect of sort of rolling into the next cycle, which is why I emphasized this with the water sweep, as well as sustained notes in the piano and later, rhythmic intensity in the bass and drums.
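If you want to count along, the grouping arithmetic is simple (a trivial sketch):

```python
# The clave: a 4/4 bar (16 sixteenths) plus one extra sixteenth, grouped 4+4+4+5.
groups = [4, 4, 4, 5]
assert sum(groups) == 17  # sixteenths per bar

# Onset of each group, in sixteenth-note ticks from the bar start:
onsets = [sum(groups[:i]) for i in range(len(groups))]
print(onsets)  # [0, 4, 8, 12] -- the last group runs one tick long

# One full cycle of the groove is four bars of 17/16:
cycle_ticks = 4 * sum(groups)
print(cycle_ticks)  # 68
```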

Here’s the groove with just piano (try counting along):

And yes, this was just with acoustic piano. I got a reservation to a practice room in the Pauli Murray basement just in time, and I had an idea for a groove in 17/16 I wanted to try. To my initial chagrin, the piano was pretty badly out of tune, but I think it ended up adding character to something that was supposed to be wonky all along.

This is where I experimented most with mic placement. I had my dynamic cardioid Shure SM58 on a mic stand. It’s directional as heck, so you betcha I pointed that thing straight at the sound source.

You can also see my Scarlett Focusrite (NOT sponsored by Scott Petersen) on the nice carpet. Speaking of carpet, the soundproofing in here was pretty good (I think the panels on the wall are semi-soundproof). The best sound quality came from pointing the mic as close to the source as possible, although there was a rattling on the right side of the piano, so I only recorded from the left side. This meant that less of the rattly sound reached the mic, and minimal room noise was mixed in (the room sound was very flat). I liked the flatness for this particular groove because I was going for something almost mechanical and as clear as possible.

As a cheap dynamic mic, the SM58 was great at capturing my mostly mid-range groove sounds. You’ll notice though that there’s a bassline in the clip above. That’s just EQ’ed piano. I know, right?? I EQ’ed the heck out of the piano, trying to compensate for the abysmal bass frequency response by increasing the gain near 100 Hz for the bass. I also sorta aggressively minimized everything else. In the future, having learned about frequency filters in class, I would simply use a bandpass filter to isolate the bass area (much simpler)! I was so pleasantly surprised with how bassy the EQ made the piano sound. I think it’s largely because removing the high frequencies removes the very timbrally-characteristic piano attack sound.
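The bandpass idea would look something like this in scipy (the cutoffs below are illustrative guesses around the ~100 Hz region, not the EQ settings I actually used):

```python
import numpy as np
from scipy import signal

fs = 44100
# A rough "bass" band; 4th-order Butterworth, second-order sections for stability.
low, high = 60, 250
sos = signal.butter(4, [low, high], btype="bandpass", fs=fs, output="sos")

# Demo: a piano-ish signal with a 110 Hz fundamental and a bright 1760 Hz partial.
t = np.arange(fs) / fs
piano = np.sin(2 * np.pi * 110 * t) + 0.8 * np.sin(2 * np.pi * 1760 * t)
bass_only = signal.sosfiltfilt(sos, piano)  # keeps the 110 Hz, drops the sparkle
```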

I also used compression on the bass (I didn’t know what the parameters meant so I was going by ear) and sidechained it to the drums to create space for the kick (although I personally could only hear a subtle improvement).

Oh yeah, the drums. So I had this idea for a drum beat where I would take the most agonizingly typical 4/4 loop (the Sanfillippo 140 bpm pattern) and then add an extra 16th-note kick at the end to make it 17 sixteenths (on the fourth cycle of 17/16, the extra beat is a cymbal hit instead). I did this by rendering the loop and then splicing and reshuffling.

Can you see the extra lil’ boi on beat 17?

Transitioning into the beat was even more fun. In this section, see if you can hear how the water sweep crescendoes and then falls out for a split second, leaving space for the drums to bombard. Also, you can hear how the piano fill halts and lingers when the drums come in (I just time-stretched the chord it was playing).

After two cycles of the full 17/16 groove with all the instruments, I wanted to spice it up by modulating randomly/wonkily. I put all the harmonic instruments into a folder (selected the tracks -> “Create folder track containing…”), and then put a pitch shift automation on the folder. That was fun. At the same time, I experimented with dropping out the drums at carefully placed moments, to make the listener’s ears perk up and be a bit less surprised by the crazy pitch changes.

Portrait of young man with shocked facial expression
https://www.shutterstock.com/search/wow+face

My favorite moment was when I tried dropping out the groove completely, just as there was a big build/pitch-shift/crescendo upwards! It resolves into the leaf crunches and water, as discussed earlier. I also held a piano chord over this, using the same technique of reversing it and drenching it in reverb (biggest room size!):

But this wasn’t enough. I had been building up to something inexorably with that aggressive 17/16 groove, and now I was taking the big risk of halting suddenly and going somewhere different (more internal/peaceful/introspective?). I thought, the human voice is very powerful. Like, a nice vocal harmony was probably powerful enough to match the groove. I chose a fat D chord with a bunch of tensions, including the 5, #5, 6 and 9. Including both the 5 and 6 spaced out, IMO, makes it hard to tell if it’s a (hopeful) D major chord or a (darker) B minor first-inversion, so I guess it’s in the middle??
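A toy way to see the ambiguity: the chord contains both complete triads, so your ear can parse it either way.

```python
# Pitch classes (C = 0) of the fat D chord: root, 3rd, 5, #5, 6, 9.
chord = {"D": 2, "F#": 6, "A": 9, "A#": 10, "B": 11, "E": 4}

D_MAJOR_TRIAD = {2, 6, 9}    # D, F#, A
B_MINOR_TRIAD = {11, 2, 6}   # B, D, F#

pcs = set(chord.values())
print(D_MAJOR_TRIAD <= pcs, B_MINOR_TRIAD <= pcs)  # True True -- both triads are in there
```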

I used my Shure SM58 (singing directly into it to give it a really close sound) and accidentally had my fan on behind me (but I ended up keeping it because it gave my voice the perfect airy quality). Autumn breeze?

I hope you enjoyed my self-analysis here. I’m so excited to check out the other projects! Here is the full mp3:

The Hike

iPhone SE (2020) Recording Quality

I’m a big fan of the 2020 iPhone SE. It’s $399 and has pretty much anything a cell phone normie like myself could want. BUT: Do its audio recording capabilities stack up?

The specs

The SE has three mics – one at the top by the front-facing camera, and two at the bottom where the speakers are. The two mics at the bottom allow for stereo recording.

As seems to be the case with most iPhones, the SE’s sample rate is 48 kHz and bit depth is 16 bit. I confirmed this by trying various sample rates and bit depths in the TwistWave Recorder app. The app said that the device didn’t support recording above 48 kHz, and although TwistWave supported 32-bit processing and export, it didn’t indicate the possibility of 32-bit recording. I couldn’t find a specs list or website to confirm the exact ceiling, but 48 kHz and 16 bit are definitely supported by the hardware.
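As a quick sanity check on what those numbers buy you: 16 bits gives roughly 96 dB of theoretical dynamic range (about 6 dB per bit), and 48 kHz captures frequencies up to the 24 kHz Nyquist limit.

```python
import math

bit_depth = 16
sample_rate = 48_000

dynamic_range_db = 20 * math.log10(2 ** bit_depth)  # ~6.02 dB per bit
nyquist_hz = sample_rate / 2                         # highest representable frequency

print(round(dynamic_range_db, 1), nyquist_hz)  # 96.3 24000.0
```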

Recording app

I’m using TwistWave Recorder (shout-out to Michael Lee for the rec!). There are options for lossless audio, disabling Audio Gain Control (see the “Enable iOS processing” toggle), and controlling bit depth and sample rate:

Testing the frequency response

In Audacity, I generated 15 seconds of a 100Hz to 18kHz sine sweep and 15 seconds of white noise. I played these from my computer speakers at max volume and recorded them on TwistWave Recorder on my phone, angling the dual mics at the bottom of the phone directly towards and in front of the computer speakers.
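If you'd rather script the test signals than use Audacity, here's a sketch in numpy. I've assumed a linear sweep; Audacity's chirp generator can also do logarithmic sweeps, which are common for response testing.

```python
import numpy as np

fs, dur = 48000, 15.0
t = np.arange(int(fs * dur)) / fs

# Linear sine sweep from 100 Hz to 18 kHz over 15 seconds:
# instantaneous frequency f(t) = f0 + (f1 - f0) * t / dur, so the
# phase is its integral times 2*pi.
f0, f1 = 100, 18000
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * dur))
sweep = np.sin(phase)

# White noise: flat expected spectrum (seeded for reproducibility).
rng = np.random.default_rng(0)
noise = rng.uniform(-1, 1, len(t))
```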

In Ocenaudio, I generated FFT analyses of the original and the recorded sine sweep and white noise samples. The FFT analysis shows us the dB FS of the spectra of the samples and the recordings, which we can interpret to evaluate the spectral flatness of the recordings. We’re looking to judge whether the iPhone mics have a flat response (the mic accurately reproduces the original sound, with even sensitivity to different frequency ranges) or a shaped response (the mic is more sensitive to some frequency ranges than others – ranges of higher dB FS indicate higher sensitivity). See this Shure webpage for more detailed info.

Compare the graphs and sound files for the sine sweep…

FFT analysis of the original sweep tone

FFT analysis of the recorded sweep tone

 

…And for the white noise…

FFT analysis of the original white noise

 

FFT analysis of the recorded white noise

Both of the original test audio files have pretty flat graphs, with a significant drop a bit below 20kHz (roughly the upper limit of human hearing). This drop is also reflected in the graphs of the iPhone recordings.

Qualitatively, it’s easy to see that the iPhone has a pretty bumpy frequency response, with weaker sensitivity at the lowest range (0-100 Hz), a peak around 1500 Hz, a valley around 5000 Hz, a peak around 7500 Hz, another peak around 12000-15000 Hz, a weaker response from 15000-17000 Hz, and then the drop from 17000-20000 Hz mentioned above. This holds true for both the white noise and sine sweep recordings.

This makes a lot of sense when you consider the practical purpose of an iPhone microphone: transmitting the human voice at GREAT quality. The weakness at the lowest and highest ranges of human hearing makes sense – those extreme ranges are not important in the vocal spectrum and can’t be heard very well anyway.

1500 Hz, which I mentioned is a peak on the frequency response, is pretty important for decoding the human voice. Vowels fall roughly in 200-2000 Hz and consonants in 2000-8000 Hz:

Those higher frequencies around 12000-17000 Hz are also pretty well taken care of by the iPhone, which is good for enabling a crisp/clear/punchy sound.

So we can’t say the SE mics have a totally flat response curve, but they’re darned good at capturing the human voice. And it’s a relatively cheap phone, and cheap is cool.

Boys ‘n’ Bells: IRCAM, Jonathan Harvey, and “Mortuos Plango, Vivos Voco”

“ThE comPuteR cAn dO beTter thAn tHis!”

Thus spake Bell Labs scientist John Pierce and “father of computer music” Max Mathews after a piano concert, feeling rather lofty and ambitious — not to mention disdainful of the centuries-old Western Art Music tradition (“INART 55 IRCAM”). But these quirky scientists were on a mission to make computers extend and exceed human musical capabilities.

Max Mathews, from https://upload.wikimedia.org/wikipedia/commons/thumb/1/1c/Mathews260.jpg/220px-Mathews260.jpg

The pioneering research on sound, music, and computers in the 20th century embodied the spirit of bringing “traditional” human and acoustic music into conversation with the limitless capabilities of computers. Composers and musicians collaborated with technical experts at institutions such as IRCAM in France. One such composer, Jonathan Harvey, composed “Mortuos Plango, Vivos Voco,” a haunting tour de force that melds the sounds of a boy’s voice and the bells at Winchester Cathedral (“Mortuos Plango, Vivos Voco by Jonathan Harvey”). Despite IRCAM’s reputation as “an esoteric research programme,” the piece was hailed as an effort of IRCAM that actually yielded “music capable of appealing to a wider audience” (Downes 22). And to bring things full circle, the composition was coded in MUSIC V, an innovation of the notorious aforementioned Max Mathews (1926-2011), as well as CHANT, an invention of IRCAM.

IRCAM: Institut de Recherche et Coordination Acoustique/Musique (Institute for Research and Coordination in Acoustics/Music)

IRCAM, from https://www.ircam.fr/static/src/assets/img/ircam_card_facebook.jpg

In 1970, French president Georges Pompidou invited eminent French conductor and composer Pierre Boulez to head IRCAM, a brand new institute for musical research and creation in three subterranean floors of the Centre Pompidou in Paris (“WWW Ircam: History”). Deep down in the steel building, technicians, designers, and composers labor at their computers in acoustic caves in gray corridors (NPR.org).

Since its opening in 1977, IRCAM has hosted composers such as John Cage, Karlheinz Stockhausen, and Terry Riley (NPR.org), as well as resident groups such as the Ensemble Intercontemporain (Ensemble intercontemporain).

Boulez, long interested in electronic music, worked at IRCAM to produce a piece featuring real-time interaction between musicians and computers that resulted in a coherent and unified sound. His piece “Répons” (1981), a breakthrough in real-time digital audio processing, fed the sounds produced by six soloists spaced around the concert hall into a computer that recombined them with the sound of the group of 21 musicians on stage. It used the 4X synthesizer built by IRCAM physicist Giuseppe Di Giugno, which “abstracted the idea of oscillators and interconnection to objects and algorithms that could be linked” and served as a universal machine for signal processing.

The machine room (1989), from https://upload.wikimedia.org/wikipedia/commons/thumb/b/b3/IRCAM_machine_room_in_1989.jpg/220px-IRCAM_machine_room_in_1989.jpg

IRCAM researchers created several software programs, including Modalys, for synthesis via physical modeling; Max, for real-time processing of interactions between computer and performer; the Spatialisateur, used for concert-hall acoustics; OpenMusic, a visual programming environment significant in computer-assisted composition (“WWW Ircam: History”); and CHANT, which simulated the vocal tract to synthesize the human voice, based on the formant frequencies of vocalists and extremely intensive computations (“INART 55 IRCAM”).
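CHANT’s actual algorithm (FOF synthesis) is far more sophisticated, but the underlying source-filter idea — an excitation at the singer’s pitch, shaped by resonances at the formant frequencies — can be sketched in a few lines of Python. The formant values here are generic textbook approximations for an “ah” vowel, not IRCAM’s data:

```python
import numpy as np

SR = 16000  # sample rate (Hz)

def resonator(x, freq, bandwidth, sr=SR):
    """Two-pole resonant filter: boosts energy near `freq` (Hz)."""
    r = np.exp(-np.pi * bandwidth / sr)
    theta = 2 * np.pi * freq / sr
    b0 = 1 - r  # rough gain normalization
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = b0 * x[n] + 2 * r * np.cos(theta) * y[n - 1] - r * r * y[n - 2]
    return y

def sing(f0=110.0, formants=((700, 80), (1220, 90), (2600, 120)), dur=0.5):
    """Pulse train at pitch f0, passed through formant resonators in parallel."""
    n = int(SR * dur)
    source = np.zeros(n)
    source[:: int(SR / f0)] = 1.0  # crude stand-in for glottal pulses
    return sum(resonator(source, f, bw) for f, bw in formants)

voice = sing()  # half a second of a very robotic "ah" at A2
```

Changing the `formants` tuples morphs the vowel while the pitch stays put — the same separation of source (pitch) from filter (vowel color) that made CHANT’s voice synthesis controllable.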

IRCAM is still going strong. IRCAM sound engineer Olivier Warusfel lists two projects pursued in the 2010s: augmented instruments that transform the sound of live human playing in real time, and wave field synthesis (WFS), which uses carefully placed loudspeakers to remedy the problem of too-quiet “dead spots” in concert halls (NPR.org).

Jonathan Harvey: the man

Jonathan Harvey, the man himself. From BBC https://www.bbc.co.uk/staticarchive/b26315ce71a298d90bf316d9e0538045ff485b2c.jpg

Jonathan Harvey (1939-2012) was a frequent IRCAM collaborator. He largely shared Boulez’s belief that musical culture had become a conservative “museum” culture in desperate need of development beyond the instruments of the late nineteenth century. Harvey, an Englishman, contrasted the tepid pursuit of electroacoustic music in the UK with the “overdue liberation” he found in Boulez’s government-sanctioned institution (Downes 21).

Perhaps the appeal of Harvey’s computer music lay in his spiritual and humanistic inclinations. He grew up a chorister at St. Michael’s College, Tenbury, where he “came to love the Anglican liturgy and its musical tradition,” and he later found through his reading and meditation on Hindu and Buddhist sacred texts that “ancient prayers and visions [were] completely consonant with electric sound.” He drew inspiration from Britten, Schoenberg, Messiaen, and Stockhausen, and spent an academic year at Princeton developing a notion of harmony and modality that evoked unique and non-Western atmospheres. His revelation as a composer came from working at IRCAM and delving into spectral music and computer techniques (Griffiths).

The bell ringers’ chamber with 14 bells, tower tour, Winchester Cathedral

The Winchester Cathedral bells, from https://www.flickr.com/photos/hilofoz/6281233835/lightbox/

Mortuos Plango, Vivos Voco: boys ‘n’ bells

Harvey’s masterwork, “Mortuos Plango, Vivos Voco” (1980), features eight sections based on the eight lowest partials in the inharmonic series of the Winchester Cathedral bells, which are developed and intermingled with the voice of Harvey’s son, a chorister at the cathedral. Chords were built from the thirty-three partials of the bells. Although the fundamental was C, the note F figured prominently in the bells’ inharmonic series, creating an atypical and otherworldly sonority. Between sections, eerie glissandi transition from one area of the spectrum to the next (Manning 200).
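What makes a bell “inharmonic”? A string or air column produces partials at whole-number multiples of the fundamental, while a church bell does not. The sketch below uses generic textbook ratios for the five lowest named partials of a well-tuned bell — not Harvey’s measured Winchester spectrum — to show where the pattern breaks:

```python
# Generic partial ratios of a well-tuned church bell (textbook values
# for the five lowest named partials -- NOT Harvey's measured spectrum).
BELL_RATIOS = {
    "hum": 0.5,
    "prime": 1.0,
    "tierce": 1.2,   # a minor third above the prime: the telltale bell color
    "quint": 1.5,
    "nominal": 2.0,
}

def partials(fundamental, ratios):
    """Map each named partial to its frequency in Hz."""
    return {name: fundamental * r for name, r in ratios.items()}

c3 = 130.81  # approximate frequency of C3 in Hz
bell = partials(c3, BELL_RATIOS)

# A harmonic instrument on C would put partials at 1x, 2x, 3x... the
# fundamental (C, C, G, ...). The bell's tierce lands a minor third up
# instead -- an E-flat-ish tone that no harmonic series would produce,
# which is the kind of built-in dissonance Harvey's piece exploits.
print(bell["tierce"])  # ~157 Hz, near E-flat 3
```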

Jonathan Harvey – Mortuos Plango Vivos Voco (1980) from Andrey Smirnov on Vimeo.

Harvey describes the inspiration for the piece:

On this huge black bell is inscribed in beautiful lettering the following text: HORAS AVOLANTES NUMERO, MORTUOS PLANGO, VIVOS AD PRECES VOCO (I count the fleeing hours, I lament the dead, I call the living to prayer). The bell counts time (each section has a differently pitched bell stroke at its beginning): it is itself a ‘dead’ sound for all its richness of sonority: the boy represents the living element. The bell surrounds the audience; they are, as it were, inside it: the boy ‘flies’ around like a free spirit. (“Mortuos Plango, Vivos Voco by Jonathan Harvey”)

Cover artwork of Schiller’s “Song of the Bell,” from https://upload.wikimedia.org/wikipedia/commons/thumb/6/69/Liezen_Prachteinband_Schillers_Glocke_01.jpg/338px-Liezen_Prachteinband_Schillers_Glocke_01.jpg

Aiming to enhance the effect of the deadness of the bells and the sprightliness of the boy, Harvey designed the work for an “ideal cube of eight channels” where the listener is immersed within the sound of the bell as the boy’s voice flies around (Emmerson 157-8). At IRCAM, Harvey’s recordings of the bells and of his son were manipulated and “cross-bred with synthetic manipulations of the same sounds.” This digital manipulation allowed for a shift between the bell spectrum and the boy’s voice, and for a harmonic structure based entirely on the bells’ inharmonic series (“Mortuos Plango, Vivos Voco by Jonathan Harvey”). This approach aligned with Harvey’s aesthetic desire to create an ambiguous musical nether zone: one fitting neither in live-player nor loudspeaker music, but merging the two (Downes 23).

Jonathan Harvey’s analysis of the bell spectra, from https://upload.wikimedia.org/wikipedia/commons/0/0a/Jonathan_Harvey_-_Winchester_Cathedral_bell_spectrum.png

I will close with Harvey’s inspiring reflection on the deeply humanistic potential of computer music. In short, the power of computer music is that it is both limited and enabled by the imaginations of us humans:

In entering the rather intimidating world of the machine I was determined not to produce a dehumanised work if I could help it, and so kept fairly close to the world of the original sounds. The territory that the new computer technology opens up is unprecedentedly vast: one is humbly aware that it will only be conquered by penetration of the human spirit, however beguiling the exhibits of technical wizardry; and that penetration will be neither rapid nor easy. (“Mortuos Plango, Vivos Voco by Jonathan Harvey”)

 

Bibliography

Downes, Michael. Jonathan Harvey: Song Offerings and White as Jasmine. Ashgate Publishing, Ltd., 2009.

Griffiths, Paul. “Jonathan Harvey, Modernist Composer, Dies at 73.” The New York Times, December 6, 2012, sec. Arts. https://www.nytimes.com/2012/12/07/arts/music/jonathan-harvey-modernist-composer-is-dead-at-73.html.

Manning, Peter. Electronic and Computer Music. Oxford University Press, 2004.

Emmerson, Simon. Living Electronic Music. Ashgate Publishing, Ltd., 2007.

Ensemble intercontemporain. “A Soloists Ensemble.” Accessed September 11, 2020. https://www.ensembleintercontemporain.com/en/a-soloists-ensemble/.

“INART 55 IRCAM.” Accessed September 8, 2020. http://www.personal.psu.edu/meb26/INART55/IRCAM.html#.

NPR.org. “IRCAM: The Quiet House Of Sound.” Accessed September 8, 2020. https://www.npr.org/templates/story/story.php?storyId=97002999.

Vimeo. “Jonathan Harvey – Mortuos Plango Vivos Voco (1980).” Accessed September 11, 2020. https://vimeo.com/262625848.

“Mortuos Plango, Vivos Voco by Jonathan Harvey.” Accessed September 11, 2020. https://www.bbc.co.uk/radio3/cutandsplice/mortuos.shtml.

“WWW Ircam: History.” Accessed September 8, 2020. http://web4.ircam.fr/62.html?&L=1.

“WWW Ircam: Research.” Accessed September 8, 2020. http://web4.ircam.fr/recherche.html?L=1.