A Ribitting Final Project

The start of a fun project

For the last project of the year, I wanted to make something that would be the culmination of all we have learned in the first-year seminar. I knew right away that I wanted to make a Waveform project, because those are the projects I enjoyed making the most. I first needed to find a way to put together everything we’ve learned in order to make this project work. Recording and mixing would come naturally with the project, but incorporating SuperCollider less so.

I knew right away I wanted to make a very upbeat and fast-moving music project; this needed to be a celebration of a great first-year seminar, not a sad goodbye. I needed this project to have more energy than the last two and provide some “good vibes,” as one would say. Now I just needed to figure out how to turn an abstract idea such as “good vibes” into notes, frequencies, sounds, etc…

 

Loops and Samples

I first needed to do a little bit of research on Spotify. I clicked on my happy-songs playlist and narrowed the “good vibes” down into some tangible variables. I noticed that most songs have at least a simple, catchy melody and some hard-hitting drums (especially some good snares and hard-hitting claps). I probably missed a lot of other key criteria, but those two things popped out the most to me. I started by looking through different synth noises in an attempt to find something I could use. I originally wanted to use a synth sound from a loop I found on Looperman.com, but ultimately decided that I would rather make my own. Most of the Subtractive synths didn’t feel like the right fit for the project I had in mind, so I ended up making my own samples with a plugin synth from my music-making software (another DAW). I made 4 different samples that all used the same sets of chords, just with some notes made more staccato to add some punch.

As usual, I decided to make my own drum beats and percussive sounds because I am not a big fan of drum samples. I like to make the drum loops and samples myself (whether that’s using MIDI or a beat sequencer), mostly because I give importance to the beat over the melody. In this project I tried to keep the idea of using some powerful snares and claps, so I ended up using fewer closed hi-hats. I ended up with 3 different snares that I switched around in the project depending on when the chorus or verses played. As per usual, the bulk of my drum work was in the kicks. I had a total of 4 different samples, each dominant in a different frequency range, so that putting them together in the chorus wouldn’t overwhelm the low end (30 to 100 Hz range). Although I didn’t have too many closed hi-hat samples, I did have a few open hi-hat samples (with only a few hits) as well as some rides that I may have overused at certain points, but I thought (and still think) it actually worked quite well.

The samples I ended up using for my project were those of frogs croaking, as well as a ribbit sound from the website orangefreesounds.com:

 

To have some fun, I decided to throw in a curveball at the beginning of the project. I started the project very slowly, with what sounded like some sort of soundscape, and then interrupted it abruptly to start the upbeat music I had prepared. In addition, I used a few sweeping samples I found on Looperman.com to lead into or out of the chorus at the beginning.

 

Recordings

There aren’t that many recordings in my project, but there are enough that they cannot be ignored. I have no faith in my singing ability, so I did not create any long audio clips, but I had just listened to Kid Cudi’s new album and was inspired by the ad-libs in his song Sad People. Although it is quite ironic that I am taking inspiration from a sad song to make a happy one, I really wanted to have some fun making some ad-libs for my project. I sat in front of my iPhone microphone saying random things and making noises, only to be displeased with almost everything. After a few more attempts I was done and had a few usable audio clips.

The first recording is that of me saying “nah…nah nah nah nah, we gotta change this up” which I used as the transition from my intro into the main part of my project:

 

I originally wanted to have me say “yeah” and then have the switch after that, but I thought it would be more entertaining to have me talk a little more in the beginning and use the “yeah” later. I think it made for a better transition overall, because I was able to have a moment where all the music stopped while I was talking. The second recording is of me saying “yeah,” which I pitched down by 2 semitones to create a deep voice.

Because I was recording with an iPhone microphone, there was a lot of background noise getting in the way. After doing some research online, I found out that I could use Audacity (which I luckily already had downloaded) to reduce the unwanted noise. This actually helped a lot in making my recordings sound smoother and a lot cleaner.

 

MIDI

I used Subtractive to create some sound effects in my project, and created two different synth sounds in the intro to give a strange effect. I again used Subtractive to make a bassline synth during the chorus, even though I don’t really like using Subtractive as a plugin to make the melodies for my project; I always seem to use it more for interesting sound effects. One of the synths used was called Mordor 2, and I thought it fit perfectly in the strange intro sequence with all the croaking frogs as well as the weird bowed strings. The synth used for the bassline in the chorus was simply a soft background synth that complemented the chords of the melody. Originally I thought of adding an extra layer of synth in the higher frequencies (2k to 3k Hz range), but I ended up thinking it was too much and it ruined the overall feel of the song.

 

SuperCollider

SuperCollider was the spark for the great idea of having a strange intro to the song. While looking through the different sounds, I came across the class BrownNoise (a subclass of WhiteNoise), which generates noise with a spectrum that falls off in power by 6 dB per octave (example of this class being used):

Using that with OnePole.ar (example of this class being used):

A lowpass filter, and then a resonant highpass filter (example of this class being used with SinOsc):

I was able to obtain a sort of bubbling sound:
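For anyone curious, a minimal sketch of that signal chain might look something like this (the exact coefficients and cutoff values here are illustrative guesses, not necessarily the ones used):

    (
    {
        var noise  = BrownNoise.ar(0.3);        // spectrum falls off 6 dB per octave
        var smooth = OnePole.ar(noise, 0.95);   // one-pole lowpass; coef near 1 = darker
        // resonant highpass whose cutoff wobbles with a SinOsc for the bubbling quality
        RHPF.ar(smooth, SinOsc.kr(0.4).range(200, 900), 0.05) ! 2
    }.play;
    )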

 

Plugins and Mixing

 

I didn’t go too crazy with the plugins and kept them quite simple. This is about the most I put on a single track:

I put a few extra plugins on my Subtractive synths; however, I mostly kept it simple for the rest. I stuck to simple plugins such as EQ and reverb. I added a little reverb to certain tracks but kept it to a minimum. I tried experimenting with reverb for this project, but I felt like it altered the choppy feel of the main synth at times and didn’t line up well with my recordings (which sounded better with low reverb). Because of this, I mostly stuck to using a very low reverb:

As always, I used the EQ to fix some issues with the frequencies of certain samples and sounds. For example, I needed to cut the upper end (80 Hz to 100 Hz) of one of the kicks so it wouldn’t overlap too much with another. I had to make some of the same changes to the snares as well. I did use the lowpass and highpass filter plugins at certain points, but the EQ was often enough. The one mistake I did make in the beginning was not realizing that one of the reverb plugins in my project actually came from another DAW on my computer. Luckily I noticed in time and replaced it quickly.

 

Automation

There was more automation in my project this time than last time. I used panning for the intro as well as a little for the chorus, but it isn’t very strong. There is also some slight panning to the right on the “yeah” ad-lib, but that is the only recording I added any panning to. I did not use my signature ping-pong panning effect. I mostly used automation to control the volume level: for example, the very sharp cutoff of the intro, or how the music fades out at the very end.

Errors and technical difficulties

This time I had far fewer technical difficulties than last time; I only had one big one. My Waveform project crashed while I was working on it, and when I reopened it, half of the work I had done was gone. Some tracks were deleted, some automation, and a lot of plugins. I did save my work prior to the crash. I always get scared of things like this, so I save my project after I make even the slightest change, but I think the problem had to do with my computer. My computer is old and has almost run out of storage, so I believe it was a problem on my end and had not much to do with Waveform itself. I was able to put everything back together, but it took a while and was a pain. In the end not much else went wrong, so it wasn’t terrible. My Mac kept sending me this:

 

Project

 

“Time Floats By” – a piece about the passage of time

I decided to make my final project in Tracktion Waveform 11. The piece is titled “Time Floats By”!

I wanted my final project to be somewhat of a companion piece to my Homework 3 submission. While my Homework 3 submission, “Revolve Around You”, revolved around the theme of space, I wanted this piece to focus on time instead.

I wanted this piece to be indicative and evocative of the passing of time. So, I made my piece in 120 BPM (a multiple of 60 BPM)—with 2 beats being exactly one second—so I could use the ticking of a clock and the passing of seconds as the driving beat. I used a sound sample of the ticking and swinging of the pendulum of a large wall clock (like a grandfather clock) as a snare drum sound (https://freesound.org/people/straget/sounds/405423/), and used a sound sample of the ticking of a kitchen timer as a hi-hat (https://freesound.org/people/maphill/sounds/204103/).

My piece’s Waveform workspace layout—(only 13 tracks this time!)

In general, another major theme I tried to convey with this piece is the subtle oddities in the way time flows (and how we perceive it). I had fun with slowing down time through the somewhat atonal clock-slowing sample “Time Slow Down” (https://freesound.org/people/PatrickLieberkind/sounds/392408/) and playing with the listener’s expectations while still keeping that ticking driving force. That sample, “Time Slow Down”, still had some pitches to it, so I used Pitch Shifter automation (from 0 to –8.3 semitones) so the last “pitch” played lands on the tonic of the minor key of the piece.
I’m a big fan of odd time signatures—especially uses of odd time signatures that sound natural even to those not well-versed in them. So, while the majority of the piece is in 4/4, I wanted to have a bridge section in 7/8. However, one of the skills I learned is that when working in odd time signatures, it’s important not to disorient the listener too much. Thus, so the transition to the bridge (both the different melody and the 7/8 time signature) isn’t super abrupt and jarring to the listener, I put the first two measures of the bridge melody into 4/4 before going into 7/8.

While the piece is in a minor key (G minor), I used the G major chord as an unexpected first chord of the piece and as a major part (pun intended) of the chord progression.
I used a Subtractive synth to create a reverse-reverb-sounding pickup leading into the first appearance of the synth melody.

One of the things I wanted to work on with this piece was streamlining my process a lot more. Something I struggled with on my previous two Waveform assignments, and wanted to fix here, was having more tracks than I needed. On my Homework 3 submission—which I am proud of—I had a wild 82 tracks (actually, the file contained 117 tracks, but 35 of them weren’t in use and only contained previous recordings I didn’t use), making for a 6-and-a-half-minute piece overall. I did slightly better at this on Homework 4 (which I’m even more proud of: a MIDI track inspired by the Black Lives Matter movement and racial equality protests of 2020).
Here, I wanted to stay more streamlined, and I committed myself to not going overboard this time! I took this as an opportunity to learn to cut stuff, instead of drowning an already good piece in filters and additional tracks of diminishing returns. I’m happy that I was able to overcome this, make a piece with only 13 tracks, show what I’ve learned, and still make a piece of music I’m really proud of!

Thank you so much for an incredible and unique first semester; it went so quickly (I guess you could say time has really flown by)!

Here’s my final piece, “Time Floats By”:

List of Tracks and their Different Instruments, Filters, and Plugins, etc.:

Track — Instrument (4OSC / Subtractive) — Filters, Plugins, Automation, etc.
“grandfather clock” — (sample)

“Wall Clock Ticking.wav” by straget on Freesound: https://freesound.org/people/straget/sounds/405423/

(with +Gain turned up)

  • it was incredibly quiet, so I turned up the Gain on that specific region
  • there was this semi-annoying background hiss around 3100 Hz, so I used the AUGraphicEQ plugin to filter that frequency out
  • + delay (echoing exactly one clock hit—2 beats—later, to counteract the fact that every second clock tick in the sample is slightly quieter)
  • aux bus: (aux send)
“gfather clock SLOW” — (sample)

“Wall Clock Ticking.wav” but stretched to half-speed with Elastique (Pro)

  • it was incredibly quiet, so I turned up the Gain on that specific region
  • there was this semi-annoying background hiss around 3100 Hz, so I used the AUGraphicEQ plugin to filter that frequency out
  • + reverb
  • aux bus: (aux send)
“ticktickticktick” — (sample)

“Kitchen Timer” by maphill on Freesound: https://freesound.org/people/maphill/sounds/204103/

  • there was a bit of ringing and resonance at around 1400 Hz, so I used AUGraphicEQ to turn down the 1.2k Hz bar (but not the 1.6k Hz bar, as doing that also changed the shape of the sound)
“drums” Drum Sampler: a modified 808
  • + reverb
  • + lowpass filter
“melody 1” 4OSC Basic Lead 2
  • panned slightly Left
  • + reverb
  • + lowpass filter
“melody 2” a modified 4OSC Basic Lead 3
  • panned slightly Right (harmonizing the above in a different part of the soundspace)
  • + reverb
  • + lowpass filter
“Subtractive pickup” a modified Subtractive MINI BASS
  • + lowpass filter automation (sweeping from 231 Hz to 22,000 Hz upper limit)
“padsynth 1” a modified 4OSC Basic Poly
  • + reverb
  • + lowpass filter
“bassynth 1” 4OSC Pick Bass WMF, modified to be able to play multiple notes at once (“Poly” setting)
  • + reverb
  • + lowpass filter
“7/8bridge melody” a modified (differently) 4OSC Basic Poly
  • + reverb
  • + lowpass filter
“breakdown bass” a modified 4OSC Basic Bass
  • + delay
  • + reverb
  • + lowpass filter
“slowdown” — (sample)

the sample “Time Slow Down” by PatrickLieberkind on Freesound: https://freesound.org/people/PatrickLieberkind/sounds/392408/

added Pitch Shifter automation

  • + reverb
  • + Pitch shifter automation, from 0 to –8.3 semitones (so the last “pitch” this semi-atonal sample lands on is the tonic of the minor key of the piece)
“BUS: clock” aux bus of “grandfather clock” and “gfather clock SLOW” tracks
  • aux bus: (aux return)
  • + reverb

CREATION and Process LOG:
A Mostly-complete Log of my general process:

Version 1:

  • exactly 120 BPM, so that each two beats are exactly a second (the passing of time)
  • in 4/4, in G minor. Might have a 7/8 bridge later on down the line
  • bass, melody1, and melody2 for now

Version 2:

  • made the bassynth’s instrument a modified “4OSC Pick Bass WMF”, fiddled with it
  • tried to change the resonance on melody1 and melody2 to make them sound different
  • modified melody1 and melody2’s instrument “4OSC Basic Lead 2” to take multiple notes at once (“Poly” setting)
  • harmonies leading up to m.41
  • idea for a section that feels like it’s moving in half-time at m.41 or m.49

Version 3:

  • creating a section that feels like it’s moving in half-time at m.41 or m.49
  • creating a 7/8 bridge

Version 4:

  • actually, changing that moment to technically being 7/4 so Waveform doesn’t double the speed of things copied
  • made the first quarter of the 7/8 part into regular 4/4 measures so the listener can adjust to the new section

Version 5:

  • some drums on the 7/8 part to help the listener keep the beat

Version 6:

  • adding an ending that goes back to 4/4
  • added the sample “Wall Clock Ticking.wav” by straget on Freesound: https://freesound.org/people/straget/sounds/405423/
    • it was incredibly quiet, so I turned up the gain on that specific region
    • there was this annoying background hiss around 3100 Hz, so I used the AUGraphicEQ plugin to filter that frequency out
  • also added the sample of “Kitchen Timer” by maphill on Freesound: https://freesound.org/people/maphill/sounds/204103/
    • there was a bit of ringing and resonance at around 1400 Hz, so I used AUGraphicEQ to turn down the 1.2k Hz bar (but not the 1.6k Hz bar, as doing that also changed the shape of the sound)

Version 7:

  • Added a half-speed version of the grandfather clock (with Elastique (Pro))
  • Added lowpass frequency automation 

Version 8:

  • Working on an ending

Version 9:

  • Working on an ending
  • And ending the whole thing with two measures of the clock ticking at the end (matching up with the beginning)

Version 10:

  • Removing blank tracks
  • Added a reverse-reverb-sounding Subtractive synth as a pickup to the first appearance of the main synth melody
  • Leading into the half-time breakdown, I added the sample “Time Slow Down” by PatrickLieberkind on Freesound: https://freesound.org/people/PatrickLieberkind/sounds/392408/
    • added Pitch Shifter automation

Version 11:

  • Fixes and filters!
  • Finished!

First-person shooter video game inspired sound effect pack

I’ve always been a gamer, starting with games on the Wii and moving on to mobile games and games like Minecraft. Over quarantine, I became obsessed with the newly released tactical shooter game Valorant, and so decided to make a sound effect pack for a first-person shooter video game.

Pistol:

The first sound I decided to make was the pistol, which is iconic since every person gets the pistol in the game. To create a pistol sound, I decided to model it off of a 9mm pistol: https://www.youtube.com/watch?v=HwxeLrWymrI. This sound was actually a hard one to make since it was the first one I attempted. I wanted to have a “pow” sort of sound but didn’t know how to get that nice pop of the shot. So, I stuck a sample of a pistol shot into Audacity and analyzed the frequencies and the different parts.

I noticed that there were three parts that made up the gunshot sound, each of which seemed to occupy different frequencies. So, I used a filter on white noise for each of the parts and combined them together. To get the parts to play in sequence, I used the fork operation. The first attempt didn’t sound great, so I made a new version that used reverb and added a kick to the beginning of the shot to provide a sense of power.
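To give a sense of the approach, here is a rough sketch of that structure (the band frequencies, envelope times, and the sine “kick” are stand-in values of my own, not the exact ones from the project):

    (
    // kick: a fast downward pitch sweep on a sine for the initial thump
    {
        SinOsc.ar(XLine.kr(150, 40, 0.08))
        * EnvGen.kr(Env.perc(0.001, 0.15), doneAction: 2) * 0.8
    }.play;

    // three filtered white-noise bursts played in sequence with fork
    fork {
        [2500, 900, 300].do { |freq|
            {
                var burst = BPF.ar(WhiteNoise.ar, freq, 0.5)
                            * EnvGen.kr(Env.perc(0.001, 0.12));
                // light reverb, then free the synth after the tail dies away
                FreeVerb.ar(burst, 0.3) * Line.kr(1, 1, 1, doneAction: 2)
            }.play;
            0.03.wait;
        }
    };
    )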

 

Lasergun:

The lasergun sound was not hard to do. I knew that I wanted to make a “Peeew Peeew Peeew” sound, and that was clearly modulation with a decreasing frequency. So, I did frequency modulation by having a line envelope going from 1400 Hz to 100 Hz as the frequency. For the generator, I chose a Saw wave because that sounded the most electronic. In the end, my lasergun sound reminded me of the old Galaga game laser.
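The core of this one fits in a few lines; something along these lines reproduces the idea (the envelope and amplitude values are my own guesses):

    (
    {
        var freq = Line.kr(1400, 100, 0.25);    // the falling "Peeew" sweep
        Saw.ar(freq)
        * EnvGen.kr(Env.perc(0.005, 0.25), doneAction: 2) * 0.3 ! 2
    }.play;
    )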

 

Footsteps:

Footsteps being an important part of any 3D game where people move through the world, I decided to create this sound next. It was surprisingly hard to do. I initially wanted to create the stereotypical footsteps that you hear when you think of footsteps in a haunted house (here: https://youtu.be/9g7uukgq0Fc?t=40). However, that is insanely hard to do since the noise isn’t regular. While trying to create it, I used brown noise with a lowpass filter along with a percussive envelope. That didn’t at all sound like what I was going for; instead, it sounded like stepping in snow, so I fine-tuned it a bit and made snow footstep sounds first.
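A bare-bones sketch of that snow-step recipe (the cutoff, envelope, and step timing are illustrative values):

    (
    fork {
        8.do {
            // brown noise through a lowpass, shaped by a percussive envelope
            {
                LPF.ar(BrownNoise.ar, 900)
                * EnvGen.kr(Env.perc(0.01, 0.2), doneAction: 2) * 0.5 ! 2
            }.play;
            0.5.wait;    // one step every half-second
        }
    };
    )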

Snow:

*Disclaimer: I’ve lived in Southern California and don’t really step in snow, so if this sounds off, oops.

Next, since there are often desert environments in games, I worked on creating footsteps in sand based on this video (https://www.youtube.com/watch?v=Lg2zxeSF7WM). This is when I came to a key realization: each footstep isn’t a single sound but has two distinct parts. The first sound is made when your heel hits the ground and makes a thump, and the second closely follows when the front part of your foot flaps down and hits the ground (you can verify this: try walking). Using this new insight, I shaped white noise to start off with a lower-frequency percussive sound, and after that added a larger frequency range with a customized envelope to better imitate the fluidity of sand.
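Here is one way the two-part idea could be sketched (the filter settings and timings are guesses, chosen just to show the heel-then-flap structure):

    (
    fork {
        4.do {
            // heel: short, low-frequency thump
            {
                LPF.ar(WhiteNoise.ar, 300)
                * EnvGen.kr(Env.perc(0.005, 0.08), doneAction: 2) * 0.6 ! 2
            }.play;
            0.07.wait;   // the flap follows close behind
            // sole: broader band, custom envelope for the "fluid" sand feel
            {
                BPF.ar(WhiteNoise.ar, 1200, 1.5)
                * EnvGen.kr(Env([0, 1, 0.3, 0], [0.03, 0.08, 0.15]), doneAction: 2) * 0.4 ! 2
            }.play;
            0.8.wait;
        }
    };
    )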

Finally, I decided to try to create the clean wooden-floor sound that I was going for in the beginning. To do that, I took a similar approach to the one I used for the pistol sound: I listened to each sound separately and compared it to my own. My closest rendition of the footsteps sounded more like someone stomping on a wooden plank.

 

Jump:

Jumping is an important part of FPS games, for getting over different terrain and dodging projectiles. The sound I went for is different from the regular 8-bit jump that we had an example of in class; instead, I went for a more realistic jump. There were two parts to this: jumping off of the ground, and landing back on the ground. While designing the lifting-off portion, I knew that the sound had to rapidly decrease in volume and also be quiet, since there isn’t much movement against the ground when jumping (it’s more about the leg coiling up, which doesn’t make noise). I also applied a lowpass filter to the noise so that its maximum frequency would decrease, since less of the foot/shoe is in contact with the ground to make high-pressure, high-frequency noise.

The sound of landing on the ground was relatively straightforward, since it was just a thump, which I accomplished by shaping low-frequency sound with a percussive envelope.
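Both halves together might be sketched like this (all values assumed for illustration):

    (
    fork {
        // liftoff: quiet, fading fast, with a falling lowpass cutoff
        {
            LPF.ar(WhiteNoise.ar, XLine.kr(2000, 200, 0.15))
            * EnvGen.kr(Env.perc(0.01, 0.15), doneAction: 2) * 0.15 ! 2
        }.play;
        0.45.wait;   // hang time
        // landing: a simple low-frequency thump
        {
            SinOsc.ar(XLine.kr(120, 50, 0.1))
            * EnvGen.kr(Env.perc(0.002, 0.2), doneAction: 2) * 0.6 ! 2
        }.play;
    };
    )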

 

Shotgun:

The shotgun is a classic weapon in any game with guns, and I decided to make a shotgun with buckshot (since slugs aren’t very common in shooter games). To do that, my goal was to somehow simulate all the pellets being shot out and flying through the air, so I decided to have a lot of pistol shots played slightly delayed from each other. My rationale was that staggering the sounds would provide the punchy feeling of shooting a shotgun. To play the sounds staggered, I took inspiration from the “river” sound effect that we went over in class. Instead of using .collect, though, I just used .do and played 6 pistol sound effects next to each other. Also, I moved the kick to the beginning, outside of the .do loop, since the shotgun should have one initial “thumpy” blast.
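Structurally, the idea looks like this (the shot function below is a simplified stand-in for the real pistol sound, and the pellet stagger time is a guess):

    (
    var shot = {
        {
            BPF.ar(WhiteNoise.ar, 800, 0.4)
            * EnvGen.kr(Env.perc(0.001, 0.15), doneAction: 2) * 0.5 ! 2
        }.play;
    };
    // the single "thumpy" blast, outside the loop
    {
        SinOsc.ar(XLine.kr(120, 35, 0.1))
        * EnvGen.kr(Env.perc(0.001, 0.25), doneAction: 2) * 0.9 ! 2
    }.play;
    fork {
        6.do {            // six pellets, slightly staggered
            shot.value;
            0.015.wait;
        }
    };
    )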

I have to warn you, the shotgun sound is a bit shocking and frightening when you first hear it, so be prepared. When I played this sound for my sister with headphones, you could see exactly when the sound played from the way she jolted; but then, if you are shot at by a shotgun in-game, it should be quite frightening.

 

Fire:

Fire is a common element of games since it directly represents damage. The first fire sound I made is the typical one that would be used in games: just the sound of moving air caused by the fire. I created it using brown noise as the base and amplitude-modulated it with LFNoise1.

This first sound could be used for a burning Molotov cocktail, a flamethrower, or maybe even a wizard’s fiery hands.
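The recipe is short enough to show in full; a minimal sketch (amplitudes assumed):

    (
    {
        // brown noise amplitude-modulated by LFNoise1 = rushing-air fire
        BrownNoise.ar(0.4) * LFNoise1.kr(1).range(0.2, 1) ! 2
    }.play;
    )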

The second fire sound I created is the sound of a wood fire, where wooden logs are burning. Even though this may be less important in a shooter game, I find it to be the sound effect I am proudest of. For this sound, I took inspiration from the fire sound effect that we went over in class. I wasn’t really satisfied with that one, since its popping and cracking sounds felt really digital and dead. After listening to a lot of fireplace sounds on YouTube, I classified the sounds into 4 parts: a drone, some sizzling, a crackle, and a popping sound. The drone is the wind sound, the sizzling is boiling moisture, the popping is like wood and twigs snapping, while the crackle comes from a larger air bubble that bursts and makes a loud crack. The most difficult part of this sound was figuring out how to make the popping and the crackling random. Taking inspiration from the class fire sound effect, I researched the Dust and EnvGen classes, which allow random triggers to be generated and used to activate a percussive envelope. I set the rate of crackles to about once every 5 seconds, and the rate of pops to about 5 times every second, to make a really active fire.
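A sketch of the four layers, using the densities described above (Dust.kr(0.2) for a crackle about every 5 seconds, Dust.kr(5) for pops about 5 times a second); the filter values are my own guesses:

    (
    {
        var drone   = LPF.ar(BrownNoise.ar(0.3), 400);               // wind-like drone
        var sizzle  = HPF.ar(WhiteNoise.ar(0.05), 4000)
                      * LFNoise2.kr(8).range(0.2, 1);                // boiling moisture
        var pops    = WhiteNoise.ar
                      * EnvGen.ar(Env.perc(0.001, 0.03), Dust.kr(5)) * 0.4;
        var crackle = BPF.ar(WhiteNoise.ar, 2500, 0.3)
                      * EnvGen.ar(Env.perc(0.001, 0.1), Dust.kr(0.2));
        (drone + sizzle + pops + crackle) ! 2
    }.play;
    )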

*looking back, the pops sound a bit too percussive…

 

Dash:

Dashing is one of the most common abilities in games, and my goal was to create a dashing sound highlighting the air moving, with a pitch shift to convey speed (thanks to the Doppler effect). To do this, I layered three sounds on top of each other: an exclusively high-pitched wind noise, an exclusively low-pitched wind noise, and a sine wave frequency-modulated by white noise multiplied by a decreasing line envelope, to get the “woosh” effect.
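Those three layers might be sketched as follows (frequencies and amplitudes assumed):

    (
    {
        var dur   = 0.6;
        var env   = EnvGen.kr(Env.perc(0.05, dur), doneAction: 2);
        var high  = HPF.ar(WhiteNoise.ar(0.15), 3000);    // high-pitched wind
        var low   = LPF.ar(BrownNoise.ar(0.3), 300);      // low-pitched wind
        // sine frequency-modulated by noise scaled with a decreasing line
        var woosh = SinOsc.ar(600 + (WhiteNoise.ar(200) * Line.kr(1, 0, dur))) * 0.2;
        (high + low + woosh) * env ! 2
    }.play;
    )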

 

Sniper:

What’s a shooter game without a sniper rifle? I decided to model my sniper rifle sound on this 50-caliber sniper rifle: https://www.youtube.com/watch?v=BB9Oqf2sBZ4. There are a few parts to this sound: the boom, the chamber moving, and the bullet casing hitting the ground (which I added for fun). For the boom, I took the explosion example we went over in class and adjusted it a bit. For the chamber moving (the clanging sound), I used a bandpass filter on white noise to home in on a metallic resonant frequency. Finally, for the casing coming out, I used an echo effect on a sine oscillator to imitate the casing hitting the ground and bouncing. I also added a kick sound at the beginning to highlight the boom of the weapon.
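To illustrate the clang and casing layers (the boom came from the class explosion example, which isn’t reproduced here; the resonance and delay values are guesses):

    (
    fork {
        // chamber clang: narrow bandpass on white noise at a metallic resonance
        {
            BPF.ar(WhiteNoise.ar, 3200, 0.05)
            * EnvGen.kr(Env.perc(0.001, 0.2), doneAction: 2) * 0.5 ! 2
        }.play;
        0.4.wait;
        // casing: a short sine ping fed through an echo to suggest bouncing
        {
            var ping = SinOsc.ar(2300) * EnvGen.kr(Env.perc(0.001, 0.05));
            CombN.ar(ping, 0.2, 0.13, 1.2) * Line.kr(1, 1, 2, doneAction: 2) * 0.4 ! 2
        }.play;
    };
    )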

 

Rifle:

For this sound, I wanted to imitate an AK-47 (https://www.youtube.com/watch?v=BU-r0ElUru8). The sound mechanics for this gun are similar to the sniper’s, so I just got rid of the shell-drop sound and adjusted the explosion to be much shorter. Additionally, I adjusted the tuned frequencies of the sounds to get it to sound right, and I reduced the kick sound since the rifle’s isn’t as strong as a sniper’s. Also, after making this sound, I figured out how to use a loop to play sounds, and applied it to the previous sounds to have them play automatically.

 

Reload:

Finally, if there are guns, there is bound to be reloading. There are a few parts to the reload, which I was able to figure out from this video (https://www.youtube.com/watch?v=oqFmQYNBwcw): clip out, mag in, then sliding the bolt to get another round in the chamber. Finishing this sound took a bit of tweaking of the different sound combinations to get something convincing.

Reload:

 

Gunshots+reload:

 

Reflection:

After creating all these sounds for a shooter video game, my greatest takeaway is that synthesizing sound effects from scratch is quite tedious and frustrating at times. Some sounds, like a lasergun or wind, can definitely be synthesized with no problem; they are simple by nature. However, other sounds like footsteps or an explosion are so complicated and intricate that it may be better to just record them or create them through foley methods. Nonetheless, I am glad to have been able to synthesize the sounds that I did, and if I ever make a first-person shooter game, I’ll already have most of the sounds I need.

Final Project: Greensleeves with 8-part harmony, epic organ, and bona fide church bells??


“Christmas Eve” painting by J. Hoover & Son, 1878

This project began, like all my projects, with a random voice memo. In winter 2019, I was playing the tune of “Greensleeves” on the piano with some minor 13th chords. It’s hard for me to articulate, but the recording (and the “Greensleeves” folk song in general, which was written anonymously around 1580) inhabits a liminal space between wintery, nighttime, enchanted vibes and a classic, elegant, timeless English feel. The sense of timelessness and spaciousness really gets me.

Voice memo:

I thought of doing “Greensleeves” because my bigger goal with this project was to create an arrangement of a *classic song* (classic in this case just meaning old and well-known) in Waveform. I gave myself the following guidelines:

  1. includes me singing complex stacks of harmony like Jacob Collier, but obviously in my own style
  2. pushes me to craft my own synths in Subtractive and 4OSC
  3. pushes me to avoid existing samples and record found objects, manipulating unconventional sounds to create beats, synths, etc

Once I decided on the “Greensleeves” tune, I set my priorities as (1) first, (2) second, and (3) third most important. This was going to be 1) an a cappella arrangement, 2) undergirded by churchy instrumental sounds, and 3) supported by samples, with vocal recording obviously at the center.

Then I had to decide which lyrics to use. I wasn’t going for medieval love song vibes, and the original words of “Greensleeves” are kinda weird. There were a surprising number of alternative lyrics. I really wanted to go for something universal/secular-ish, but “The bells, the bells, from the steeple above, / Telling us it’s the season of love” just wasn’t cutting it (sorry, Frank Sinatra). I settled on the original Christian adaptation “What Child Is This?” by William Chatterton Dix (1865). (I opted to change “Good Christians, fear, for sinners here / The silent Word is pleading” to the alternate “The end of fear for all who hear / The silent Word is speaking.”)

For me the Christian Christmas story definitely inhabits that mysterious, shadowy, timeless feeling that I was talking about earlier (the wise men following the star, the shepherds out at night). I liked imagining reverbed, dark organ and choir sounds fitting into that space.

Above all, I love using harmony to color things. I sang the vocally-harmonized equivalent of my voice memo above. The Waveform part of this was easy: stack a bunch of audio tracks.

Unmixed voice sketch of piano improv:

But the musical part was hard. I was so focused on intonation, rhythm, and line that, first of all, I sacrificed the synchronization of consonants (even in the final recording they’re often out of sync), and second, I couldn’t improvise. I don’t have enough vocal skill yet for that kind of fluidity or flexibility. Improvisation and imprecision did become an important generative tool, however, in the routine that I fell into for each verse and chorus:

  1. Brainstorm a list of adjectives to describe the feel of this verse and its dramatic role in the song.
  2. Improvise a harmonization on the piano over and over until I like it, and put the MIDI in Waveform.
  3. Quickly sing lines over the harmony that feel natural (e.g., change the chord inversions to create a smoother bassline, since the bottom notes on the piano version probably jumped all over the place).
  4. Change my piano MIDI accordingly, and take a screenshot of the piano roll as a memory aid for the contour of each vocal part.
  5. Sing vocal parts reasonably well. This was actually the easiest part. Confidence is key. I’m not an amazing singer by any means, but everything sounds at least decent with some brave intentionality. Also, I found that doubling or tripling the bass and melody compensated for minor issues with tuning, breath control, or volume.

Vox:

Piano roll (hard to read, but a good memory aid; notice that the lines in this one are NOT singable yet lol... like, look at the top line):

Meanwhile, I explored samples and synth sounds that I could use. I tried to record my keychain to get a sprinkley/shimmery sound, but after twiddling for an hour with delay effects and feedback I just couldn’t get the sound I wanted, and used a preexisting sample. Other attempts were more successful: I tapped my stainless steel water bottle with my finger, producing a warm tenor-range tone reminiscent of bells.

I added light reverb, high-shelf-EQ’ed out the high frequencies to reduce the potential for dissonance, and automated pitch-shifting to tune this with the wordless “doodah” thing you heard me sing above (fun fact: my water bottle is exactly a quarter-tone sharp of 440 Hz tuning). The effect was very spooky and breathy, so I made this section my refrain.

Those bell harmonics were SO COOL, dude (listen close: it’s a major ninth chord in root position, I kid you not).

Original sample:

Pitched, reverbed:

Then there was the organ. Ahhhhh, the organ. I tried to make it myself, I researched crazy additive synthesis stuff with sawtooth waves, and then I discovered that 4OSC had an organ patch that blew my attempts out of the water. Its Achilles’ heel, however, was so darn annoying. The organ patch emanated a high-frequency (2-10kHz) crackle that MOVED AROUND depending on what notes you were playing and got really bad when I played huge organ polychords (which, if you haven’t noticed, is kinda my musical M.O.). My best solution was to automate notch filters that attacked the crackle optimally for each chunk of music. I spent a lot of time deep inside that organ dragging automation lines controlling the cutoff to the elusive median frequency that best subdued the dEmoNic crAcKLe (band name list, Mark?). It also helped in certain sections to automate the Q and gain of various notches and curves, not just for the crackle but also for the overall brightness/darkness that I wanted.

The original organ sound on the final chorus, without EQ (or light reverb/compression):

EQ automation:

Two more very interesting sounds I found. First, in Subtractive, there’s a patch called “Heaven Pipe Organ” that sounds very little like a pipe organ but clearly had the potential essence of a spooky, ghostly, high-frequency vibe to add to my vocals. There was even this weird artifact, created by an LFO attached to pitch modulation, that caused random flurries of lower-frequency notes to beep and boop. I mostly got rid of it, but not entirely, because I was almost going for wind-like chaos. The main things I did were:

  1. Gentle compression
  2. Reverb with bigger size (to stay back in the mix) but quicker decay (I didn’t want the harmonic equivalent of mud 🤷🏼‍♂️)
  3. Quicker attack on the envelopes (I wanted it to whoosh in, but not like 20 seconds behind the vocal because that would be dumb)
  4. EQ. I wanted those high freqs to really shine: 

Honestly, that wasn’t a whole lot of change to produce a dramatically altered sound.

“Heaven Pipe Organ” before:

After:

Finally, bells. Christmas = bells. You might recall my post called “Boys ‘n’ Bells” (or something, idr) about Jonathan Harvey’s Mortuos Plango, Vivos Voco. I was seriously inspired by the way he has a million dope bell sounds surround you octophonically. I couldn’t quite do that, but I found some wonderful church bell samples from Konstanz, Germany (credit: ChurchBellKonstanz.wav by edsward) that I hard-panned left and right during the epic final chorus and mainly tuned to the bassline:

I also found a nice mini bells sound (the 4OSC “Warm bells” patch), which had good bell harmonics preloaded and just needed slight EQ to reduce some nasty 10kHz buzz. I had it accompany the melody of the final chorus, but my favorite use was a sort of “musical thread” that bounces back and forth between a countermelody and a repeated note (I swear to gosh that there’s a specific term for this technique that I learned about in Counterpoint class… oh well):

As I started recording the verses and choruses and filling in the synths/samples, I began focusing more on the overall flow of the piece. Most of that work was already laid out for me: there’s a musical antecedent/consequent (call/response) structure within each verse and chorus, clearly reflected in the melody and harmony. They start out feeling like a question and end on a solid cadence. As the arranger, I played around a lot with this structure. Often I flipped the harmonic structure of verses so that they started out stable and then went on extended forays into instability/modulation in order to create a build into the next section. Here’s an example where the second chorus actually builds into the third verse, which surprises you because, well, you’ll see:

What you just heard is probably my favorite part. The second chorus embarks on an elaborate quest to escape the key of F# minor. A dramatic descending circle-of-fifths modulation ends on a classic II-V that we have been *culturally indoctrinated* to expect to go to the I (in this case E major). Even going to the vi would be equally unsurprising (vi, C# minor, is closely related to E major… this is a standard deceptive cadence). BUT NO. In a moment of serendipitous experimentation, I accidentally dragged an organ MIDI recording of Verse 3 right after the Chorus 2 modulation I described. The Verse 3 organ MIDI happened to be in B minor, and the transition sounded BEAUTIFUL. It has those haunting English vibes, and it works harmonically because the voice leading from B major (V in E) to B minor works, even though functionally this is nonstandard.

Here’s what the transition from Chorus 2 to Verse 3 would have sounded like if it were predictable (Bmaj to C#min):

Here, again, is the colorful shift I chose from Bmaj to Bmin:

I’m pretty convinced about the second one, but please let me know which one you think is more dramatic in the comments!

The last element of this project I wanna talk a bit about is mixing. Mixing was hard. Because this is centrally an a cappella project, I put the voice in the center (spatially, spectrally, volume-wise). My first main focus was getting the balance of the vocal stacks right. I spent hours arranging the tracks by color (red = melody, blue = bass, in-between = related/supporting parts), putting each separately-recorded chunk into folders within my vox submix, and getting the levels/panning right in the Mixer:

For a while, I wanted to emphasize reverb in the vox, but inevitably that causes the vox to sit back in the mix, and it needed to be front and center. So instead, I generally compressed the vocals for a close/direct sound and made a valiant effort to keep all the other sounds out of the way spectrally. But sometimes I liked the effect of blending a high-freq pad with the vox (see the Chorus 2 demo above) or giving a kick in the bass (see the use of bells). Speaking of bass: I showed my mix to my friend Liam and he suggested boosting the bass to make the vox richer. I did, and it helped. Shoutout to Liam.

So yeah. I think that covers a lot of my decision making! Here is the final product. Enjoy, and HAPPY HOLIDAYS!!

SuperCollider and Foley: Execute Order 035

Operation: Swordsmanship

As a movie buff, making movie SFX was the first lightbulb that went off in my head when I saw “Final Project” pop up in the Canvas assignments. My plan was to replicate the fight sounds from the iconic duel in Star Wars Episode V: The Empire Strikes Back (1980). Sound designer Ben Burtt originally used a projector motor hum with the buzz of an old television. Since my name is not Ben, nor is it Burtt, I do not have access to that retro equipment, and I did not think it would be a good idea to devote my time to hunting down electronic devices that could replicate those two sounds. Instead, I dove into SuperCollider! Creating the buzz was actually pretty easy. I just generated a low-frequency sawtooth wave:

The hum required a little more experimenting. In a video by the channel ShanksFX (where he attempts to replicate the Star Wars SFX as well), he uses a square wave generator as a substitute. I tried a square wave in SuperCollider by way of a pulse wave, but I did not like the way it sounded. Instead, after searching around, I used Impulse.ar as the hum:
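For reference, a bare-bones approximation of the two ingredients together (the frequencies here are placeholders, not the exact values from the project):

    (
    {
        var buzz = Saw.ar(55, 0.15);        // low-frequency sawtooth buzz
        var hum  = Impulse.ar(90, 0, 0.4);  // impulse train standing in for the hum
        LPF.ar(buzz + hum, 2000) ! 2
    }.play;
    )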

Operation: Analog

After SuperCollider, my original plan paralleled that of Burtt and Shanks: they played the hum and buzz together through a speaker, then waved a shotgun mic in front of the speaker to create the famous doppler swing (vwoom vwoom).

Polar pattern of my Snowball iCE (top) and a standard shotgun mic (bottom)

As you can see, a shotgun mic’s polar pattern is narrower, which is an advantage for movies and TV since it can isolate specific sounds like dialogue. Using my Blue Snowball iCE was a disadvantage: no matter how I moved it, the saber hum still sounded the same. The Snowball is just too good for lightsaber foley.

Though I did have a cheap shotgun mic (Rode VideoMicro), I discovered to my horror I had no adapter to plug it into my phone!!!! >:(

I considered recording the sound with the shotgun mic to a camera, yet I figured it would be quite a stretch to extract the audio from the video and retain my samples in pristine condition. I resorted to recording with the TwistWave app using my iPhone 11’s mic. Still horrible. You know the rumble that strikes your ears when you watch a video from a windy day? Wind pickup was all I could hear, even with a gentle swish and flick across my speaker. I tried different phones and different speakers to no avail. I had to shift gears. “Operation: Analog” was a failure.

Operation: Digital

Luckily, I did have a fallback plan: the doppler swing could be generated in SuperCollider. I already had the hum and buzz generated, so I just needed a way to control the synth parameters in real time.

I remembered having too much fun with the cursor control (MouseX, MouseY) UGens, blowing my laptop speakers one too many times.

MouseX maps the cursor’s horizontal position onto any parameter: the first assigned value within the UGen is attained when the cursor is at the left edge of the screen, and the second at the right edge. MouseY is the same, but for moving the cursor up and down.

For the hum synth, my MouseX controlled a multiplier from 1 to 1.1 on the frequency. The frequency was also controlled by the linearly varying LFNoise1 UGen at a control rate of 1, ranging from 90 to 91 Hertz. To create the sudden changes in dB level for swings, I used MouseY as a multiplier on amplitude. Amplitude also had LFNoise1, but with a higher control rate of 19 to create a “flickering” effect. I added MouseX control over a bandpass filter’s cutoff frequency to remove some of the “digitized nasal” sound.

In the original films I could hear subtle reverberations in the saber swings, so I added a MouseY to the wet mix level of a FreeVerb UGen.

For the buzz synth (sawtooth), I set a MouseX to control the cutoff frequency for a resonant lowpass filter, and a MouseY on an amplitude multiplier as well.
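Assembled into running code, the controls described above might look like this (a reconstruction with assumed ranges where unstated, not the exact SynthDefs):

    (
    // hum: MouseX scales frequency, MouseY scales amplitude; LFNoise1 wobbles both
    {
        var freq = LFNoise1.kr(1).range(90, 91) * MouseX.kr(1, 1.1);
        var amp  = LFNoise1.kr(19).range(0.5, 1) * MouseY.kr(0.1, 0.6);
        var hum  = BPF.ar(Impulse.ar(freq, 0, amp), MouseX.kr(400, 2000), 1);
        FreeVerb.ar(hum, MouseY.kr(0, 0.4)) ! 2   // MouseY also drives the wet mix
    }.play;

    // buzz: MouseX sweeps a resonant lowpass cutoff, MouseY scales amplitude
    {
        RLPF.ar(Saw.ar(55, 0.3), MouseX.kr(200, 3000), 0.3) * MouseY.kr(0.05, 0.4) ! 2
    }.play;
    )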

Video demo of mouse for doppler control

Operation: Sizzle

Even though I didn’t get to immerse myself in the hands-on process of the doppler swing, I was not 100% deprived of physical foley. While brainstorming, I was curious about how to recreate contact or clash sounds during lightsaber duels. Ben Burtt rarely speaks of this achievement in interviews. ShanksFX touches metal to dry ice to achieve contact sizzles, but I did not have ownership over dry ice of any sort. I actually spent an entire day trying to replicate the clash sounds in SuperCollider, which I would control with WhiteNoise and the MouseButton UGen. Thinking about stuff that “sizzles”, I considered placing water in a pan of hot oil. Due to fear of burning my house down, I instead used water with an empty, stove-heated pan. At first, I felt that the sizzles were not percussive enough, so I dumped water in the pan in one fell swoop:

Video demo of my foley

My methods were successful! I did keep some of the less percussive sounds for sustained contact between lightsabers. Other experiments with clashes included throwing an ice cube in the pan and wiping the pan’s hot bottom over water droplets. By dumb luck, I found two of my samples to be strikingly similar to the Jedi and Sith saber ignition sounds!

For retraction sounds I just reversed the ignition samples.

Next up, it was time to finally record my saber swings. I searched around and discovered that a general rule of thumb is to record most foley in mono, so I recorded my saber sample WAV files into one channel. I planned to record my samples in time with the final duel between Luke Skywalker and Darth Vader in Episode V: The Empire Strikes Back (1980). After getting in the groove of keeping in sync with the film’s swings, I recorded my first SuperCollider sample, combining the buzz and hum. Lord love a duck, it was no fun to listen to. While I was recording and watching the scene, I had a lot of fun; after merely listening to it, I found that referencing the movie was not a good idea. Through the sample alone, I could hear myself constrained by the movie scene. The swinging just felt awfully repetitive, restrained, and boring, and I spaced out halfway through listening. Not a good sign. I had to shift gears again!

Operation: Digital 2.0

In a quick brainstorming session, I tried to stage a fight scene of my own. The two fighters would start far apart (panned left and right), come to the center, and duel. I thought about what else would happen during a lightsaber duel: they would move around during the fight, and then somebody might lose their lightsaber. I typed up shorthand “stage directions”, and I have the blocking of my scene here:

After staging the fight scene, I recorded a new sample whilst imagining my own staged scene. Listening to it, I did get a little bored, but unleashing some more of my creative liberty gave me more optimism. After this successful (cough cough) recording, I recorded another hum/buzz sample to portray the opposing fighter. “Unfortunately, one-week-younger Michael Lee was not aware that he would have to shift gears yet again. When playing the two together, he heard the most dreadful thing in his ears.” Long story short, I could hear phase cancellation! Both fighters shared the same humming frequencies. At some points my mouse wasn’t completely at the left of my screen, and that slightly varying frequency made a big difference: the fighters’ hums kept cutting out. Separating the two with panning did not help. In addition, I had recorded buzz for both fighters, and when played together they were awfully disruptive.

My solution? I created an option with Select.ar for a second doppler multiplier with a range below 1. The Jedi would retain the veteran hum, while the Sith would get a lower-frequency hum combined with the buzz to represent the gnarly, darker side of the Force. Now it is much easier to tell who the heck is who. After finally recording some more takes for Jedi and Sith, I could get right down to mixing and editing.
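Roughly, that option could look like the following (a guess using Select.kr and a stand-in toggle; 0 picks the Jedi range, 1 the Sith range below 1):

    (
    {
        var side = MouseButton.kr(0, 1);   // stand-in for choosing which fighter
        var mult = Select.kr(side, [
            MouseX.kr(1, 1.1),             // Jedi: the original doppler range
            MouseX.kr(0.8, 0.9)            // Sith: a range below 1 for the lower hum
        ]);
        Impulse.ar(90 * mult, 0, 0.4) ! 2
    }.play;
    )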

SynthDefs for saber sounds in the later stage of the production

Operation: “Fix It In Post”

Of course, Adobe Audition CC is a popular choice for mixing and editing. ’Tis quite versatile, as I may bestow upon it movie sound or recorded musical performance. However, Waveform 11 Pro had something special that Adobe Audition CC did not: the Multi Sampler. Like John, I took advantage of the MIDI-emulating resources of Waveform 11 to compile my clash SFX library. That way, I could add in my clash sounds with the press of keys and the click of the pen tool, as opposed to sluggishly dragging them into the DAW and cumbersomely editing.

Multi Sampler with my clash SFX

I felt like I had unlimited power, but there was an unforeseen cost. I did have lossless audio, but I also suffered the loss of space. With almost twenty 32-bit clash sounds imported in the multi sampler, along with all my 32-bit SuperCollider recordings, there was definitely not enough space to fit my samples into a galaxy far, far away. I did not fly into any glaring struggles with the gargantuan-sized files until the end. Exporting, compressing, and uploading were the big troubles.

Anyways, after writing in most of my clash sounds, I still couldn’t hear a fight scene. The swings of the fighters’ weapons did not sound in unison despite my best efforts; everything felt scattered and random. I had no choice but to spam the slash key and scramble my spliced saber clips. I had really hoped to keep my samples fresh in one take. The process felt like performing excessive surgery on my samples, but that kind of surgery is part of what makes sound editing so essential. After some more scrambling, I finally got a decent-sounding lightsaber fight.

Like a song, my duel has different sections. I wanted my sounds to tell a story, but I could only do so much with the samples I had. My course of action was better than recording alongside the movie scene, but over three-plus minutes, “vwoom” and “pssshhh” sounds get boring. My cat agreed when I played it to her. She fell asleep.

Therefore, I had to prioritize the movement of the fight throughout the space; soundscape and spatialization are extremely crucial for movies. Mix mode activated! I automated panning to give the fight some realistic motion. The fighters start on opposite sides, with one hum panned left and the other right. Using automation, I move them to the center. To give the fight more unity, I fed the hums and clashes into another track (a substitute for an aux send/return) with more automated panning to indicate the pair moving around the battleground as they fight. For transitions between fight sections, I recorded some footstep sounds, a punching sound for when the Jedi is punched by the Sith, and the flapping of a shirt to convey the Jedi subsequently flying through the air.

Jedi track with automated panning
Saber hums and clashes are fed into this track for more automated panning

Some filters were added on individual tracks. The buzz from the Sith saber overpowered the Jedi saber, so I added some EQ to boost the Jedi track. The clashes sounded too crisp, so I threw on a bandpass filter. And since a few clashes “clashed” with my ears while others were at reasonable amplitude, I added a compressor to ease the sharp differences.

The final step was to create the distance (close/far) impression of the soundscape. I looked back at Huber and echoed this diagram.

From the Huber reading on mixing basics

Since I had to fiddle with volume, I exported my project with normalized audio and imported it into a completely new Waveform project for mastering purposes. My changes in depth of field come only at the end of the duel: I automated some volume reduction, an EQ frequency decrease, a reverb wet-mix increase, and lastly some delay mix to improve the sense of distance.

I was but the learner. Now I am the master.

In Short?

As you can see from my process, I had to change my plans several times. Similar to Josh’s project, my original plan was to use purely physical foley to replicate a scene from The Empire Strikes Back (1980); the final plan was to use a mix of SC3 and foley to construct a fight scene envisioned in my head. Petersen forewarned that sound for movies can require a lot of trial and error. A sound “scene” can be much harder to design when there isn’t even a scene at all! Oddly enough, I found some of the miscellaneous foley to be the most difficult; I never realized how exact the sounds have to be for realism. When listening to the starting footsteps, I imagined two people with extremely stiff legs taking baby steps towards each other instead of two trained fighters. I kept this in because I felt it was satisfactory, and it is also a good way of noting how unnecessarily difficult foley can be.

It would be an overstatement to call the designed duel product below “the tip of the iceberg”. It can be difficult to enjoy the final result by itself, so I am contextualizing my piece here as much as I can in this blog. Working on SFX put a lot on my plate, but it is the “fun” kind of work. Once the pandemic is over I will be having more fun with foley in the campus sound studios!

Use headphones (NOT AIRPODS) for best experience. Also close your eyes if you want to imagine the duel in your head.

Tribute to David Prowse, the OG Darth Vader.

Additional Sources

Xfo. (n.d.). Ben Burtt – Sound Designer of Star Wars. Retrieved December 13, 2020, from http://filmsound.org/starwars/burtt-interview.htm

 

Final Waveform Project: Bronze

 

I hope you enjoyed that mp3 of my track. For my final 035 project, I integrated SuperCollider with Waveform to produce some electronic music. I achieved this by generating random(ish) MIDI data in SuperCollider and feeding it into Waveform for one of my lead synths.

Screenshot of Final Session (it’s a combination of both of these^)

Sample Hunting

As per usual, I began by searching through my sample libraries for interesting samples. I was looking both for intriguing atmospheric samples that would inspire my music and for atmospheric samples that would provide some nice background noise. I had never experimented with randomness in my music before, so I was a bit unsure of myself. Pretty early on, however, I found that the samples that worked best were ones that establish a musical key yet aren’t restrictive (they work harmonically with every note within that key). I also selected a set of drum and percussion samples, as well as a melodic bell sample I made earlier this year in Logic.

Intro:

After opening Waveform, I dragged my atmospheric samples into the project and began developing ideas about their relative placements in the timeline. Before too long, I decided on a key and matched the BPMs of my samples. Some of the samples I didn’t alter at all, other than adding effects. A couple of plugins I used frequently were Tracktion’s Bit Glitter and Melt.

As I mentioned in my previous two Waveform projects, I really love these two plug-ins — when used together, they can warm up and almost “break down” a sample. Other samples I put into a Multi Sampler patch, chopped up, and used to play melodic motifs on my MIDI keyboard.

Example of Sample I mapped in Multi-Sampler

I also added some perc loops, some of which I chopped up. The point of this section was to introduce the material I’d be working with for the rest of the track, as well as set the mood. Thus, towards the end of the intro section, I introduced a pitched-down and warmed-up version of the main lead melody of the drop — a trimmed vocal sample I tossed into a sampler and played using my MIDI keyboard. Another notable aspect of this section is a flute line, which I also played using a sampler.

Moving over to SuperCollider

At this point in the process, I moved to SuperCollider to generate semi-random midi. This process was composed of two parts — writing the actual random code, and setting up a line of communication between SuperCollider and Waveform. Professor Petersen shared with me instructions on both of these steps.

In terms of generating random data, I wanted SuperCollider to pick notes that were within the scale I was working in, rather than notes within a given note interval (a slight variation on the code that Professor Petersen shared with us). I achieved this by creating an array containing the notes of my preferred scale, then using inf.do and .choose to pick MIDI notes from the array at random. I attached a picture of the code below.

Setting up a line of communication between SuperCollider and Waveform was rather painless, actually. I sent the output of my SuperCollider code through the IAC driver and then set the driver as the input for a Subtractive patch. That way, I could run the code in SuperCollider and immediately hear the generated MIDI in Waveform.
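Putting the two steps together, a sketch of the approach might look like this (the port name "Bus 1" and the scale array are assumptions for illustration; check MIDIClient.destinations for the actual IAC port):

    (
    MIDIClient.init;
    ~out   = MIDIOut.newByName("IAC Driver", "Bus 1");  // route to the IAC bus
    ~scale = [62, 64, 65, 67, 69, 72, 74];              // notes from the working scale
    fork {
        inf.do {
            var note = ~scale.choose;                   // random note from the array
            ~out.noteOn(0, note, 80);
            0.25.wait;                                  // rhythmic value; adjust to taste
            ~out.noteOff(0, note, 0);
        }
    };
    )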

The plucky Subtractive patch you hear at the beginning and end is the MIDI I ended up using. I recorded about two minutes’ worth of MIDI and picked a section I thought would work well.

First Drop 

By far, I spent more time on this section than any other. To start, I pulled up my bell percussion sample. I had a rough idea of what I wanted the section to sound like, drawing inspiration from Sam Gellaitry and Galimatias. The sample was already set to 130 BPM, which is the BPM I often like to work in, so I didn’t have to do much manipulation other than chopping it at times to create a stutter effect. In terms of effect plugins, I added a bit crusher to make the sample more percussive, so it wouldn’t clash as much with the other melodic elements of the track.

Next, I experimented with some vocal chopping and developed the chop countermelody that exists now. I knew this wasn’t the lead, but rather a complementary element.

Following this, I organized my percussive elements. The drum groove itself is pretty simple: a clap on beats 2 and 4, and sparse hi-hats, which I recorded at the end. Later on, I wanted more width in the track, so I automated the hats panning left and right. To highlight the bass hits, I coupled my 808s with kicks. I knew I wanted the 808s to be a focal point of the piece, so I initially opted for a sample that was roundish and had a good amount of high-frequency presence. I had noticed in previous tracks that my subs carried unnecessary low-end, so I threw on a high pass filter for some light correction. A couple of days later, however, I ended up trimming some of the mid and high frequencies of the bass as well. Aside from these melodic and drum elements, I spent a decent amount of time on little embellishments: chimes, sub-bass slides, risers, and impacts, which helped fill out the section.

 

Main Slide Lead

Surprisingly, I didn't actually write the main slide lead until the very end of working on this section. I tried a lot of different sample chops and leads for the main melody, but none of them stuck until I came across the one you hear now. Initially, the melody was an octave down. It became apparent, however, that there was too much mid-frequency clutter, so I moved it up an octave. Because of the lead's strange timbre, the octave shift introduced some mixing issues. What ended up doing the trick was a high pass filter paired with an EQ that cut some mids and accentuated the highs.

First Drop

 

B Section

In this section of the piece, I tried to depart from the choppiness of the first drop by incorporating two sustained pads alongside a traditional lead. For the pads, I edited two instances of Subtractive and played a chord progression in the relative minor. I spent some time messing with the filters in Subtractive to place the pads correctly in the mix. I also used a variety of effect plugins, including chorus, phaser, and reverb; some purely for their aesthetic, others for their utility in the mix. Stereo fx and widener, for example, made room for key elements in the section (the lead and the bell loop).

Main Lead

I had a lot of fun with the main lead in this section: I was able to use my Arturia KeyLab, which has pitch and mod wheels. Editing the pitch wheel data post-recording turned out to be a bit unintuitive, but I'm happy I incorporated it. I found my patch a bit boring, so I added a phaser with a very slow rate and very little feedback, which only slightly altered the sound yet brought it to life. Other than that, the fx rack is pretty standard.

Main Lead

Bass

I decided to switch up the bass in this B section for the sake of variety. Not only did I change the bass pattern, but I also used a different sample: a sub-bass that sat lower in the frequency range than the first, so it wouldn't compete with my pads.

Other Elements

I brought back the bell loop in this section; this time I boosted some mid frequencies with an EQ to make the loop even more percussive. Other than that, I kept the same basic drum pattern, replacing the clap with a snare for variation. I also reintroduced a sample chop from the beginning to tie the intro to the B section.

Final Section

Final Pads

I wanted to wrap things up nicely in the last drop by combining elements from both the A and B sections. I kept the same structure and feel as the A section (choppy and fx-focused) while also reintroducing the chord progression from the B section using two new Subtractive patches. The first was a high-frequency phaser pad with a weird but cool envelope. The second, a standard Juno-ish pad, I added for strength, since the first pad had a weak tail. At the very end of the track, I reverted to a mysterious, dark variation of the intro, achieved primarily with reverb and Melt.

Risers: pitch shifting and filter automation

One thing to note is that I had a really fun time experimenting with risers and section transitions. In most cases, I used traditional white noise risers, but I also developed other techniques to complement or even replace them. For example, before the last drop, I automated a high pass filter on a synth pad to build anticipation; this can be heard around 3:00. Another technique was pitch shifting: in some instances, I put a pitch shifter over a pad or sample and drew in a linear upward automation right before the drop, which made for a makeshift riser of sorts. The last method used the "Redux" filter in Subtractive. I found that by opening a Subtractive lead patch, inputting a single MIDI note, and automating the Redux filter cutoff, I could get a grainy, distorted sound that made for an interesting riser; this can also be heard around 3:00. Oftentimes, I combined two or more of these methods for my section transitions.

Redux Automation
Pitch Automation

Some Other Things I Did

Sidechain compression: I used sidechain on a variety of tracks in this project. I did, however, break the habit of putting sidechain on my bass. As Kenny Beats (and others) has reminded us many times, don't sidechain the 808s.

Final Mix of the Master

I did a lot of mixing alongside the writing process, which paid off in the end. The only manipulations I did to the master were adding a limiter and an EQ, accentuating the high end and cutting some low end. I've noticed that my headphones lean towards the brighter side, so I often end up making these tweaks.

I enjoyed producing this track. I was a bit hesitant at first, because the genre I usually produce doesn't seem, at first glance, conducive to the type of random MIDI generation that Flume, for example, incorporates in his music. Giving SuperCollider a scale to choose from did the trick, however, and I'm quite happy with how things turned out!

Homeland: My Final Waveform Project Made Entirely of My Recorded Samples (because I’m insane)

My final project, which I’ve named “Homeland”:

Quick note before getting into the post: I'm only now realizing how long this is; the amount of audio and video clips and screenshots I've included is kind of insane. So feel free to skim through, or just read the concept section below to get the idea.

Concept

 

As many of you probably know by now, I love using my own recorded samples in my songs. Something about taking an everyday object and transforming it into its own instrument is so fun and fascinating to me, so for my final project, I wanted to take it a step further and challenge myself to not use a single MIDI or electronic sound. All my sounds would be sampled from everyday objects around my house, or from a live instrument like my piano or ukulele.

Preliminary ideas for sounds I could use in my house!

Running off of that concept, I also sought to create a song with more meaning going in, as opposed to my last two projects. I eventually settled on splitting the song into two sections, one outdoors and one indoors, each composed of everyday sounds and samples from only that setting. For example, I couldn't use the sound of a pot boiling for my outdoors section, or leaves rustling for my indoors section. I decided that I wanted the outdoors section to be louder and more chaotic, reflective of the chaos of the outside world and society as a whole, and the indoors section to be quieter and more intimate, like life at home: the feeling of being wrapped up in a blanket next to a fireplace. Throughout both sections, I kept a few main instruments constant, mainly the piano, guitar, and my voice (yikes), sort of representing how music, as a form of expression and communication, is the bridge between the two worlds for me. Real deep, I know.

 

Structurally, I designed the song to begin with a bang in the outdoors section, then slowly warp into the middle indoors section, which eventually builds into a dramatic, intense climax combining elements of both. A pretty standard arc, but it was a challenge to transition smoothly between the sections, given the great contrast in both sound and mood.

My entire Waveform project!

Recording

To achieve all of this, I logically had to start by recording. And with only my measly iPhone 7 at hand (although with the excellent Voice Record Pro app), I was nervous (I ordered a mic for home, but too late…). I certainly didn't have a shortage of sounds around me, both in my house and in my backyard and the surrounding woods, where it had just snowed. The challenge was to 1) narrow down the sounds that were actually useful, and 2) produce good-quality recordings of them. This was difficult, especially with wind, trains, cars, and my neighbor's lawn mower outside. Even inside, it was hard getting my family to stay silent as I scurried around like a madman, tapping kettles and pans against the kitchen table. I did try to use some of the close-mic techniques I learned from researching my iPhone mic all the way back in Homework 2, which certainly helped. Over a week of work, I recorded over 84 samples of everyday objects and instruments (not counting maybe twice as many mess-ups and second takes). After a painstaking process of deciding which sounds to use and for what, I ended up using maybe less than a third of those. Welp.

Some of my samples!

Behind Each Track

 

I'm not sure how exactly to do this in a way that's easiest to follow, but I thought I'd go through each of the tracks in the song and get into the background of all the recorded sounds and samples that went into them. Back again by popular demand are the original sounds, video reenactments, Waveform screenshots (when available; I wish I had taken more while making each individual sound for the drum samplers), and the end products. Unfortunately, I can't include all 84 videos, although they are fun for me to look through.

 

Outdoor Drum Sampler

All of the sounds in this sampler were taken from a fun walk my sister and I took in the woods near our house and in our backyard. I used mostly low pass/high pass filters, reverb, and compressor plug-ins to manipulate these sounds. I still do not have a MIDI keyboard, so I had to make do with my QWERTY keyboard to input my beats.

Kick: This is simply me dropping a rather large rock I found into a patch of dead grass. It was hard recording it without also picking up the brush of grass and snow, but after many, many takes, we got a good one. 

Snare: This is a combination of me hitting a solid block of snow with our shovel (the thump), and my shoes crunching in the snow (the tzzz of the wires). Also a tough one to record well and to manipulate into something resembling a snare.

Hi-Hat: This is me breaking a small dead branch. Surprisingly, the branches in our area all broke rather dully, except for this single clean break that I managed to pick up.

Crash: This is me smashing an icy piece of snow against a rock. I now admit I may have cheated my indoor/outdoor rule (only on this one though!) and layered sounds of me ripping paper and drawing with a Sharpie to make the sound more convincing.

Toms: This is me smashing an icy piece of snow against the backboard of my basketball hoop. I pitch shifted it to create a high, mid, and low tom.

 

Indoors Drum Sampler

Again – a lot of low pass/high pass filtering, reverbing, delaying, chorusing, and compressing here.

Kick: This is me hitting a large candle against our carpet floor. It achieved the deep, muffled sound I was looking for.

Snare 1: This is me clicking a small wooden jewelry box from my mother against our marble kitchen counter.

Snare 2: This is a combination of me slamming our metal kettle against the kitchen counter (for the metallic, icy sound I wanted) and me slamming my toilet seat shut (to beef up the actual hit of the snare).

 

Hi-Hat: This is a combination of me clicking two marbles together (the main sound) and me flipping the light switch on my lamp (for more depth and fullness). In general (for both outdoors and indoors), it was difficult to control the sound of the hi-hat, as it often cut through everything and stood out like a sore thumb. Eventually, with some slight panning (as hi-hats often get) and added plug-ins on the original sample, I was able to remedy this.

Pedal Hi-Hat/Brush: I started out envisioning this sound like a brush, but eventually it turned into something resembling a pedal hi-hat. This was me spraying our bottle of alcohol spray. 

 

Pianos

I recorded piano parts for both the outdoors and indoors sections. I had to do a lot of experimenting with where to place my phone, and at what angle, to get just the effect I wanted. The outdoors piano isn't that interesting; I just applied a lot of reverb, chorus, and some phaser to give the illusion of the piano being played outside (combined with my outdoors ambience). The indoors "piano" is a little more interesting, as I mimicked the sound of a synth using heavy low pass filtering and chorusing, and by reversing each clip of the chords I played. I had to use a LOT of automation on this, as reversing the chords and cutting and splicing clips created a lot of pops, which I had to conceal by manipulating the volume. What a pain. Someone please tell me how to copy automation points.

 

Guitar/Ukelele

Not the most interesting part, given that I can't play guitar and had to again break my self-imposed rules and dig into the Smooth R&B Guitar sample library. I used pitch-shifted snippets and spliced pieces of two main loops, adding high pass filters to make the guitar sound tinny and distant when it first comes in, as well as reverb later on for a Western-y effect at the end. As for the random ukulele outro that I added on a whim a day before this project was due, that was actually recorded by my sister. She's also not the greatest at it, so we recorded each chord and note on its own while looking up chords online, and I pieced it together from there (along with high pass filtering and reverb to make it sound more distant and outro-y).

My sister and my makeshift studio, complete with online chord diagrams

Bass

Both the indoors and outdoors bass were just me playing the lower register of the piano. I really wanted to find a natural sound that could mimic a bass; I tried the hum of my microwave, my printer, and my electric toothbrush, but nothing sounded right. So I gave up and ran my piano through a lot of plug-ins, notably a low pass filter and a cool distortion plug-in called Bass Drive, to make it sound convincing. Using the piano did make it easier to play more complex patterns and basslines in one take, rather than piecing every note together one by one from an everyday sound. This track was also one of two instances (the other being the ukulele tracks) of me using a bus rather effectively! Yay!

Ambience

I used four main ambient recordings of my own. A general recording of the outdoors (leaves rustling, birds chirping, all that) creates an outdoors atmosphere for the beginning and also transitions into the ending. A recording of my mother's soup boiling creates a warm, home-y feeling for the start of the middle indoors section. And a gong-like effect, created by combining a recording of me tapping two wine glasses together with one of me ringing my sister's singing bowl (usually used for meditation), signals the transition from the outdoors intro into the quiet, middle indoors section.

Lead Fake Out-of-Tune Trumpet

Now this is my favorite. I struggled to find the right instrument/sound for my lead melody for the longest time. I really did try everything, including my electric toothbrush again, my sister making a farting noise with her hand, door hinges creaking, and assortments of bleeps and bloops from machines around the house, but nothing seemed to fit, be capable of producing a melody, or resemble anything like an instrument. So I gave up and just sang it, and then heavily distorted and filtered it, among other plug-ins, to make it somewhat resemble an out-of-tune trumpet. I really did try to sing in tune, and tried pitch shifting to correct some of it, but I eventually decided to let it be in all of its imperfect glory. Something about it being slightly off rhythm or out of key at times is perfect for me in this song, especially given that everything around it is rather polished and in key. There's a metaphor here that I won't try too hard to force, about me and my place in this world being just like this out-of-tune fake trumpet, yada yada. I'll leave it up to you.

 

Mixing and Mastering/Final Product

 

I spent a LOT of time on the fine details of this track compared to my previous projects, especially with mixing, which would make it even more embarrassing if the mixing were bad. I paid attention to mixing all along the way rather than doing it all at once at the end, along with other detail-oriented tasks I've usually saved for last: EQ'ing, panning, randomizing velocities for MIDI, tightening rhythms by quantizing MIDI and entering specific measure values for everything else, creating automation curves, editing the samples themselves within drum samplers, etc.

 

I also did a lot of listening through and obsessing over painstakingly small details, sending drafts to friends throughout so they could catch things I might've missed, even after taking a day off to rest my ears. A snare hit that may have been slightly too loud. A guitar strum that seemed to clip a little as it was cut off. A piano chord that seemed just a little off rhythm. An ambient sound effect that could be panned a little more to the right. Stuff like that. I exported the entire song several times, each time listening out loud, through headphones, and through a speaker. I was VERY careful (throughout the entire process) to make sure nothing was redlining, so as to avoid clipping or distortion in the final exports, since this was a problem with my last project even though I paid a lot of attention to mixing and mastering then, too. I also loaded the export into Audacity just to make sure the amplitudes were in check, and added a compressor/limiter to my master track to be safe.

An example of helpful suggestions from friends on details I might’ve missed!
Double checking for clipping on Audacity

Conclusion

 

Generally, I am satisfied with how it all turned out. I spent a lot of time on this, and I think I've come up with a piece of work that has meaning to me and is pretty unique in its concept and sound. It's really made me appreciate just how much can be achieved without access to real instruments like drums, and how there is music in so many everyday objects in our lives, waiting to be unlocked and found (how poetic). I had a few goals from my last Waveform project for this final, and I think I achieved most of them:

1) import sampled sounds to create my own drum sampler with more unique sounds that I have more control over (and can actually apply plugins to) – check (very glad I was able to achieve this as it was so cool and fun)

2) employ buses more, just for convenience's sake – check (somewhat? But I also didn't need them as much)

3) figure out the mixing balance between the lead and beat, as I felt I may have overcompensated a little with this project (last project, I had problems with the beat overpowering my lead), and I struggled with the balance again this time between the guitar and beat – check (I definitely had a far better sense of mixing, and actually went back to edit my samples within the drum samplers to achieve this balance)

5) experiment with more changes and variety in chord progression, so it isn't just the same progression over and over (this required me to rely less on samples) – check (definitely! Compositionally this piece was far more all over the place and complex, in a good way I hope….)

 

Finally, it's been such a good time learning along with all of you this semester and hearing all of your amazing work. It's so cool to get to know people who are passionate and talented in so many different ways, and I've been so inspired. Thank you also (especially) to Professor Petersen for being a fun, cool, and understanding professor through these unpReCEdENteD TimES. Hopefully we all stay in touch and can maybe see each other in an actual studio/classroom at some point.

 

Platformer Sounds in SuperCollider

Platformers are a genre of games that involve heavy use of climbing and jumping to progress. Examples include Super Mario Bros, Hollow Knight, and VVVVVV.

~Inspiration~

Over Thanksgiving break, I played a lot of games… maybe a little too much! This project is heavily inspired by a few of them. For example, I picked up Hollow Knight, a fun yet frustrating single-player platformer. The sounds of jumping, using items, swinging a nail, and the ambience followed me into my sleep. I thought I could try replicating some of them for my final project.

Ambient Sounds

Snowy Environment

The first pieces of the sound pack I worked on were the ambient sounds, which would be used to indicate the environment of the current level of the game. I began by creating a snowy feel using the LFNoise1 UGen in a SynthDef.

 

 

 

At first, I had trouble configuring the audio signal to work with the envelope: I only had the attack and release times defined for the percussive envelope, which caused the amplitude to be released linearly, which I did not want. Instead, I wanted the sound to stay at the same level until the end, where it should taper off. To remedy this, I used the curve argument of Env and set it to 100 instead of the default -4.
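
In sketch form, the SynthDef looked something like this; the filter cutoff, noise rate, and durations here are approximate stand-ins rather than my exact values:

(
// Approximate snowy-wind ambience: interpolated noise through a low pass
// filter, with a long envelope that holds its level and falls off at the end.
SynthDef(\snowy, { |out = 0, amp = 0.3, dur = 8|
	var env = EnvGen.kr(
		Env.perc(0.5, dur, curve: 100),  // curve: 100 keeps the level up until the end
		doneAction: 2
	);
	var sig = LFNoise1.ar(1000);         // soft noise source
	sig = LPF.ar(sig, 400);              // dark, muffled, snowy character
	Out.ar(out, (sig * env * amp) ! 2);
}).add;

Synth(\snowy);
)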

Here’s the resulting sound clip:

Rainy Environment

Moving on, for the next ambient sound I decided to create a rainy environment. I used a similar approach to the snowy instrument, but with a high pass filter instead of a low pass filter, and I changed the frequency to more accurately capture the sound of hard rainfall.
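
Under the same assumptions as the snowy sketch above, the rainy variant might look like this:

(
// Rainy variant: brighter noise and a high pass filter, so only the
// "hiss" of rainfall remains. Values are again approximate.
SynthDef(\rainy, { |out = 0, amp = 0.3, dur = 8|
	var env = EnvGen.kr(Env.perc(0.5, dur, curve: 100), doneAction: 2);
	var sig = LFNoise1.ar(8000);   // faster interpolation for a harder patter
	sig = HPF.ar(sig, 1500);       // high pass instead of low pass
	Out.ar(out, (sig * env * amp) ! 2);
}).add;

Synth(\rainy);
)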

 

 

 

After that, I applied a reverb effect to it using the Pfx class.
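
Here's a minimal sketch of the Pfx idea, assuming the \rainy SynthDef above and a simple FreeVerb-based effect; the parameter values are placeholders:

(
// Pfx wraps a pattern and routes it through an effect synth. The effect
// SynthDef reads from the bus it plays on and replaces the signal there.
SynthDef(\verb, { |out = 0, mix = 0.33, room = 0.8|
	var sig = In.ar(out, 2);
	ReplaceOut.ar(out, FreeVerb.ar(sig, mix, room));
}).add;

~rain = Pbind(\instrument, \rainy, \dur, 8);
Pfx(~rain, \verb, \mix, 0.4, \room, 0.9).play;
)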

Background Music

For the background music, I wanted to experiment with some orchestral sounds. It was pretty difficult to get a convincing SynthDef for strings, so I looked on sccode for ideas. I found that someone had written code mapping samples to MIDI notes (linked here), so I used it as the basis for a granular SynthDef that plays samples provided by peastman, loading the instruments into buffers. The code that maps the instrument samples to MIDI notes is a long block, so you'll have to open the image in another tab to view it.
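
The gist of that mapping, in a simplified and hypothetical form (one sample per MIDI note, placeholder file paths, and a plain PlayBuf player standing in for the actual granular SynthDef):

(
// Load one sample per MIDI note into an array indexed by note number.
~buffers = Array.newClear(128);
(48..72).do { |note|
	~buffers[note] = Buffer.read(s, "samples/strings/" ++ note ++ ".wav");
};

// Simple sample player; the real SynthDef was granular.
SynthDef(\strings, { |out = 0, bufnum = 0, amp = 0.5|
	var sig = PlayBuf.ar(2, bufnum, BufRateScale.kr(bufnum), doneAction: 2);
	Out.ar(out, sig * amp);
}).add;
)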

With the instrument defined, I decided I wanted the music to have some chromatic notes to produce an unnerving sound, something that would be played during a final boss fight. I configured it to use the default TempoClock at 60 BPM and kept it simple with a 4/4 time signature. It is mainly in E major, but as I mentioned, there are some chromatic notes, like natural Cs and Ds. Here's the resulting clip.
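
Not my actual score, but a toy version of the setup: the default clock at 60 BPM, mostly E major, with chromatic C and D naturals slipped in.

(
TempoClock.default.tempo = 1;  // 60 BPM

Pbind(
	\instrument, \strings,   // the sample instrument sketched above
	// E, G#, B, C natural, B, D natural, C natural, E
	\midinote, Pseq([64, 68, 71, 72, 71, 62, 60, 64], inf),
	\bufnum, Pfunc { |ev| ~buffers[ev[\midinote]] },
	\dur, 0.5
).play;
)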

Walking

Moving on to the sounds that would be triggered by events, I started with the walking sound. I looked back at the sound_fx.scd file to follow this piece of advice:

Wise words

I recorded some sounds from a game and put them into ocenaudio and Audacity to use their FFT features. Here are the results from ocenaudio, highlighting only the portion containing a single footstep:

ocenaudio footstep FFT.

I noticed that frequencies around 1000 Hz and below were the most prominent, so the footstep's frequency content should probably be emphasized there. The recording includes fairly loud ambient sound, which probably explains the higher frequencies.

I attempted to replicate this sound using a SinOsc UGen with a Line to generate the frequency sweep. I used the Line for the frequency argument of the SinOsc because a footstep does not have a constant frequency, so this gave me more flexibility. I configured the Line to start at 200 and end at 1, resulting in this footstep sound, which I ran through an infinite Ptpar.

SynthDef(\walking, {
	var sig;
	// quick downward sweep (200 Hz -> 1 Hz over 20 ms); doneAction: 2 frees the synth
	var line = Line.kr(200, 1, 0.02, doneAction: 2);
	sig = SinOsc.ar(line);

	Out.ar([0,1], sig * 0.6);
}).add;

~soundFootstep = Pbind(
	\instrument, \walking,
	\dur, Pseq([0.3],1)
);

(
Ptpar([
	0, Ppar([~soundFootstep], 1)
], inf).play;
)

Jumping

For the jump sound effect, I wanted a bubbly type of sound. I used a similar approach to the footstep SynthDef, but made the Line rise in frequency instead of descend, and made the envelope and Line last significantly longer.

ocenaudio jump FFT

I took note of the FFT of the jump sound from the game, but replicating it didn't give the type of sound I wanted; it was really harsh, so I modified the frequencies a bit, resulting in this sound clip:

SynthDef(\jumping, { |time = 0.25|
	var sig;
	// upward 100 -> 200 Hz sweep under a percussive envelope, both lasting `time`
	var env = EnvGen.kr(Env.perc(0.01, time), doneAction: 2);
	var line = Line.kr(100, 200, time, doneAction: 2);
	sig = SinOsc.ar(line);

	Out.ar([0,1], sig * 0.8 * env);
}).add;


~soundJump = Pbind(
	\instrument, \jumping,
	\dur, Pseq([2],1)
);


(
Ptpar([
	0, Ppar([~soundJump], 1)
], inf).play;
)

Landing

I had a GENIUS idea of continuing to use the Line class to create my signals. You'll never guess how I achieved the landing sound effect. Well, you might. I just swapped the start and end frequencies of the Line. It made an okay landing sound, which sounds like this:

I tweaked it a little more and changed the end frequency to be lower, at 10 Hz, which I think sounds a bit better.

SynthDef(\landing, { |time = 0.1|
	var sig;
	// the jump sweep reversed: 200 Hz down to 10 Hz
	var env = EnvGen.kr(Env.perc(0.01, time), doneAction: 2);
	var line = Line.kr(200, 10, time, doneAction: 2);
	sig = SinOsc.ar(line);

	Out.ar([0,1], sig * env);
}).add;


~soundLand = Pbind(
	\instrument, \landing,
	\dur, Pseq([2],1)
);


(
Ptpar([
	0, Ppar([~soundLand], 1)
], inf).play;
)

Picking up an item

When I brainstormed what kind of sound to give the item pickup, I thought of Stardew Valley and its sounds, specifically harvesting. I used this as an excuse to take a break from homework and play the game for research purposes. The pickup sound seemed pretty simple. Here's what I came up with, using this code:

SynthDef(\pickup, {
	var osc, sig, line;
	// quick upward 100 -> 400 Hz chirp; no doneAction on the Line, so the
	// envelope (which frees the synth) can play out its full decay
	line = Line.kr(100, 400, 0.05);
	osc = SinOsc.ar(line);
	sig = osc * EnvGen.ar(Env.perc(0.03, 0.75, curve: \cubed), doneAction: 2);
	Out.ar([0,1], sig * 0.8);  // sig already includes osc, so no second osc factor
}).add;

~soundPickup = Pbind(
	\instrument, \pickup,
	\dur, Pseq([1],1)
);

(
Ptpar([
	0, Ppar([~soundPickup], 1)
], inf).play;
)

Throwing away an item

For the final sound of this project, I made something to pair with picking up an item. I wanted it to have a somber tone that elicits the image of a frown: the disappointment of being thrown away emanating from the item itself. As sad as the fact that this class is ending. Anyways, here's the sound!

SynthDef(\throw, {
	var osc, sig, line;
	// downward 400 -> 50 Hz slide for the somber feel; as with \pickup, the
	// envelope frees the synth so the full decay plays
	line = Line.kr(400, 50, 0.2);
	osc = SinOsc.ar(line);
	sig = osc * EnvGen.ar(Env.perc(0.03, 0.75, curve: \cubed), doneAction: 2);
	Out.ar([0,1], sig * 0.8);
}).add;

~soundThrow = Pbind(
	\instrument, \throw,
	\dur, Pseq([1],1)
);

(
Ptpar([
	0, Ppar([~soundThrow], 1)
], inf).play;
)

Reflection

Doing this project has made me more appreciative of the sound engineering in every game I play. I now find myself analyzing the different sounds that developers choose to incorporate and sitting in thought about how they made them. There are surely some, though not many, who use programming languages like SuperCollider to synthesize such sounds. Exploring new concepts like buffers was pretty challenging, but it showed me the wide range of possibilities SuperCollider offers.

Thanks for reading my writeup. Hope you enjoyed!

Waveform Final Project: Masks

With this final composition, my main compositional goal was to create a bittersweet piece that I myself would return to if I ever felt a little down in the future. From a technical perspective, I aimed to bring all the skills I had acquired from the SuperCollider module into my song.

Currently, my default “simp song” is Pressure by Draper.

This song's balance between sadness and motivation works wonders for encouraging me to carry on in tough times. Musically, I'd pinpoint the reason behind its effectiveness as the fullness of its sound. Before taking CPSC035, I never noticed all the harmonies and pads in the background that fill in this song's blank space; I always just heard the lead, bass, and percussion. Though I am still unable to pinpoint the exact number of tracks and their respective notes throughout the song (I suppose that's also a sign of a good pad), I certainly planned to have ample padding in my own composition. Another aspect of this song that I wanted to carry over was the tranquil break between powerful drives.

With respect to the more technical side of this project, there are a few things I learned throughout the SuperCollider module (some are SuperCollider related, others are just me pondering Waveform in the shower):

  1. You don't have to find the perfect sample for what you're looking for. I specifically struggled with this in my first two Waveform projects, spending hours trying to find just the right kick, snare, and hi-hats, before being left dissatisfied and just using the 808 and 909 drum kit samples instead. Yet in SuperCollider, the fact that I was able to program bass and snare sounds from just simple waves and a few effects proved to me that the sample only has to be close to what is desired, after which using an abundance of effects is completely acceptable.
  2. Use automation curves to apply effects to certain notes. In my last Waveform project, I really struggled with automation curves because I could never quite figure out how to automate effect parameters. Turns out, I have to first drag the effect into the effects chain before I can select the specific parameter to automate (I thought you had to make the automation curve first, then map it to a parameter). With the ability to automate effect parameters, I could selectively apply effects to certain notes. For example, if I wanted to add reverb to only the last note of a musical phrase, I could use an automation curve to turn the reverb's wet level down to 0 for all notes except the last.
  3. Use automation curves to give sounds more character. One really cool thing we learned in SuperCollider is how to use oscillators to modulate certain parameters of an effect. Because of this, I also aimed to use automation curves to mimic that oscillator effect on some of my plugin parameters.
  4. Envelopes: To be completely honest, I didn't really have a full understanding of how envelopes functioned, or of the difference between adsr and perc envelopes. Through SuperCollider, though, the concept of treating an envelope as a time-based function that modifies its input signal really helped me understand. Most helpful was the assignment where we had to create our own subtractive synth: juggling the difference between adsr and perc was frustrating, but it undoubtedly helped me understand envelopes in general (see the sketch after this list).
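
For anyone who got as confused as I did, here is a tiny SuperCollider comparison of the two envelope types; the frequencies and times are arbitrary:

(
// perc: a fixed attack/release shape that ends (and frees the synth) on its own
SynthDef(\percEnv, {
	var env = EnvGen.kr(Env.perc(0.01, 0.5), doneAction: 2);
	Out.ar(0, (SinOsc.ar(440) * env * 0.2) ! 2);
}).add;

// adsr: sustains at a level until the gate closes, then releases
SynthDef(\adsrEnv, { |gate = 1|
	var env = EnvGen.kr(Env.adsr(0.01, 0.1, 0.5, 0.3), gate, doneAction: 2);
	Out.ar(0, (SinOsc.ar(440) * env * 0.2) ! 2);
}).add;
)

Synth(\percEnv);       // plays and ends on its own
x = Synth(\adsrEnv);   // holds...
x.set(\gate, 0);       // ...until released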

The first thing I knew I needed to do in this song was separate my different percussive instruments onto different tracks. On my last Waveform song, I used just the multisampler as an easy way to get by in a genre that usually has a repetitive bass, since I did not expect to do any percussive automation. However, I eventually ran into the problem, albeit too late, that using the multisampler meant any plugin or effect I wanted to add to one instrument had to be added to all of the percussive instruments. This time, I created separate tracks for the kick, snare, hi-hat, and claves, letting me give each its own filters and automation tracks so that the percussion would have more life.

The second step was to find a minor chord progression that would set the tone of the song. I ended up going with C minor -> G minor -> Ab major -> Bb major, which follows the i-v-VI-VII progression. Right away, I wanted to test the capabilities of automation curves on more than just pan and volume, so I decided to incorporate a bass drop immediately, in measure 10. Beyond a gradual buildup in volume, I also wanted a gradual shift in the EQ. To apply this to both my viola and low pad, I fed both channels into a bus and applied a 4-band EQ with automation on the bus.

When thinking back to what made the bass drops in Draper's Pressure so effective, I noticed that the main contrast was in the amount and level of padding. Since I had already used a pad in my buildup, I thought, "Meh, I'll add another pad." Because the point of this pad was to act almost like the lead post-drop, I placed it in the sweet spot of frequencies our ears are most sensitive to, around 400-800 Hz. However, having both the lead and the pad be the same instrument was a problem: the pad had to be adsr, but such a powerful sustained lead was actually kind of painful to listen to. So, to emphasize the attack of the lead beyond just adjusting the adsr levels, I added another track built from 4OSC, with sine and triangle waves in a percussive envelope, repeating the same notes as the pad to give the lead a more distinctive character.

Despite the many parts of my liquid drums composition that I disliked, one aspect I wanted to keep was the high-pitched secondary melody complementing the lead. Since doubling my pad with my lead made for a less interesting lead, having these bells as a secondary melody complemented it very well throughout the chorus. In this case, however, I enjoyed having the secondary melody centered.

With both the 4OSC and the high-pitched bells adding character to the low pad, I used one more approach to liven it up: automation (as described in point 3 above). I automated the 4-band equalizer in a similar fashion to the bus during the build-up, but this time solely on the low pad (without the viola). I first tried a linear oscillation, but soon realized it didn't work because we don't perceive frequency linearly (the whole 1/wavelength thing), so curvature was necessary for the intended effect. I also incorporated a pan automation that created something resembling a little Doppler effect.

As mentioned earlier, I also really wanted this song to incorporate a rest in the middle. Though the rest in my last song was alright, I was adamant about being more meticulous with the fade and the build-up, beyond just one decrescendo and one crescendo. My central realization was that quiet did not have to mean boring; there was still room for fun plugin automation and effects in the rest. The idea I eventually came up with was to swell the volume of my low pad up to the beginning of each measure, only to have it fade out immediately, while also using the hi-hat to slowly fade out, similar to a delay.

The most challenging part was definitely the second bass drop. I wanted to make it even grander than the first, so one heartbreaking compromise was decreasing the overall volume of the first drop just a bit to leave more headroom for the second. From the juxtaposition between the first drop/chorus and the rest, I noticed there was a complementary juxtaposition between silence and sound; since juxtapositions run both ways, I decided to add a beat of silence just before the drop to create even more dramatic emphasis. In addition, I brought drums in immediately at the drop, using a percussive beat resembling dubstep and hip-hop (since dubstep always has the strongest drops). This section is also where I added panning to create even more fullness in the room: for my high secondary melody, I had each note pan between completely left and completely right. Finally, to add even more weight to the sound, this was the only section with a dedicated low bass track.

Below is a screenshot of my entire Waveform project, and the mp3. I decided to name this song Masks: just as many people put on a mask-like facade during difficult times, I will turn to this song.

Final Reflection thoughts:

Overall, I am really proud of this piece. What matters most to me is that I can say I really did my best work here: the problem I previously had with filling the silence behind a melody was solved by the pads I used throughout the song, and at no point did I, as a listener, feel bored. I have yet to have another down-in-the-dumps kind of mood since creating this song, but I know I'll at least give it a listen when the time comes, just hopefully not too close to finals. As for further questions and improvements, one that comes to mind is an issue I commonly have with inconsistent levels on sine wave sounds: when experimenting with pure sine waves, I could never get the track to play at a consistent level, and adding more tracks only made the sine wave more distorted and quiet. Another issue was that there would sometimes be popping and cracking even though I wasn't redlining and my envelopes weren't attacking too fast. For improvements, I'd next try to add more audio samples that I record myself, whether just piano or percussive sound effects. When peering at some of the pre-loaded projects in Waveform, I noticed that almost none use Waveform's MIDI to create sounds; instead, they use Waveform for its plugins and for mixing.