Homeland: My Final Waveform Project Made Entirely of My Recorded Samples (because I’m insane)

My final project, which I’ve named “Homeland”:

Quick note before getting into the post: I’m only now realizing how long this is, and the number of audio and video clips and screenshots I’ve included is kind of insane. So feel free to skim through, or just read the Concept section below to get the gist.

Concept

 

As many of you probably know by now, I love using my own recorded samples in my songs. Something about taking an everyday object and transforming it into its own instrument is so fun and fascinating to me, so for my final project, I wanted to take it a step further and challenge myself not to use a single MIDI or electronic sound in this project. All my sounds would be sampled from everyday objects around my house, or from a live instrument like my piano or ukulele.

Preliminary ideas for sounds I could use in my house!

Running with that concept, I also sought to create a song with more of a meaning going in, as opposed to my last two projects. I eventually settled on splitting my song into two sections, one outdoors and one indoors, each composed of everyday sounds and samples from that setting only. For example, I couldn’t use the sound of a pot boiling for my outdoors section, or leaves rustling for my indoors section. I decided that I wanted the outdoors section to be louder and more chaotic, reflective of the chaos of the outside world and society as a whole, and the indoors section to be quieter and more intimate, sort of like life at home: the feeling of being wrapped up in a blanket next to a fireplace. Throughout both sections, I kept a few main instruments as constants, mainly the piano, guitar, and my voice (yikes), sort of representing how music is the bridge between the two worlds for me as a form of expression and communication. Real deep, I know.

 

Structurally, I eventually designed the song as a whole to begin with a bang with the outdoors section, then slowly warp into the middle indoors section, which eventually builds into a dramatic, intense climax that combines elements of both the outdoors and indoors. A pretty standard arc, but I did find it to be a challenge to effectively and smoothly transition between the sections, given the great contrast in both sound and mood.

My entire Waveform project!

Recording

To achieve all of this, I logically had to start by recording. And with only my measly iPhone 7 at hand (although with the excellent Voice Record Pro app), I was nervous (I ordered a mic for home, but too late…). I certainly didn’t have a shortage of sounds around me, both in my house and in my backyard and surrounding woods, where it had just snowed. The challenge was to 1) narrow down the sounds that were actually useful, and 2) produce good-quality recordings of them. This was difficult, especially with wind, trains, cars, and my neighbor’s lawn mower outside. Even inside, it was hard getting my family to be silent as I scurried around like a madman, tapping kettles and pans against the kitchen table. But I did try to use some of the close-mic techniques I learned from researching my iPhone mic all the way back in Homework 2, so that certainly helped. Over this week of work, I eventually recorded over 84 samples of everyday objects and instruments (not including maybe twice as many mess-ups and second takes). After a painstaking process of deciding which sounds to use and for what, I ended up using maybe less than a third of those. Welp.

Some of my samples!

Behind Each Track

 

I’m not sure how exactly to do this in a way that’s easiest to follow, but I thought I’d walk through each of my tracks in the song and give the background of all the recorded sounds and samples that went into them. Back again by popular demand are the original sounds, video reenactments, Waveform screenshots (when available; I wish I had taken more while making each individual sound for the drum samplers), and the end products. Unfortunately, I can’t include all 84 videos, although they are fun to look through for myself.

 

Outdoor Drum Sampler

All of the sounds in this sampler were taken from a fun walk my sister and I took outside in the woods near our house, and in our backyard. I used mostly low-pass/high-pass filters, reverb, and compressor plug-ins to manipulate these sounds. I still do not have a MIDI keyboard, so I had to make do with my QWERTY keyboard to input my beats.

Kick: This is simply me dropping a rather large rock I found into a patch of dead grass. It was hard recording it without also picking up the brush of grass and snow, but after many, many takes, we got a good one. 

Snare: This is a combination of me hitting a solid block of snow with our shovel (the thump), and my shoes crunching in the snow (the tzzz of the wires). Also a tough one to record well and to manipulate into something resembling a snare.

Hi-Hat: This is me breaking a small dead branch. Surprisingly, the branches in our area all broke rather dully, except for this single clean break that I managed to pick up.

Crash: This is me smashing an icy piece of snow against a rock. I now admit I may have cheated my indoor/outdoor rule (only on this one though!) and layered sounds of me ripping paper and drawing with a Sharpie to make the sound more convincing.

Toms: This is me smashing an icy piece of snow against the backboard of my basketball hoop. I pitch shifted it to create a high, mid, and low tom.
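For anyone curious how one sample can become three toms: the simplest form of pitch shifting is resampling, where reading through the source faster raises the pitch (and shortens the clip), and reading slower lowers it. Here’s a rough Python sketch of that idea with plain lists and linear interpolation (just an illustration of the concept, not what Waveform actually does internally):

```python
def pitch_shift(samples, semitones):
    """Resample a mono clip so it plays back `semitones` higher (or lower).

    Raising pitch by n semitones means reading through the source
    2**(n/12) times faster, so the result is correspondingly shorter.
    Linear interpolation between neighboring samples keeps it simple.
    """
    ratio = 2 ** (semitones / 12)           # playback speed multiplier
    out_len = int(len(samples) / ratio)
    out = []
    for i in range(out_len):
        pos = i * ratio                     # fractional read position
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# One (made-up) "icy snow smash" waveform becomes three toms:
hit = [0.0, 0.9, 0.5, -0.3, -0.6, -0.2, 0.1, 0.0] * 100
high_tom = pitch_shift(hit, +4)   # shorter and higher
mid_tom  = pitch_shift(hit, 0)    # unchanged
low_tom  = pitch_shift(hit, -5)   # longer and deeper
```

The tradeoff (audible in real DAWs too, unless you use fancier time-stretching) is that pitching down also lengthens the hit, which actually helps a low tom sound bigger.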

 

Indoors Drum Sampler

Again – a lot of low pass/high pass filtering, reverbing, delaying, chorusing, and compressing here.

Kick: This is me hitting a large candle against our carpet floor. It achieved the deep, muffled sound I was looking for.

Snare 1: This is me clicking a small wooden jewelry box from my mother against our marble kitchen counter.

Snare 2: This is a combination of me slamming our metal kettle against the kitchen counter (for the metallic, icy sound I wanted) and me slamming my toilet seat shut (to beef up the actual hit of the snare).

 

Hi-Hat: This is a combination of me clicking two marbles together (for the main sound) and me flipping the light switch on my lamp (for more depth and fullness). In general (for both outdoors and indoors), it was difficult to control the sound of the hi-hat, as it often cut through everything and stood out like a sore thumb. Eventually, with some slight panning (as hi-hats often get) and added plug-ins on the original sample, I was able to remedy this issue.

Pedal Hi-Hat/Brush: I started out envisioning this sound like a brush, but eventually it turned into something resembling a pedal hi-hat. This was me spraying our bottle of alcohol spray. 

 

Pianos

I recorded piano parts for both the outdoors and indoors sections. I had to do a lot of experimenting and adjusting in terms of where to place my phone to record, and at what angle, to get just the effect I wanted. The outdoors one isn’t that interesting; I just applied a lot of reverb, chorus, and some phaser to give the illusion of the piano being played outside (combined with my outdoors ambience). The indoors “piano” is a little more interesting, as I mimicked the sound of a synth using heavy low-pass filtering and chorusing, and by reversing each clip of the chords I played. I had to use a LOT of automation on this, as the nature of reversing the chords and cutting and splicing clips created a lot of pops, which I had to conceal by manipulating the volume. What a pain. Someone please tell me how to copy automation points.

 

Guitar/Ukulele

Not the most interesting part, given that I can’t play guitar and had to break my self-imposed rules again and dig into the Smooth R&B Guitar sample library. I used pitch-shifted snippets and spliced pieces of two main loops, putting in high-pass filters to make the guitar sound more tinny and distant when it first comes in, as well as reverb later on at the end for a Western-y effect. As for the random ukulele outro that I added on a whim a day before this project was due, that was actually recorded by my sister. But she’s also not the greatest at it, so we recorded each individual chord and note on its own while looking up chords online, and I pieced it together from there (along with high-pass filtering and reverbing to make it sound more distant and outro-y).

My sister in our makeshift studio, complete with online chord diagrams

Bass

Both the indoors and outdoors bass were just me playing on the lower register of the piano. I really wanted to find a natural sound that could mimic it (I tried the hum of my microwave, my printer, my electric toothbrush), but nothing sounded right. So I gave up and just ran my piano through a lot of plug-ins, notably a low-pass filter and this cool distortion plug-in called Bass Drive, to make it sound convincing. Using the piano did make it easier for me to play more complex patterns and basslines in one take rather than having to piece together every note one by one with an everyday sound. This track was also one of two instances (the other being the ukulele tracks) of me using a bus rather effectively! Yay!

Ambience

I used four main ambient recordings of my own. A general recording of the outdoors (leaves rustling, birds chirping, all that) creates an outdoors atmosphere for the beginning and also transitions into the ending. A recording of my mother’s soup boiling creates a warm, home-y feeling for the start of the middle indoors section. And a gong-like effect, created by combining me tapping two wine glasses together and ringing my sister’s singing bowl (which is usually used for meditation), signals the transition from the outdoors intro into the middle, quiet indoors section.

Lead Fake Out-of-Tune Trumpet

Now this is my favorite. I struggled with the right instrument/sound to play my lead melody for the longest time. I really did try everything, including my electric toothbrush again, my sister making a farting noise with her hand, door hinges creaking, assortments of bleeps and bloops from machines around the house, but nothing seemed to fit, be capable of producing a melody, or resemble anything like an instrument. So I gave up and just sang it, then heavily distorted and filtered it, among other plug-ins, to make it somewhat resemble an out-of-tune trumpet. I really did try to sing in tune, and tried to use pitch shifting to correct some of it, but I eventually just decided to let it be in all of its imperfect glory. Something about it being slightly off rhythm or out of key at times is perfect for me in this song, especially given that everything around it is rather polished and in key. There’s a metaphor here that I won’t try too hard to force, about me and my place in this world being just like this out-of-tune fake trumpet, yada yada. I’ll leave it up to you.

 

Mixing and Mastering/Final Product

 

I spent a LOT of time on the fine details of this track compared to my previous projects, especially with mixing, which would make it even more embarrassing if the mixing were bad. I paid attention to mixing throughout the process rather than doing it all at once at the end, along with other detail-oriented tasks I’ve usually saved for the end: EQ’ing, panning, randomizing velocities for MIDI, getting rhythms precise (with quantizing for MIDI and specific measure values for everything else), creating automation curves, editing the samples themselves within the drum samplers, etc.

 

I also did a lot of listening through and obsessing over painstakingly small details, even after taking a day off to rest my ears, and sent drafts to friends throughout so that they could pick up on details I might’ve missed. A snare hit that may have been slightly too loud. A guitar strum that seemed to clip a little bit as it was cut off. A piano chord that seemed to be just a little bit off rhythm. An ambient sound effect that could be panned just a little more to the right. Stuff like that. I exported the entire song several times, listening each time out loud, through headphones, and through a speaker. I was VERY careful (throughout the entire process) to make sure nothing was redlining, so as to avoid clipping or distorting in the final exported versions, given that this was a problem with my last project even though I paid a lot of attention to mixing and mastering then, too. I also loaded the export into Audacity just to make sure the amplitudes were in check, and added a compressor/limiter to my master track to be safe.

An example of helpful suggestions from friends on details I might’ve missed!
Double checking for clipping on Audacity
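For the curious, the clipping check basically amounts to scanning the exported audio for samples pinned at (or right at) full scale. Here’s a stdlib-only Python sketch of the idea on a list of normalized floats (my own illustration of the concept, not what Audacity actually runs, and the 0.999 threshold is an arbitrary choice):

```python
def find_clipping(samples, threshold=0.999):
    """Return the indices of samples at or beyond the clipping threshold.

    `samples` are floats normalized to [-1.0, 1.0]; anything whose
    magnitude reaches `threshold` would be flattened on export.
    """
    return [i for i, s in enumerate(samples) if abs(s) >= threshold]

import math

# A 440 Hz tone at a safe level that redlines briefly in the middle:
tone = [0.7 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(1000)]
tone[500:503] = [1.0, -1.0, 1.0]          # simulated redlining

clipped = find_clipping(tone)             # only the three bad samples
peak = max(abs(s) for s in tone)          # overall peak amplitude
```

In practice a limiter on the master bus (like the one mentioned above) is the safety net that keeps `peak` under full scale in the first place.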

Conclusion

 

Generally, I am satisfied with how it all turned out. I spent a lot of time on this, and I think I’ve come up with a piece of work that has meaning to me, and is pretty unique in terms of its concept and sound. It’s really made me appreciate just how much can be achieved without having much access to real instruments like drums, and how there is music in many everyday objects in our lives, waiting to be unlocked and found (how poetic). I had a few goals from my last Waveform project for this final, and I think I achieved most of them:

1) import sampled sounds to create my own drum sampler with more unique sounds that I have more control over (and can actually apply plugins to) – check (very glad I was able to achieve this as it was so cool and fun)

2) employ buses more, just for convenience’s sake – check (somewhat? But I also didn’t need them as much)

3) figure out the balance between the lead and beat mixing-wise more, as I feel like I may have overcompensated a little bit with this project (as last project, I had problems with the beat overpowering my lead), and struggled with the balance this time again between the guitar and beat – check (I definitely had a far better sense of mixing, and actually went back to edit my samples within the drum samplers to achieve this balance)

4) experiment with more changes and variety in chord progression, so it isn’t just the same progression over and over again (this would require me to rely less on samples) – check (definitely! Compositionally this piece was far more all over the place and complex, in a good way I hope….)

 

Finally, it’s been such a good time learning along with all of you this semester, and hearing all of your amazing work. It’s so cool to be able to get to know people who are passionate and talented in so many different ways, and I’ve been so inspired. Thank you also (especially) to Professor Petersen for being a fun, cool, and understanding professor through these unpReCEdENteD TimES. Hopefully we all stay in touch and maybe can see each other in an actual studio/classroom at some point.

 

Last Weekend: My Second Struggle with Waveform and (First) Struggle with MIDI

Brainstorming/Samples/MIDI

 

To start out, I had a general mood/vibe in mind: something more upbeat and hard-hitting than my last, more mellow piece. I began the same way I did last time, sifting through SSLIB samples and loops. As much as I tried to branch out and try different genres and categories of samples (including dubstep), I still gravitated back towards my favorite R&B-realm guitars. I guess some things never really change.

 

After marking up all the samples and choosing a prominent guitar sample I wanted to base the entire track around, I started to look at what MIDI instruments would fit well with the overall vibe. I started with the beat, and used the 808s drum sampler (except for the snare, for which I used the better-sounding 909s) to construct a beat that fit the general rhythms and cadences of the guitar loop. At first, it took a LONG time to figure out how to use the QWERTY keyboard, as I don’t have an actual keyboard, but once I did, it was pretty easy to record in a deconstructed way. I then ran into another challenge when I didn’t realize that plug-ins had no effect on the sampler and its pre-determined sounds, which made for a very confusing hour as I struggled and began to question my own hearing. I then went through the Subtractive and 4OSC libraries and selected my favorite sounds to fill out the rest of the track. Eventually, I ended up with one Subtractive bass, one Subtractive ambient pad, and one 4OSC synth-y bass that fit well with the beat and the lead guitar.

My practically illegible notes with samples and structure

With my main sounds in mind, I now began to map out a plan/structure for my song. Whereas in my last track I built into a climax and gradually stripped everything down, for this track I wanted to start with high energy and intensity, then strip it down into a “distorted,” “underwater” section, shifted down in both tempo and pitch with a low-pass filter on top, and then end with a punch: a high-energy section that brought together elements from both the beginning and the middle.

My overall project!

Recording Sound

 

This was, as usual, my favorite step of the process. Again, I had no professional microphones or studio setup, so I still used my phone and the Voice Record Pro app. This time, I ventured out of my room to my entryway hallway, and even my bathroom! While this opened up the possibility of more sounds, it also led to a greater chance of background noise, random interruptions, and awkward interactions with people passing by as I knelt on the ground, holding my microphone to the entryway door hinge.

 

This time, I went in with some sounds in mind. I had two percussive sounds I thought would add texture to the beat I had already constructed: tapping a paper cup against a table to simulate a wood block, and shaking an ibuprofen bottle (a bottle that seems to come up a lot in these projects) to simulate a shaker. I also had more atmospheric ones in mind for the slower, “underwater” section, including literal water, or more specifically the spraying of my shower head. I also recorded some vocal licks that I planned on pitching and filtering to create cool effects (those were a late-night addition, so I had to record in our computer lab, which, while isolated and not disruptive to anyone else, had background noise from the air conditioner).

 

There were some unplanned sounds that I eventually grouped together to form transitions, or brief breaks in the beat: namely the sound of my entryway door hinge squeaking open and shut, and then slamming against the door frame, and also the sound of me popping open my suitemate’s San Pellegrino, pouring it out, and letting it fizz. To add to the “drink” sequence specifically, I later found a champagne pop sound on an online sound database, as, unfortunately, we do not have a champagne bottle in our suite.

 

Back by popular demand, here are the original, untouched recordings I made, along with some helpful video reenactments (there are some that I did not want to do twice just for video, namely opening the soda).

The soda that I popped open, poured and fizzed.

Waveform: MIDI, Plug-Ins, Automation

 

As usual, I had to pitch shift and tempo shift my recorded sounds to make them fit together. But more daunting was editing my MIDI recordings. I eventually did figure out how to quantize, and in some cases, manually adjusted each note to be rhythmically consistent. I also had to adjust velocities (especially in the synth MIDI instruments) to make them sound more natural and less plodding.
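Conceptually, those two MIDI chores pull in opposite directions: quantizing snaps timing toward a grid, while velocity adjustment adds controlled randomness back in. A tiny Python sketch of both ideas (my own illustration of the general technique, not Waveform’s actual quantize; the grid size and spread values are arbitrary):

```python
import random

def quantize(start_beats, grid=0.25):
    """Snap a note's start time (in beats) to the nearest grid line.
    grid=0.25 means a sixteenth-note grid in 4/4."""
    return round(start_beats / grid) * grid

def humanize_velocity(velocity, spread=8, rng=random):
    """Nudge a MIDI velocity by up to +/-spread, clamped to the valid
    1..127 range, so repeated hits don't sound machine-perfect."""
    return max(1, min(127, velocity + rng.randint(-spread, spread)))

# Sloppy QWERTY-keyboard hits snapped onto the sixteenth-note grid:
recorded = [0.02, 0.27, 0.49, 0.77, 1.01]
tight = [quantize(t) for t in recorded]   # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Quantize first, then humanize velocities (not timings), and you get the "tight but not robotic" feel described above.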

 

With my samples and MIDI clips ready, it was time to apply plug-ins and automation to bring them together. I stuck with the usual plug-ins: HP/LP (I used low-pass EXTENSIVELY in the middle, “underwater” section), chorus, reverb, delay, compressor, and EQ. As for automation, I created tracks/curves for volume, pan, pitch shift, and a lot with the low-pass filter, especially when transitioning between the normal/high-energy and “underwater” sections. I also learned how to manipulate the tempo curve for a ritardando (and its opposite, an accelerando) effect to transition between the high-energy and “underwater” sections. Again, I’ll outline the piece by its main building blocks.

The extremely helpful tempo curve at the top that created a ritardando effect
Hard to see, but the automation curve for the low-pass filter during the transition from the “underwater” back to the high energy section
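What a tempo curve is doing under the hood is pretty neat: each sliver of a beat lasts 60/bpm seconds at whatever tempo is in effect right then, so the curve effectively integrates "seconds per beat" over the ramp. A numeric Python sketch of a linear ritardando (my own illustration, not Waveform’s internals; the step resolution is arbitrary):

```python
def beat_times(bpm_start, bpm_end, num_beats, steps_per_beat=100):
    """Wall-clock time (seconds) of each beat boundary while tempo
    ramps linearly from bpm_start to bpm_end over num_beats.

    Each small slice of a beat lasts 60/bpm seconds at the tempo in
    effect right then; summing the slices numerically gives the curve.
    """
    times = [0.0]
    t = 0.0
    total_steps = num_beats * steps_per_beat
    for step in range(total_steps):
        frac = step / total_steps                  # progress through the ramp
        bpm = bpm_start + (bpm_end - bpm_start) * frac
        t += (60.0 / bpm) / steps_per_beat         # duration of this slice
        if (step + 1) % steps_per_beat == 0:
            times.append(t)
    return times

steady = beat_times(120, 120, 4)   # constant tempo: 0.5 s per beat
ritard = beat_times(120, 60, 4)    # slowing down: each beat gets longer
```

This is why a ritardando into the "underwater" section feels like time stretching out: the later beats literally occupy more seconds than the earlier ones.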

Sound Effects

As I mentioned, these were mostly for transitions this time around. I used them sparingly, but when they did show up, some of them were, at times, the only track playing. At the beginning, since I didn’t want the high-energy beat and guitar just to drone on, I added “breaks” every so often to ease the tension and to add some spontaneity to the piece. Essentially, I took out most of the tracks and left a bar blank for a sound effect, whether it be door hinges creaking and then slamming closed, or a drink popping open, pouring, fizzing, then popping again (the sequence doesn’t make sense). I added reverb, chorus, compression, and (as for every track, so I won’t mention it any more beyond this) EQ plug-ins to add more dimension to the rather flat iPhone recordings. I especially had to work on the door slam and champagne pop having that, well, pop, to really punctuate the break in the beat. Besides these transitional sounds, I also had a shower/water sound effect constantly looping (fittingly) throughout the “underwater” section, to which I applied a low-pass filter, of course, and reverb to give it more weight.

The sound effect transition/breaks

Beat

As I said before, I assembled the main beat from its deconstructed parts (snare, kick, tom, and hi-hats), and found out the hard way that you cannot add plug-ins to a drum sampler. Nevertheless, I made the adjustments I could, whether it be cutting off the snare sample in the 909s sampler (the 808s snare just sounded so…flavorless) to make it more crisp, or panning the hi-hats a little to the right to create space in the center for the kick and snare. And as I said before, I added percussive recorded samples to provide texture to the sampler and give it a more human feel (which even changing velocities couldn’t achieve): the cup pitched up, compressed, chorused, and low-pass filtered to mimic a wood block, and the ibuprofen bottle compressed, filtered, and slightly tempo-shifted to mimic a shaker.

The main drum sampler beat

For the “underwater” portion, I used a sampled beat that had roughly similar cadences to the beat I had constructed myself. I did this partially out of laziness, but mainly because the texture of the sampled beat was far more suitable for the “murkiness” of the section than the sampler I had access to. As with all the other parts of the “underwater” section, I used a low-pass filter ESPECIALLY heavily on the beat to really drive home the murkiness. I experimented with dynamics filters, but none were noticeable under such a heavy low-pass filter, so I saw no point in adding one just for the sake of it.
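Since the low-pass filter does so much heavy lifting in this section, here’s what it boils down to: each output sample leans toward the previous output, so fast wiggles (highs) get smoothed away while slow movement (lows) passes through, which is exactly the murky, underwater feel. A minimal one-pole sketch in Python (an illustration of the simplest possible low-pass, far cruder than any DAW plug-in; the 300 Hz cutoff is an arbitrary choice):

```python
import math

def low_pass(samples, cutoff_hz, sample_rate=44100):
    """Minimal one-pole low-pass filter on a mono float signal."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)                 # 0..1; higher = brighter
    out = []
    y = 0.0
    for x in samples:
        y = y + alpha * (x - y)            # lean toward the input
        out.append(y)
    return out

# A bright 8 kHz tone gets crushed far more than a deep 100 Hz tone:
sr = 44100
high = [math.sin(2 * math.pi * 8000 * n / sr) for n in range(sr // 10)]
low  = [math.sin(2 * math.pi * 100  * n / sr) for n in range(sr // 10)]
murky_high = low_pass(high, cutoff_hz=300)   # nearly silenced
murky_low  = low_pass(low,  cutoff_hz=300)   # mostly untouched
```

Automating the cutoff (as described above for the transitions) is then just sweeping `cutoff_hz` over time.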

 

Lead/Main Accompaniment

The constant undercurrent throughout the entire track is a punchy, high-energy guitar loop. I applied chorus and compressor throughout to give it that extra punch and kick. Midway through, in the “underwater” section, it undergoes a complete transformation as I apply a heavy low-pass filter (just as with the beat).

 

The guitar track is accompanied, in the first high-energy section, only by a MIDI bass track, to which I applied a chorus plug-in to give it more depth and airiness. But in the second, “underwater” section, I needed more tracks to fill in the empty space created by the slower tempo and low-pass filter. I did so with an atmospheric, ocean-y (the water theme continues!) pad that oscillates and rises and falls, accentuated by a delay plug-in. I also added a 4OSC synth bass for more texture, which I had to apply a heavy low-pass filter to so it was not too overpowering. When I transitioned back into high energy and reintroduced these two tracks from the “underwater” section in a new context, I had to change the Subtractive pad to be beefier and hold its own amongst everything else going on.

 

Miscellaneous Accompaniment

There’s really only one random element that pops up once in a while, and that’s my vocal tracks. There is one quick lick that serves the same transitional purpose as the door slamming shut and the bottle pop, during a break in the beat in the first high-energy section (this one I left intentionally a little out of tune and off beat, just for the ~quirkiness~). Then there is another, more repeated vocal lick (a two-part harmony in fifths) that I pitched up significantly and scattered throughout to fill in some empty spaces. To both vocal clips, I applied a low-pass filter, reverb, chorus, and EQ plug-ins to reduce background noise and turn something so “human” into a more mysterious type of sound. I fed both vocal tracks into a main vocal effects bus, so I didn’t have to go through and add/change these plug-ins for every single clip and transition. I had to do some figuring out (with help from Iris in class) to learn to actually use a bus, but now that I know, I will definitely be using it far more in the future, as it would’ve saved me a lot of time in this project had I applied it earlier as well.

 

Mixing and Mastering/Final Product

 

I paid far more attention to the mixing component of this track than I did on my previous piece, partly because we’ve learned more, but also because there are far more tracks and transitions that all need to be balanced and reined in. I found that EQ’ing as I went through each track meant I didn’t have to do as much work from a broader standpoint at the end. I also found that going into the “mixer” layout/view in Waveform really helped me visualize and easily make adjustments as I listened through. This did backfire: when I tried to switch back to the normal view, everything had shifted, the tracks were so skinny that I couldn’t click on them, the side plug-in tab was no longer there, and so on. A whole mess.

The wonderful mixer view before everything got messed up

But anyway, this time I tried listening through with headphones, without headphones, and through a Bluetooth (yuck) speaker, to account for different listening experiences. I also listened at different volumes to hear the different tracks in action. I then exported the track and dragged it into a new project to make adjustments as a whole (which, honestly, besides some light EQ’ing, were not that much at all). I also found that the audio that plays within the project can be vastly different from the audio that comes out after exporting. For example, the pill track, which was prominent in the project, became way duller and quieter after exporting, which I had to re-export several times to try to fix (it still, for some reason, isn’t as loud as I want it to be!). Another example is in the “underwater” section: as I automate the low-pass filter, the murky beat starts blasting at the very end, right as it is about to transition back into the high-energy section, slightly blowing out my headphones. That was something I did not hear at all in the project and, again, had to re-export to fix (and still could not get rid of entirely).

 

But overall, I’m generally satisfied with how it all turned out. There were a lot of moving parts and structural sections, and I’m proud of how I was able to, for the most part, pull it all together. If I had more time, I would have made a few more adjustments, both to the track and my overall process:

1) import sampled sounds to create my own drum sampler with more unique sounds that I have more control over (and can actually apply plugins to)

2) employ buses more, just for convenience’s sake

3) flesh out my ending, especially the transition back into the high-energy section more, so it doesn’t end as suddenly (although I keep on going back and forth on whether I like the sudden end – I do feel like it has its pros and cons)

4) figure out the balance between the lead and beat mixing-wise more, as I feel like I may have overcompensated a little bit with this project (as last project, I had problems with the beat overpowering my lead), and struggled with the balance this time again between the guitar and beat

5) experiment with more changes and variety in chord progression, so it isn’t just the same progression over and over again (this would require me to rely less on samples)

 

Here is my final track, which I’ve named “Last Weekend,” because, listening through, I think I may have inadvertently mapped out the progression of a typical Friday night on a college campus in the song:

Iridescence: My First Go at Waveform

Brainstorming/Samples

 

To start out, I spent a few hours searching through the SSLIB for samples. Essentially, I just listened through every single one, marking in a notebook which samples sounded nice. My favorites were in the R&B realm, with the keys, guitars, and beats from that section really catching my ear. Some others, not so much for my taste (basically anything dubstep or deep electronic bass-y house).

 

Then I went through all the samples I had marked and listened to them again, beginning to make links: which ones I could see going together, which ones had similar chord progressions, which ones complemented each other in terms of timbre or overall vibe, and so on. I eventually narrowed all the samples down to my favorite ten: two keyboard samples, two guitar samples, two beats, two ambient/atmospheric sounds, one bass track, and one nice synth-y accompaniment. Soon, I began to map out a plan for my song (specifically, two possibilities of mood/structure), including a rough outline of distinct sections and the instrumentation/dynamics of each. I eventually settled on a concept of gradually layering more and more tracks and elements to slowly build up into a climax, and then stripping everything back down again.

 

Recording Sound

 

I didn’t really have anything in mind going into this process. I don’t have access to any instruments, so I just looked around my dorm room for anything that caught my eye, or anything I thought would have an interesting sound. I also have no professional microphones, so I just used my phone and the Voice Record Pro app, using the techniques I had investigated for Homework 2, along with everything I had learned from Huber and class lectures. I recorded in my room, because where else would I go. Unfortunately, I’m on the first floor on a busy street, and my suite is always full of people coming in and out, but I did my best to make the recordings as clean as possible.

 

I eventually found five sounds that I thought would be unique and offer a nice variety: dropping ibuprofen pills on a handheld mirror, opening a plastic water bottle, pouring water from my suitemate’s Brita filter into a metal water bottle, pulling tissues out of my tissue box, and jingling my keys on my keychain. Later, encouraged by my suitemate, who said that I had a nice voice in the shower (which I don’t think is true, but whatever), I also recorded myself singing a potential vocal accompaniment to the tracks that I had already laid out in Waveform.

 

Here are the original, untouched recordings I made, along with some helpful video reenactments of each.

Bottlecap Video

 

Waveform: Plug-Ins and Automation

 

I first imported my samples and recordings into Waveform, splicing them and matching some of them by tempo and key. There were some difficulties in making them match, so I did have to go back and search for new samples that would fit better. I also went through the great challenge of getting Waveform to play sound through my AirPods (since I have no normal headphones, thanks Apple). Whenever I selected my AirPods as the output source, the sound would come through incredibly muffled and low quality. Eventually, I gave up and stole one of my friend’s headphones.

 

With my samples and recorded sounds imported, it was time to apply plug-ins and automation to bring them together. The six plug-ins I mainly used were: HP/LP (mainly using low pass, though it took a while for me to figure out how to toggle between HP and LP), chorus, reverb, delay, compressor, and phaser. And as for automation, I experimented mostly with volume, pan, and a little bit with reverb. I’m not sure what the best way to go through my process is, but I figured I would outline it by the main building blocks.

My overall project

Sound Effects

These first show up in the introduction, which consists of three parts: an atmospheric drone sample, the water pouring recording, and the keychain recording. I layered a reversed version of the drone over the original, creating a nice symmetrical intro. Then, I layered in the slowed-down keychains (with reverb and a low-pass filter to clean up the sound) and the water effect (with a low-pass filter to make it sound cleaner, plus a phaser and chorus to add more texture), all to create a nice, sensory build-up to the main track. I used automation to have the volume gradually build, and to pan the drone from one ear to the other, adding to the symmetrical effect. The keys also come back at several points throughout the song (although a little sped up and pitched up) to add ~flavor~. The atmospheric drone was also useful at a few other points for building into a smooth transition.

The intro sound effects
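A side note on the pan automation: a DAW’s pan control ultimately maps a position to a pair of left/right gains. Here’s a minimal sketch of an equal-power pan law, purely my own illustration and not Waveform’s actual implementation:

```python
import math

def equal_power_pan(position: float) -> tuple[float, float]:
    """Left/right gains for a pan position in [-1, 1] (-1 = hard left).
    The equal-power curve keeps perceived loudness roughly constant."""
    angle = (position + 1) / 2 * math.pi / 2
    return math.cos(angle), math.sin(angle)

# Sweeping position from -1 to 1 over the drone's length drifts it
# from ear to ear without a loudness dip in the middle:
for pos in (-1.0, 0.0, 1.0):
    left, right = equal_power_pan(pos)
    print(f"pan {pos:+.1f}: L={left:.3f} R={right:.3f}")
```

A naive linear crossfade would sound quieter at the center; the cosine/sine curve avoids that, which is why most DAWs default to something like it.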

Beat

For the first part of the song, I assembled a “beat” out of several of my recorded sounds, because my sampled beat (actually a combination of two different beats) was too aggressive and intrusive for the calmer beginning. So I used my bottlecap sounds as hi-hats, then combined my tissue and Ibuprofen pill noises (sped up significantly) into a ~crunchy~ snare. I added a compressor on each sound to make them crisper. Rhythm-wise, I tried to match the general cadences and hits of the sampled beats. (Side note: there must be an easier way to mass-select clips to copy and paste, right? For this makeshift beat, I had to select individual clips over and over to copy and paste, since I wasn’t sure how many more loops I was going to be doing. Let me know please so I don’t have to suffer in the future.)

The beat overall (in red and yellow)
The beat up close (in red and yellow)

For the second, more “climax” part of the song, I layered my two sample beats on top of my makeshift beat from the first part. Since I had matched the cadences earlier, the three fit together very well. I slowly increased reverb with automation in the latter part of the climax, just to ramp up the energy a little more and to open up the track.

 

Lead/Main Accompaniment

The constant undercurrent throughout the entire track is a vibey keyboard loop. It sets the tone for everything else, including the lead guitar loop, which I had to pitch down and slow down to match the keyboard. The guitar loop did NOT match the overall vibe at first: even though I liked the melody and it fit the chord progression, it was very dry and bland. To fix this, I applied reverb and chorus to add more texture to the melody, and a low-pass filter to distinguish it as the lead from the accompaniment. And under it all, I put a simple bass loop that fit well with the lead.

 

Miscellaneous Accompaniment

Besides the recorded sound effects sprinkled throughout, I also needed to beef up the accompaniment in the climax to add more dimension to the backing track. To do so, I added a nice synth-y keyboard loop that fit well but didn’t overpower the other tracks. I wanted to add a synth pad for atmosphere, but couldn’t find one that fit the chord progression and overall vibe. This was also the perfect place to insert the simple vocal accompaniment I had recorded. I created five layers, one at the original pitch and four an octave up (I tried to create harmonies with thirds and fifths, but my intervals were off and I didn’t have time to re-record), and applied the chorus, delay, reverb, and HP/LP filters to create a sound that still sounded human, but ethereal and distant. I loved the vocals so much that when I stripped the climax down, removing the beats, then the bass, then the keys, I left only the guitar lick and the vocals, which I think provided a powerful contrast to everything that had come before.

My layered vocal accompaniment
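For the curious, the octave, third, and fifth layers I was going for correspond to simple equal-tempered pitch shifts. A quick sketch of the math (illustrative only; a DAW pitch-shifter just takes the semitone amounts directly):

```python
def semitones_to_ratio(semitones: float) -> float:
    """Frequency ratio for an equal-tempered shift of the given semitones."""
    return 2 ** (semitones / 12)

# The layers I was attempting (semitone offsets from the original take):
layers = {
    "original": 0,
    "octave up": 12,
    "octave + major third": 12 + 4,
    "octave + perfect fifth": 12 + 7,
}

for name, st in layers.items():
    print(f"{name}: +{st} st, ratio {semitones_to_ratio(st):.3f}")
```

An octave is an exact 2:1 ratio; the tempered third and fifth land close to the just ratios of 5:4 and 3:2 an octave up, which is why those stacks sound consonant when the intervals are actually in tune.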

Final Product

 

Overall, I’m really happy with the way that it turned out! If I had more time, I would’ve worked on mixing a little more, just tweaking it to make sure everything is at a good balance volume-wise. I would’ve also worked on expanding upon the climax to do something more radical with the outro. But all in all, given that this was my first try at creating music with a DAW of any kind, I’m pretty satisfied, and can’t wait to hear everyone else’s work as well!

 

Here is my final track, which I’ve named “Iridescence,” because I think it’s a cool word:

Experimenting with Recording on the iPhone 7


 

Suffice it to say, I haven’t gotten a new phone in almost four years, and the phone I do have has been battered and bruised in all sorts of ways. So I was a little skeptical at first about its recording capabilities. But after some digging through online forums and archives (as Apple, annoyingly, removes the specs for any products it no longer sells), I was pleasantly surprised by the microphone specs of the iPhone 7.

 

The iPhone 7 has four microphones in total: two on the bottom, on either side of the Lightning port; one on the back, next to the rear camera; and one inside the built-in earpiece. The four mics work together for noise cancellation and beamforming (locating the source of a sound), and all can be used to record, with iOS automatically selecting the most appropriate microphone for the task (such as the rear mic when shooting video). The microphones can record in stereo, and with the default Apple apps, they are limited to 16-bit recording at a maximum sample rate of 48 kHz.

A diagram of the bottom microphones for the iPhone 6S, which closely resemble those on the iPhone 7
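Those numbers pin down the format’s ceiling pretty concretely. Here’s a quick back-of-the-envelope on what 16-bit/48 kHz works out to for uncompressed PCM (stereo is assumed here just for illustration):

```python
# Back-of-the-envelope data rate for the iPhone 7's recording ceiling:
# 16-bit samples at 48 kHz (stereo assumed for this illustration).
BITS_PER_SAMPLE = 16
SAMPLE_RATE_HZ = 48_000
CHANNELS = 2

bits_per_second = BITS_PER_SAMPLE * SAMPLE_RATE_HZ * CHANNELS
bytes_per_minute = bits_per_second // 8 * 60

print(bits_per_second)         # 1,536,000 bits/s of raw PCM
print(bytes_per_minute / 1e6)  # ~11.5 MB per minute of uncompressed WAV
```

Roughly 11.5 MB per stereo minute explains why lossless recording apps fill up phone storage so much faster than Voice Memos’ default compressed format.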

 

Researching Recording Apps

 

Finding a suitable recording app was definitely challenging, as the only app I’ve ever used to record is Voice Memos. Apparently, Voice Memos can be set to produce lossless audio, but not at the maximum sample rate, not in WAV/AIFF format, and with no AGC control. Thus, I turned to Google, eventually finding two candidate apps on a birdwatching forum, of all places. I definitely trusted these recommendations, as the audio quality and precision required to record bird sounds in the wild are far higher than what’s needed to record sine waves in my bedroom.

RODE Rec LE’s horrible interface and reviews

The two apps recommended throughout the forums were RØDE Rec LE and Voice Record Pro, both of which can produce lossless WAV files at 48 kHz/16-bit and have options to disable auto gain control. After perusing the App Store, I found that RØDE Rec LE had absolutely abysmal reviews, mostly because of its disgustingly archaic interface and lack of flexibility in export options, so I quickly turned to Voice Record Pro. This turned out to be the right choice, as Voice Record Pro was clean, easy to use, and extremely flexible in customizing the presets for this project (pictured below).

Voice Record Pro and my presets

Data Collection and Observations

 

In Audacity, I first generated 15 seconds of white noise at a constant amplitude of 0.8, then a 15-second sine sweep/chirp at a constant amplitude of 0.5, ranging from 100 Hz to 18 kHz. Then, using Voice Record Pro with my pre-configured settings, I recorded both sounds at maximum volume, holding the microphones on the bottom of my phone toward my computer speakers from about 6 inches away.

The original white noise generated by Audacity
The original sine wave generated by Audacity
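For anyone without Audacity handy, the same two test signals can be generated in a few lines of Python. This is a sketch of the setup as I understand it, not Audacity’s exact generator (I’m assuming a linear chirp here):

```python
import math
import random
import struct
import wave

SR = 48_000   # sample rate, matching the recording settings
DUR = 15      # seconds, as in the Audacity setup

# White noise at a constant amplitude of 0.8
noise = [random.uniform(-0.8, 0.8) for _ in range(SR * DUR)]

# Linear sine sweep (chirp) from 100 Hz to 18 kHz at amplitude 0.5.
# Phase of a linear chirp: 2*pi*(f0*t + (f1 - f0)/(2*DUR)*t^2),
# i.e. the integral of the instantaneous frequency f0 + (f1-f0)*t/DUR.
f0, f1 = 100.0, 18_000.0
sweep = []
for n in range(SR * DUR):
    t = n / SR
    phase = 2 * math.pi * (f0 * t + (f1 - f0) / (2 * DUR) * t * t)
    sweep.append(0.5 * math.sin(phase))

def write_wav(path, samples):
    """Write a mono 16-bit PCM WAV at SR."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 2 bytes = 16-bit
        w.setframerate(SR)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

write_wav("white_noise.wav", noise)
write_wav("sine_sweep.wav", sweep)
```

Keeping the amplitudes below 1.0 (0.8 and 0.5) leaves headroom so neither the file nor the playback chain clips, which matters when the point is to compare recorded spectra against a known source.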

I recorded in my dorm room, which was the only feasible location I had, as recording sine waves and white noise in the library probably would’ve been a little disruptive. With my door closed, background noise was generally at a minimum, except for when my new suitemate decided to move into his room right in the middle of recording. The playback of the audio was also decent, even through my beaten-up, five-year-old MacBook Air speakers.

 

I also varied two other factors to see their effects on the resulting spectra. First, I varied the volume of the sound played from my computer. Using the (admittedly arbitrary) bar that shows up when you adjust the volume on a MacBook, I had originally played the sound at the maximum of 16 bars. Now, I played it at 5 bars, then 10 bars. These were the resulting spectra:

The sine wave recorded at max volume
The sine wave recorded at a volume of 5
The sine wave recorded at a volume of 10
The white noise recorded at max volume
The white noise recorded at a volume of 5
The white noise recorded at a volume of 10

Then, I varied the direction that I faced the microphone at the bottom of my phone. Originally, I had faced it directly towards the computer. Now, I faced it away from the computer (towards me), then downwards at the computer keyboard.

The original recording facing towards the computer
Recording facing away from the computer
Recording facing down towards the keyboard

These were the resulting spectra:

 

The sine wave recorded facing towards the computer
The sine wave recorded facing away from the computer
The sine wave recorded facing down towards the keyboard
The white noise recorded facing towards the computer
The white noise recorded facing away from the computer
The white noise recorded facing down at the keyboard

My final observations from the data were pretty straightforward and expected. The lower the playback volume, the more erratic and uneven the curves were; the higher the volume, the more even, most likely because sound was reaching the microphone more consistently. The direction I faced the microphone didn’t make as much of a difference as I expected, but in general, pointing it straight at the speakers intuitively produced a louder and more even curve. So, probably as expected, the most consistent, “best” recording came with the microphone directly facing the speaker and the playback at maximum volume. That’s a fairly presumptuous conclusion, though, and it would take more extensive testing and more variables to really prove. In the end, even though the iPhone 7 clearly came nowhere close to replicating the original generated sounds, when used in the right manner and the right environment, it certainly can achieve a decent standard of recording quality.
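The spectra themselves (and the exported txt files below) boil down to a Fourier transform of each recording. As a sanity-check sketch of the idea, here is a bare-bones, unwindowed DFT over a plain list of mono samples; it is my own toy illustration, not what Audacity actually runs:

```python
import math

def dft_magnitudes(samples, sample_rate):
    """Naive DFT: (frequency_hz, magnitude) pairs up to Nyquist.
    O(n^2), so only practical for short clips; real analyzers use an FFT."""
    n = len(samples)
    out = []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        out.append((k * sample_rate / n, math.hypot(re, im) / n))
    return out

# Sanity check: a pure 1 kHz tone should peak in the 1 kHz bin.
SR = 8000
tone = [math.sin(2 * math.pi * 1000 * i / SR) for i in range(256)]
spectrum = dft_magnitudes(tone, SR)
peak_freq = max(spectrum, key=lambda fm: fm[1])[0]
print(round(peak_freq))  # 1000
```

Production tools add a window function (e.g. Hann) before the transform to reduce spectral leakage, which is part of why the exported curves look smooth rather than spiky.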

For the original, generated audio files and all the recorded audio files, along with the exported spectra txt files for each recording, please see this Google Drive link: https://drive.google.com/drive/folders/1PDsPzO_fW1rauO1aDrZet5JyQPlVv4Zt?usp=sharing

 

Sources

https://apple.stackexchange.com/questions/251930/wheres-iphone-7-microphone-located

https://apple.stackexchange.com/questions/276905/what-audio-quality-is-available-on-newer-iphones

http://www.tomasdabas.eu/tutorials/record-quality-voice-over-using-smartphone/

http://www.rode.com/software/roderecle

https://www.bejbej.ca/app/voicerecordpro

https://support.ebird.org/en/support/solutions/articles/48001064305-smartphone-recording-tips

https://www.macaulaylibrary.org/resources/setting-up-recording-apps/setting-up-recording-apps-for-ios-devices/

The Extraordinary Evolution of Video Game Music (and the Brilliance of Nobuo Uematsu)

Just like most kids of my generation, I grew up playing video games, whether on the PlayStation or the Nintendo DS. Strangely, one of my favorite parts of playing was always the music. Whether it was the simple, catchy earworm of the Super Mario Bros. theme or the epic, sweeping score of Halo, I’ve always been fascinated by all that video game music achieves, both technologically and compositionally. To me, that is why video game music, including groundbreaking composers like Nobuo Uematsu, has played an integral role in computer music history: not just because of all the technological innovation needed to overcome the limitations of early video game hardware and software, but also because of the compositional creativity that arose from those limitations.

 

Video game music, while perhaps most closely related to film scoring, has inherent limitations in its medium, beyond any technical restrictions. For one, the music needs to actively involve and interact with the player, and move dynamically without clear beginnings or ends. It needs to transition smoothly between themes when a player changes areas, or when the tone of the game shifts, with continuous looping often employed to this effect.

This dynamic quality first came in the form of straight-up sound effects, beginning with Pong in the early 1970s, which relied on a distinctive, onomatopoeic sound effect to punctuate gameplay. Up to that point, most video games were silent, and sound was often a forgotten element of game design. But soon, music became a subtle but powerful tool for shaping a player’s mood and overall gaming experience. This could be seen in Space Invaders, whose music gradually sped up as the aliens got closer, raising the tension and the players’ heart rates, or in Tetris, which took inspiration from a Russian folk song to create a looping theme that soon became inseparable from the game itself.

Despite this, video game music was still significantly limited by technical restrictions. Every single note had to be transcribed directly into the computer code, requiring close coordination between programmers and composers. Memory limitations meant that composers had to find ways to loop the music without making it too repetitive, often recasting the same theme in new keys and registers. Limitations in tonal range led to an avoidance of harmony, or to the use of less common intervals such as minor seconds.

 

But the true transformation of video game music from mere sound effects to, well, music came in 1985, with the Nintendo Entertainment System (NES). It included a custom sound chip equipped with 5 channels, and Japanese composers immediately seized the opportunity to define the sound of video game music forever. Koji Kondo created a simple but instantly iconic melody for Super Mario Bros., while Koichi Sugiyama experimented with more orchestral harmonies for Dragon Quest. And finally, in 1987, Nobuo Uematsu composed the music for Final Fantasy, revolutionizing video game music for decades to come.

Kondo’s music was completely melodic, with no harmony. Sugiyama sacrificed melodic movement for richer, but more stagnant harmonies. Uematsu found a way to put melody and harmony together, channeling powerful emotion and creating themes that instantly stuck in your head. His style was eclectic, drawing on everything from John Williams’ cinematic flair, to the classical undertones of Bach, to the ’70s rock stylings of Led Zeppelin and Pink Floyd, to a random smattering of jazz and even Celtic music.

And this was all under the limitations of those early 5 channels: two pulse waves for melody, a triangle wave for simple bass and percussion, white noise for more metallic percussion, and a digital sampler for other assorted effects. Uematsu’s creativity can be heard in the “Prelude” of Final Fantasy I, which was composed at the last minute to fill an additional scene. He uses audio tricks to simulate the feeling of background chords, and an ⅛-second delay in one of the channels to create the illusion of texture and multiple voices.

https://www.youtube.com/watch?v=8wZoJQABWI8
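At its core, that ⅛-second trick is just mixing a time-shifted copy of a voice back in. A toy sketch of the idea, purely my own illustration (the NES did this by offsetting one pulse channel in the score data, not in software like this):

```python
def simple_delay(samples, sample_rate, delay_s=0.125, mix=0.5):
    """Mix a copy of the signal, delayed by delay_s seconds, back into
    the original -- a cheap way to fake a second 'voice'."""
    d = int(delay_s * sample_rate)
    out = list(samples)
    for i in range(d, len(samples)):
        out[i] += mix * samples[i - d]
    return out

# A lone impulse grows a quieter echo one delay-length later:
echoed = simple_delay([1.0, 0.0, 0.0, 0.0], sample_rate=8, delay_s=0.125)
print(echoed)  # [1.0, 0.5, 0.0, 0.0]
```

At ⅛ of a second the echo is short enough to blur into the original line rather than read as a distinct repeat, which is what sells the illusion of extra texture.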

By 1991, with the release of the Super Famicom and its 8 sound channels, Uematsu took full advantage of the new hardware. With increased memory and finer volume control, he began to demonstrate more versatility in Final Fantasy IV, incorporating distinct motifs for characters and even abstract themes, and drawing on live orchestral instruments such as the harp (with a slight Celtic flair). From the elegant, delicate “Theme of Love” to the thumping, chaotic “Battle with the Four Fiends,” Uematsu had begun to build an entire world out of his music alone.

In 1994, the PlayStation arrived with CD-ROM support and a whopping 24 sound channels. With this new CD audio, along with other new technologies like 3D graphics and Full Motion Video (FMV), Uematsu released his magnum opus with Final Fantasy VII. It’s one of the most famous entries in the series, with “One-Winged Angel” resembling a rock opera with its booming percussion, newfound textures in the strings and horns, and an epic chorus behind it all. It’s epic, dramatic, and outstanding as both a technological feat and a work of art, even outside of any video game.

https://www.youtube.com/watch?v=t7wJ8pE2qKU

But even with all of this pomp and fancy technology, Uematsu never lost track of what made his music compelling in the first place. Once all the drums and choirs are stripped away, at the core of his music, despite it being completely electronic, lies a sense of human emotion, best displayed in “Aerith’s Theme,” a beautiful blend of classical opera, cinematic flair, and Celtic influences. It’s sparse and deceptively simple, evoking feelings of nostalgia and wistfulness, as well as an uneasy tension between light optimism and dark foreboding.

https://www.youtube.com/watch?v=fIqKWLkm2-g

Despite having pieces performed live across the globe for enormous crowds by groups like the LA Philharmonic, Nobuo Uematsu extraordinarily achieves, time and time again, the original purpose of video game music, stemming all the way back to Pong and Space Invaders: to make players, sitting alone in their room in the middle of the night, feel something.

https://www.youtube.com/watch?v=a3LkKviuGKU

Sources:

Fritsch, Melanie. “History of Video Game Music.” Music and Game, 2012, pp. 11–40., doi:10.1007/978-3-531-18913-0_1.

McDonald, Glenn. “A History of Video Game Music.” GameSpot, 28 Mar. 2005, 4:44 PST, www.gamespot.com/articles/a-history-of-video-game-music/1100-6092391/.

O’Bannon, Ricky. “The Musical DNA of Video Game Music.” Baltimore Symphony Orchestra, 2017, www.bsomusic.org/stories/the-musical-dna-of-video-game-music/.

Seabrook, Andrea. “The Evolution of Video Game Music.” NPR All Things Considered, 13 Apr. 2008, www.npr.org/templates/story/story.php?storyId=89565567. Accessed 11 Sept. 2020.

Williamson, Jason. “The Lasting Impact of Nobuo Uematsu and the Music of Final Fantasy.” The Line of Best Fit, 6 July 2020, www.thelineofbestfit.com/features/articles/nobuo-uematsu-music-final-fantasy.