To start this new Waveform project I decided to take a little inspiration from the song Magic by Coldplay, which we listened to and analyzed in class. Specifically, I wanted to work on adding different layers of sound on top of each other to create a sort of full sound. Although I wasn't trying to build the song with the same slow build Coldplay did (because I felt like I had already kind of done that for the first project), I wanted to reach the same high point in the song that they did (encompassing a large range of frequencies).
I started my project by thinking about how I could construct multiple layers of tracks to find what I was looking for without ending up with a project that sounded disorganized and out of control. Listening to Magic, I realized that it was best to keep the tracks as simple as possible. I tried to use fairly simple synth sounds and melodies (in comparison to the exotic laser sounds I used in my previous project). I also wanted to incorporate more vocals. Although I have a terrible voice and no confidence in my singing ability, it is luckily very easy to find loops of vocals.
Loops and Samples
I started off this project by looking for different loops and samples online on Looperman.com. I found an artist who made two different vocal loops that were part of the same project of his, and I thought they went very well together. I added a third vocal loop from my music production software to add a new layer to the vocal group (the third vocal is a person saying "hey"). In addition, the drum and percussion sounds were mostly loops I made using a different music production software. That is where I made my hi-hat, snare, and kick loops, as well as my synth loops, etc. For the hi-hat loop I worked on varying the volume of the different hits to give it a more natural feel, and in certain parts I experimented with the volume to give a rising feel. I kept the snare loop quite simple so that it wouldn't overpower the hi-hats I wanted to keep the emphasis on.
For the kicks, I had a total of 3 different tracks (with three individual kick sounds). The first and second kick loops were exactly the same beat sequence, but played with different kick sounds. The first kick, which I liked and wanted to use, didn't pack enough of a punch in the lower frequency range (30 to 100 Hz), so I used a second kick to fill in those frequencies. The third kick I used when I wanted to fill the lower frequencies even more; it is the heaviest of the three and carried the lower-frequency sounds when I layered all 3 kicks together. It did have some distortion that I liked a lot but that didn't work in the context of my project, so I ended up getting rid of most of it. I did keep a little distortion, though.
In my project I had 5 different synths, 1 of them being a Subtractive synth and all the others being loops I made. When I created my Subtractive synth I immediately knew what other synths (from the software on my computer) I wanted to add and how I would fit them into my project. Because they worked so well, I ended up using a third synth that would lead into the "chorus" parts of my project and a fourth one as a lower, bouncy, wonky type of synth in the "verse" parts of my song.
The cooing sound in the project is a bird sound I found on Looperman.com that I decided to add because I loved it so much. I used it mostly as a pre-drop sort of sound that would lead into the "chorus".
MIDI
For my MIDI tracks I wanted to use two completely different sounds in my project to show how Subtractive could be used in a variety of ways. My first MIDI track was a Subtractive synth called Dark Drone, a synth that, when used in the higher octaves, gave a pretty mellow, easygoing feel, which I thought worked well with the hard-hitting kicks I wanted to use in my project. I used EQ to bump up the range of frequencies from 30 Hz to 2 kHz to emphasize the droning of the synth. I used a simple melody: C1 / G1 / C2 / G1 / C2 / G1 and then C1 / G1 / C2 / G1 / C2 / D#2, because it went very well with the vocal loops I found.
My second MIDI sound was a sort of sweeping sound (not sure how to describe it) called Thunder. It's as simple as it sounds: the synth sounds like thunder. I used it at the very beginning of each "chorus", kind of like an entrance to what comes next. I also used it as the outro of the song. After the buildup of the project, all instruments stop abruptly, and then the Thunder sound leads out of the song.
Plugins and Mixing
The main plugin used, as usual, is Reverb. As a wise man once said, "You can never use too much Reverb" (Prof. Scott Petersen), and I have lived by those words in this project. I have Reverb on most of my tracks (I am pretty sure all but some of the kicks); in fact, I sometimes put Reverb on twice to see if anything cool would happen (cool things did happen).
Examples of settings I used for Reverb
I had EQ on all my tracks in order to emphasize the particular frequencies I wanted, as that is where I struggled in the first project. I made sure to emphasize the higher frequencies for the vocals and the lower frequencies for the kicks. For the synths it was more complicated because I had to adjust the EQ according to what kind of synth I was working with. On some of the synths I wanted a deeper feel (30-100 Hz) and on others I wanted a more mid-to-high frequency range (700 Hz-2 kHz).
I also used the chorus plugin on a few of my tracks, as well as a few "special" plugins such as vibrato on the bird sound in my project to give it a more electronic feel.
Automation
I didn't really use much automation. For some reason I felt like my project worked without a lot of it (this might be a grave mistake). However, I did use some panning for some of my drum sounds and vocals. The drum track panning is fairly obvious; we can hear it panning from right to left, almost as if it's spinning around your head. The vocal panning, on the other hand, is less obvious. On the vocal track where we can hear someone saying "Yeah", I used slight panning that diminished over time, with the beat delay also affecting the "Yeah". I put one of the other prominent vocal tracks on the other side of the panning spectrum to create a sort of conversation feel between the vocals.
Errors and Obstacles
The main obstacle was having to deal with my project being deleted on day 3 of work. After putting together the vocals and kicks of my project, I realized that I didn’t have the Subtractive plugin in my Waveform which made me realize I had been using Waveform Free this entire time. When trying to unlock Waveform Pro I came across a few problems which ultimately led to my project being deleted and having to start again. Luckily I had my ideas and loops ready and I was able to recreate what I had lost. Unfortunately, it did take me a lot of work and time to recreate all the EQ and plugin changes that came along with the work I had previously done.
Another problem is that my final project was using a lot of CPU when it played in Waveform, and it would sometimes just freeze or lag. I realized the best way to deal with this was not to put copies of the same plugins on individual tracks, but rather to send the tracks to a Send Bus and add the plugins there. This helped a lot, and my project wouldn't freeze or lag anymore.
One of the main obstacles during this project was creating the MIDI tracks. I found it very hard to effectively use my keyboard as keys and have the notes I wanted play at the right time. It wasn't until after I had finished the project that I realized I could transfer MIDI clips from other DAWs, which would have been a lot easier and would have allowed me to use a lot more MIDI in my project.
For my final project, I was inspired to construct an E-Z music making machine. For drummers like me, music with notes can be hard. I can wrap my head around simple roman numeral chord progressions, but when I want to make music live, I find my process of mapping harmonies to actual notes or chords on the piano to be prohibitively slow. By the time I can actually figure out where to put my fingers, I’ve already lost my creative train of thought. So, instead of going through any semblance of real musical training, why not engineer a solution to make being a musician a little easier? With the seedlings of an idea, I started to sketch out some concept designs. Here are a couple (out of many):
Initial design ideas
The main interface I converged on centered around an array of buttons that would allow me to select different diatonic chords of a certain key based on their root. I'd organize the buttons by ascending scale degree—instead of note name—so I'd be able to think in roman numeral harmonies as I played. Together with some knobs and sliders to tune different parameters, as well as two screens to display different instrument choices, the current key, and the current tempo, most of the interfaces I designed were organized something like the following:
Main interface
I figured that this sort of instrument would best lend itself to being more of a groove-maker or a drum-and-bass backing track, so I started designing with that goal in mind. I also had several ideas for adding controls for some sort of lead voice (flex resistors, gyro sensors, a 12-channel capacitive sensor wired to an assortment of musical fruits), but I figured I'd save them for a later revision. I also realized I'd need a project name. "Beatboard" sounded nice to me, and I've stuck with it since.
Choosing the Technology
Now that I had a somewhat fleshed-out idea of what I wanted to make, I started thinking about how I'd want to implement it. One important detail is that I wanted the box to be completely independent: plug in a speaker or a pair of headphones and play, no computers attached. This pretty much ruled out Arduino as the main brains of the machine, as implementing sound synthesis in bare-bones C on an 8-bit micro-controller with only a few KB of program memory sounded like a nightmare. Instead, I turned to Raspberry Pi. Pis are miniature single-board Linux computers that can fit in the palm of your hand. They're cheap ($10 for the model I'm using), accessible, and fit my project specifications very well: with an assortment of GPIO (General Purpose Input/Output) pins, I'd be able to control the hardware I needed, and as a computer running a full operating system, I'd be able to run our beloved SuperCollider as a synthesis engine. I'd also been wanting to become more comfortable with Raspberry Pis for a while, and this seemed like the perfect opportunity to do so.
With the project platform decided, I started organizing a general block diagram of the different components I’d need. Here’s an early draft I drew:
Block diagram
These groups of components deserve some individual attention, so I'll go into a bit more detail about each one.
Hardware Components
The Raspberry Pi
Raspberry Pi Zero, with Quarter for size
I'd decided that I wanted to use a Raspberry Pi for this project, but that still left me with the decision of which Raspberry Pi to choose. Just like Arduinos, Pis come in a variety of sizes and capabilities. Since I wanted this project to eventually fit in a small handheld box, I went with the Raspberry Pi Zero. It's quite tiny, and although it lacks a bit in processing power compared to its older brothers (more on that later), it's still equipped with the same 40-pin GPIO as the larger models. You can get a Pi Zero for as little as $5, but I shelled out an extra $5 to get the version with WiFi built in—which came in handy as I was rapidly beaming it code updates from my laptop.
While Raspberry Pis are surrounded by a wonderful community of makers supplying countless documentation pages, tutorials, and help forums, I still had some trouble with some of the initial setup. After flashing a desktop-free, lite OS to the SD card (as I’d just be running the thing headless), I spent an entire day figuring out how to configure the Pi to automatically connect to my WiFi without hooking it up to a screen and keyboard. Several hours of pain and a few hundred Chrome tabs later, I found myself on the other end with a sufficiently self-initializing computer and a far more intimate understanding of linux network device setup than I had ever really wanted. After some extra hassle enabling kernel modules for the various hardware serial buses I was going to use, I had the brains of the project prepared for the time being.
The Pi connected to all its friends. How cute!
Audio Out – The Digital to Analog Converter
Choosing the Pi Zero came with one particular caveat: it has no audio jack! This left me with several options: I could 1) extract the audio from its miniHDMI port, 2) use the Pi's PWM pins and a basic circuit to hack together an audio signal, or 3) hook up a DAC over some serial bus (DACs, or Digital-to-Analog Converters, convert a digital signal where audio is encoded as a stream of bits to an analog signal of varying voltage that can drive a speaker). Option 3 seemed like the most reliable and easiest to implement, so that's what I went with. While I could probably have hooked up an off-the-shelf USB audio adapter to the Pi's single microUSB port, that option would have involved several adapters and felt a bit clunky. Instead, I found this awesome board built specifically for Pi Zeros. It includes a quality DAC and amplifier packaged into a compact PCB that plugs straight into the Pi's GPIO header—no wiring necessary. It communicates with the Pi over I²S, a 3-wire serial protocol that carries pulse-code-modulated audio. This was perfect for my purposes, as I'd soon learn that GPIO pins were a limited resource. I'd need to keep access to the Pi's pins for the rest of the hardware, so I wouldn't be taking advantage of its convenient plug-and-play functionality, but by looking at the board's circuit diagram I was able to figure out which pins the board needed to function, and wired them up manually. Here it is, integrated into the current prototype of the machine:
I2S Audio Board
Analog In – Reading Potentiometers for Knobs and Sliders
Another important feature the Raspberry Pi lacks is an integrated Analog-to-Digital Converter (or ADC). ADCs encode voltages as digital values, and are essential for reading the types of knobs and sliders I wanted to use for the project. These knobs and sliders are all potentiometers: variable resistors that can be hooked up as a simple voltage divider, which yields a varying voltage as the potentiometer's resistance changes. Rotational potentiometers can be used as knobs, and linear potentiometers can be used as sliders. In order to read these voltages, I used a small IC called the MCP3008, which can read 8 separate channels at 10-bit accuracy (encoding a voltage from 0-3.3V as a number from 0-1023). This little chip communicates with the Pi over a serial protocol called SPI, a somewhat annoying and antiquated spec that eats up another 4 of my precious GPIO pins. The Pi includes integrated hardware for SPI, however, so communicating with the IC wouldn't be too much of a pain (no bit banging needed!). In the end, I hooked up my 8 analog channels to five knobs, one slider, and the two axes of a joystick that I'd use to navigate Beatboard's menu. Here's the chip, wired up to its children:
The MCP3008 ADC
The Screens
Here's a surprisingly easy one! The two screens I chose to use communicate over the lovely I²C protocol (not to be confused with I²S, the sound protocol), which uses just two wires for data. The best part of this protocol is that a practically unlimited number of devices can be connected in parallel to the same two data lines—each device is given a unique 7-bit address that the host device can target messages to. This makes wiring a breeze, and saves on GPIO pin usage! As an added bonus, both screens are OLEDs, giving them low power consumption and high contrast, making them easily readable even at low resolutions. Here are the two screens with some text on them. The left screen has the main instrument selection menu, while the right screen has some general system info. I didn't have time to implement the right screen in the program, but I intend for it to display the current key and tempo.
Monochrome OLED screens, with 128x64px (left) and 128x32px (right) resolutions
The Button Matrix
I'd designed a board that required an array of 21 (3×7) buttons, but I certainly didn't have 21 GPIO pins left to read them. A common way to circumvent this issue is to wire the buttons in a matrix, connecting one side of the switches together in columns and the other side together in rows. That way, the Raspberry Pi can read all the buttons with just 3 rows + 7 columns = 10 pins! This setup requires some intricate shenanigans to get a proper readout, however: on a given read cycle, the column pins are all set to inputs, while each row is sequentially driven and read — bah! Too complicated to implement myself, so here's a python library that does it for us.
An early test prototype of the button matrix. Some button matrix designs require diodes for a proper readout, but the Pi can trigger its IO pins fast enough that they aren't required.
A closer look at the button matrix in the current prototype
Putting it All Together
For my first prototype, I wired everything on a large breadboard just to make sure all the components would work together nicely. I'm currently powering it all from the 5V rail of my benchtop DC power supply, which is a convenient—albeit temporary—solution. While I definitely plan to solder everything together and put it in a fancy box with its own battery later down the road, this revision of the hardware was all I had time for in the scope of a couple weeks' work. With a few GPIO pins left over, I hooked up the switch that closes when you click the joystick down as a "menu select" button, and called it a day. I even still have enough pins for an extra row in the button matrix, or an entire second MCP3008 ADC! (Multiple SPI devices can share 3 of the data pins, but each requires its own "chip select" pin to specify which one the host is talking to.)
The result is the first prototype of Beatboard’s hardware, pictured at the top of this blog post. Here’s my current working wiring diagram:
My working wiring diagram, drawn over a printout of the Pi's GPIO header
The Software
With the hardware complete, I was left with an entire second world of trouble: actually programming the system to do something. I’d never done any hardware interfacing with Raspberry Pi, so I started with the basics:
I was running the Pi entirely headless, so after configuring it to automatically connect to my home network, I'd ssh into it from my laptop for control. I found this to be much smoother than dealing with USB cables and the serial debug window required to work with Arduino. As for the code itself, this project was written in Python (for hardware control) and SuperCollider 3 (for audio synthesis), along with several bash scripts for startup/automation. I kept everything in a GitHub repo, and would pull code to the Pi to execute after pushing changes from my laptop. This was a pretty convenient and easy-to-implement system, the only downside of which was the abundance of single-line commits every time I needed to fix a syntax error. (Most of the libraries I was using would only run on the Pi, so I couldn't do any local tests beforehand.)
A screenshot of my typical code upload workflow: SSH terminal on the left, GitKraken on the right
Environment Setup on the Pi
Setting up the dev environment on the Pi was mostly straightforward. Aside from some smaller details, the main challenge was getting the system audio to work with my I²S DAC. Fortunately, Adafruit, the company I bought the board from, included a helpful install script that only required some minor tweaking to get everything working smoothly. Oh yeah, aside from installing SuperCollider. That deserves its own section.
Installing SuperCollider
I'd gotten this far into the project without bothering to check if SuperCollider would actually run on my Pi Zero. Genius. I was overjoyed to realize that SuperCollider installation on Linux is notoriously difficult (according to this blog post by someone very special). In most cases, you have to build from source, which according to documentation on the SC3 GitHub takes multiple hours on the Pi Zero. Lovely. Luckily, the Raspberry Pi community comes to the rescue! User redFrick has published compiled standalone downloads for SC3 along with step-by-step installation instructions for every single Pi model. redFrick: I don't know who you are, but I love you. The installation process came with only a few typical snags—my jack2 install conflicting with an apt package that came preinstalled on the system, and some audio bus conflicts with some of the superfluous programs installed by Adafruit's I²S DAC helper script, to name a few. In the end, I had sclang and scsynth running on my Pi Zero, with audio properly routing through I²S! Never in my life have I been happier to hear a 440 Hz sine wave.
Hardware Control with Python
All the button IO, SPI/I²C serial communication, screen drawing, and menu logic were implemented in Python. Most of the code was centered around Adafruit’s CircuitPython environment, which provides a set of hardware interfacing libraries that are a lot more convenient than the default RPi.GPIO library. All the code can be viewed in the project GitHub repository. (/python/beatboard.py would be a good place to start, if you’d like to take a look).
Python development environment. I used VSCode for everything except SuperCollider.
Audio Synthesis with SuperCollider
This project led me to a lot of the parts of SC3 intended for live coding, which were super cool to dive into. I have an incredible appreciation for this language—I thought it was rather tedious at first, but I'm starting to really understand how powerful it is as a music creation tool. After a lot of internet searching, I became very close friends with Pdef, Pdefn, and PatternProxy—all classes I hadn't touched for any of the psets. On the macro scale, my SuperCollider script defines a lot of SynthDefs and a handful of different patterns, then plays them according to several control parameters that the script exposes to the network (more on that last part in just a moment). The SuperCollider portion of this project is actually still a bit rough around the edges, and I'm looking forward to optimizing and expanding a lot of the code. Several ways I'm looking to improve it are: 1) saving larger libraries of SynthDefs to files on the Pi, 2) unifying the pattern implementation (PatternProxy vs Pdefn for live control? Pdef vs global variables? I'm still figuring out both what's available and what I like best), and 3) organizing a more thoughtful control API for other programs to interact with.
A screenshot of some SuperCollider code
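To make that pattern setup a bit more concrete, here's a stripped-down sketch of the idea (this isn't the actual Beatboard code; the SynthDef, names, and note values are just illustrative):

(
// a simple bass SynthDef for the pattern to play
SynthDef(\bbBass, { |out = 0, freq = 60, amp = 0.3, sustain = 0.4|
    var env = EnvGen.kr(Env.perc(0.01, sustain), doneAction: 2);
    var sig = LPF.ar(Saw.ar(freq, amp), freq * 4) * env;
    Out.ar(out, sig ! 2);
}).add;

// the Pdefn holds the data I want to swap out while the pattern keeps running
Pdefn(\bassNotes, Pseq([36, 36, 43, 43], inf));

Pdef(\bassLine,
    Pbind(
        \instrument, \bbBass,
        \midinote, Pdefn(\bassNotes),
        \dur, 0.5
    )
).play;
)

// later (e.g. from an OSC handler), replace the notes without stopping playback:
Pdefn(\bassNotes, Pseq([38, 38, 45, 45], inf));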
Bridging the gap between Python and SC3: Open Sound Control
One important task remained: I had to figure out how to get my Python and SuperCollider scripts to actually talk to each other. Luckily (and honestly, to my absolute surprise) there's a standard communication protocol that SuperCollider uses called Open Sound Control (OSC). Open Sound Control messages can be sent over a simple UDP connection, and consist of a "target path"—specifying what should be manipulated—followed by a list of arguments—specifying how the target should be manipulated. SuperCollider actually uses OSC to talk between sclang and scsynth. For my purposes, however, I just needed to communicate with sclang, which can be achieved with an OSCdef, which lets you bind a handler function to a specific OSC target path. In the screenshot above, you can see my OSCdef for responding to messages that want to alter the pattern of Beatboard's bass instrument. One annoying limitation I encountered is that SuperCollider doesn't support sending arrays via OSC. As a hotfix, I decided to just send the root of each chord instead of a list of notes, but I'd like to come up with a better solution in the future. Meanwhile, on the Python side of things, sending OSC messages was super easy with the pyliblo library.
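Here's the shape of that OSCdef as a hedged sketch (the path and argument layout are made up for illustration, not the real Beatboard API): Python sends "/beatboard/bass <rootMidiNote>", and the handler pushes the new root into the Pdefn that the bass pattern reads from.

(
OSCdef(\bassRootHandler, { |msg, time, addr, recvPort|
    var root = msg[1];                                // msg[0] is the OSC path itself
    Pdefn(\bassNotes, Pseq([root, root + 7], inf));   // rebuild the bass notes from the root
    ("new bass root: " ++ root).postln;
}, '/beatboard/bass');
)

On the Python side, pyliblo just needs to aim its UDP messages at sclang's port (57120 by default).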
As an extra note, I looked through several libraries that essentially try to replace sclang entirely, though I found that for my purposes they were either too tedious, outdated, or simply unnecessary for this project. This project deserves special mention, though. It seems really cool.
Trouble In Paradise: scsynth Processing Issues!
As I was nearing my goals for this revision of the project, I hit a serious design flaw that I hadn't anticipated. Long story short, the Pi Zero isn't powerful enough to run a heavy load of synths and patterns on the SuperCollider server. It can run a simple drum pattern fine, but as soon as I'd throw in a bassline with several PatternProxies, the server would fall behind and the entire system would crash—even causing lag in my ssh terminal. Here's a video of me demoing my SC3 script by manually sending it OSC messages from the Python console. The drums all work fine, but as soon as I layer on the bass, the whole thing lags and crashes. In the end, I came up with a simple, yet somewhat disappointing, solution to this problem. In the current prototype, I run sclang and scsynth on my laptop, and let the Pi Zero communicate with them over OSC. In the code, this was a super easy fix, as I just had to target my OSC messages at my laptop's IP instead of localhost. However, this definitely broke my goal of making a standalone device. In the future, I'm interested to see whether the more powerful Pis could do a better job running things, and whether optimizing my SC3 code could get things to run on the Pi Zero (PatternProxy seemed like the main culprit, while Pdefn was fine).
With easily over 60 hours sunk into this project, I can definitely say that it taught me a ton. I'm a lot more comfortable with Raspberry Pis, I brushed up on my Python, and I got to dive even deeper into SuperCollider. Overall, making the first prototype of Beatboard has been loads of fun, and I can't wait to keep going with it. In particular, I'm looking forward to adding more instruments and patterns, adding more complex parameters for the knobs (did I hear user-mappable?), and implementing seventh chords and secondary dominants on the extra button rows. For my next prototype, I'd also like to put everything in a nice box, and add some sort of lead instrument control. (Those musical fruits do sound like a lot of fun…)
For the last project of the year, I wanted to make a project that would be the culmination of all we have learned in the first-year seminar. I knew right away that I wanted to make a Waveform project because those are the projects I enjoyed making the most. I first needed to find a way to put together everything we've learned in order to make this project work. Recording and mixing would come naturally with the project, but incorporating SuperCollider less so.
I knew right away I wanted to make a very upbeat and fast-moving music project; this needed to be a celebration of a great first-year seminar, not a sad goodbye. I needed this project to have more energy than the last two and provide some "good vibes", as one would say. Now I just needed to figure out how to put an abstract idea such as "good vibes" into notes, frequencies, sounds, etc.
Loops and Samples
I first needed to do a little bit of research on Spotify. I clicked on my happy songs playlist and narrowed the "good vibes" down into some tangible variables. I noticed that most songs have at least a simple catchy melody and some hard-hitting drums (especially some good snares and hard-hitting claps). I probably missed a lot of other key criteria, but those two things popped out the most to me. I started by looking through different synth noises in an attempt to find something I could use. I originally wanted to use a synth sound from a loop I found on Looperman.com, but ultimately decided that I would rather make my own. Most of the Subtractive synths didn't feel like the right fit for the project I had in mind, so I ended up making my own samples with a plugin synth from my music making software (another DAW). I made 4 different samples that used the same sets of chords, just making some notes more staccato to add some punch.
As usual, I decided to make my own drum beats and percussive sounds because I am not a big fan of drum samples. I like to make the drum loops and samples myself (whether that's using MIDI or a beat sequencer), mostly because I give importance to the beat over the melody. In this project I tried to keep the idea of using some powerful snares and claps, so I ended up using fewer closed hi-hats. I ended up with 3 different snares that I switched around in the project depending on whether the chorus or verses were playing. As per usual, the bulk of my drum work was in the kicks. I had a total of 4 different samples, each dominant in a different frequency range, so that putting them together in the chorus wouldn't overwhelm the low end (30 to 100 Hz range). Although I didn't have too many closed hi-hat samples, I did have a few open hi-hat samples (with only a few hits) as well as some rides that I may have overused at certain points, but I thought (and still think) it actually worked quite well.
The samples I ended up using for my project were of frogs croaking, as well as a ribbit sound, from the website orangefreesounds.com:
To have some fun, I decided to throw in a curveball at the beginning of the project. I started the project very slow with what sounded like some sort of soundscape and then interrupted it quickly to start the upbeat music I had prepared. In addition, I used a few sweeping samples I found on Looperman.com to lead into or out of the chorus in the beginning.
Recordings
There aren't that many recordings in my project, but there are enough that they cannot be ignored. I have no faith in my singing ability, so I did not record any long audio clips, but I had just listened to Kid Cudi's new album and I was inspired by the ad libs in his song Sad People. Although it is quite ironic that I am taking inspiration from a sad song to make a happy one, I really wanted to have some fun making some ad libs in my project. I sat in front of my iPhone microphone saying random things and making noises, only to be displeased with almost everything. After a few more attempts I was done and had a few usable audio clips.
The first recording is that of me saying “nah…nah nah nah nah, we gotta change this up” which I used as the transition from my intro into the main part of my project:
I originally wanted to have me say "yeah" and then have the switch after that, but I thought it would be more entertaining to have me talk a little more in the beginning and use the "yeah" later. I think it made for a better transition overall, because I was able to have a moment where all the music stopped while I was talking. The second recording is of me saying "yeah", which I pitched down by 2 semitones to create a deeper voice.
Because I was recording with an iPhone microphone, there was a lot of background noise getting in the way. After doing some research online I found out that I could use Audacity (which I luckily already had downloaded) to reduce the unwanted noise. This helped a lot in making my recordings smoother and sound a lot better.
MIDI
I used Subtractive to create some sound effects in my project, and created two different synth sounds in the intro to give a strange effect. I again used Subtractive to make a bassline synth during the chorus, even though I don't really like using Subtractive to make the melodies for my project; I always seem to use it more for interesting sound effects. One of the synths used was called Mordor 2, and I thought it fit perfectly in the strange intro sequence with all the croaking frogs as well as the weird bowed strings. The synth used for the bassline in the chorus was simply a soft background synth that complemented the chords of the melody. Originally I thought of adding an extra layer of synth in the higher frequencies (2k to 3k Hz range), but I decided it was too much and ruined the overall feel of the song.
SuperCollider
SuperCollider was the spark for the idea of giving the song a strange intro. While looking through different sounds, I came across the class BrownNoise.ar (a subclass of WhiteNoise.ar), which generates noise whose spectrum falls off in power by 6 dB per octave. Running that through OnePole.ar (a simple low pass filter) and then a resonant high pass filter, I was able to obtain a sort of bubbling sound.
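Something along these lines (a simplified reconstruction, not my exact code; the SinOsc sweeping the filter cutoff is just one way to get the wobble) produces that kind of bubble:

(
{
    var noise  = BrownNoise.ar(0.4);
    var smooth = OnePole.ar(noise, 0.95);          // OnePole acting as a simple low pass
    var wobble = SinOsc.kr(0.5).range(200, 1200);  // slow cutoff sweep
    RHPF.ar(smooth, wobble, 0.05) ! 2;             // resonant high pass filter
}.play;
)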
Plugins and Mixing
I didn’t go too crazy with the plugins and kept them quite simple. This is about the most I put on a single track:
I put a few extra plugins on my Subtractive synths; however, I mostly kept it simple for the rest. I stuck to simple plugins such as EQ and Reverb. I added a little Reverb to certain tracks but kept it to a minimum. I tried experimenting with reverb for this project, but I felt like it altered the choppy feel of the main synth at times and didn't line up well with my recordings (which sounded better with low reverb). Because of this I mostly stuck to using a very low reverb:
As always, I used the EQ to fix some issues with the frequencies of certain samples and sounds. For example, I needed to cut some of the upper end of one of the kicks (80 Hz to 100 Hz) so it wouldn't overlap too much with another. I had to make some of the same changes for the snares as well. I did use the low pass and high pass filter plugins at certain points, but the EQ often helped enough. The one mistake I did make in the beginning was not realizing that one of the Reverb plugins I had in my project actually came from another DAW on my computer. Luckily I caught it and replaced it quickly.
Automation
There was more automation in my project this time than the last. I used panning for the intro as well as a little bit for the chorus, but it isn’t very strong. There is also some slight panning to the right when there is the “yeah” adlib, but it is the only recording I added any panning to. I did not use my signature ping pong effect panning. I mostly used automation to control the volume level. For example, the very sharp cutoff of the intro, or how the music fades out in the very end.
Errors and technical difficulties
This time I had far fewer technical difficulties than last time; I only had one big one. My Waveform project crashed while I was working on it, and when I reopened it half of the work I had done was gone: some tracks were deleted, some automation, and a lot of plugins. I had saved my work prior to the crash (I always get scared of things like this, so I save my project after even the slightest change), so I think the problem had to do with my computer. My computer is old and has almost run out of storage, so I believe it was a problem on my end and didn't have much to do with Waveform itself. I was able to put everything back together, but it took a while and it was a pain. In the end not much else went wrong, so it wasn't terrible. My Mac kept sending me this:
I decided to make my final project in Tracktion Waveform 11. My piece for my final project is titled “Time Floats By”!
I wanted my final project to be somewhat of a companion piece to my Homework 3 submission. While my Homework 3 submission, "Revolve Around You", revolved around the theme of space, I wanted this piece to focus on time instead.
I wanted this piece to be indicative and evocative of the passing of time. So, I made my piece in 120 BPM (a multiple of 60 BPM)—with 2 beats being exactly one second—so I could use the ticking of a clock and the passing of seconds as the driving beat. I used a sound sample of the ticking and swinging of the pendulum of a large wall clock (like a grandfather clock) as a snare drum sound (https://freesound.org/people/straget/sounds/405423/), and used a sound sample of the ticking of a kitchen timer as a hi-hat (https://freesound.org/people/maphill/sounds/204103/).
My piece’s Waveform workspace layout—(only 13 tracks this time!)
Another major theme I tried to convey with this piece is the subtle oddities in the way time flows (and how we perceive it). I had fun slowing down time with the somewhat atonal clock-slowing sample "Time Slow Down" (https://freesound.org/people/PatrickLieberkind/sounds/392408/), playing with the listener's expectations while still keeping that ticking driving force. That sample still had some pitches to it, so I used Pitch shifter automation (from –0 to –8.3 semitones) so the last "pitch" played lands on the tonic of the minor key of the piece. I'm also a big fan of odd time signatures—especially when they're used in ways that sound natural even to listeners who aren't well-versed in them. So, while the majority of the piece is in 4/4, I wanted to have a bridge section in 7/8. However, one of the skills I learned is that when working in odd time signatures it's important not to disorient the listener too much. Thus, so the transition to the bridge (both the different melody and the 7/8 time signature) isn't abrupt and jarring to the listener, I put the first two measures of the bridge melody into 4/4 before going into 7/8.
While the piece is in a minor key (G minor), I used the G major chord as an unexpected first chord of the piece and as a major part (pun intended) of the chord progression. I used a Subtractive synth to create a reverse-reverb-sounding pickup going into the first appearance of the synth melody.
One of the things I wanted to work on with this piece was streamlining my process. Something I struggled with on my previous two Waveform assignments that I wanted to fix here was using more tracks than I needed. On my Homework 3 submission—which I am proud of—I had a wild 82 tracks (actually, the file had 117 tracks, but 35 of them weren't in use and only contained recordings I didn't end up using), making for a 6-and-a-half minute piece overall. I did slightly better on Homework 4 (which I'm even more proud of: a MIDI track inspired by the Black Lives Matter movement and racial equality protests of 2020). Here, I wanted to stay more streamlined, and I committed myself to not going overboard this time! I took this as an opportunity to learn to cut stuff instead of drowning an already good piece in filters and additional tracks of diminishing returns. I'm happy that I was able to overcome this, make a piece with only 13 tracks, show what I've learned, and still make a piece of music I'm really proud of!
Thank you so much for an incredible and unique first semester; it went so quickly (I guess you could say time has really flown by)!
Here’s my final piece, “Time Floats By”:
List of Tracks and their Different Instruments, Filters, and Plugins, etc.:
“drums”
Drum Sampler: a modified 808
+ reverb
+ lowpass filter
“melody 1”
4OSC Basic Lead 2
panned slightly Left
+ reverb
+ lowpass filter
“melody 2”
a modified 4OSC Basic Lead 3
panned slightly Right (harmonizing the above in a different part of the soundspace)
+ reverb
+ lowpass filter
“Subtractive pickup”
a modified Subtractive MINI BASS
+ lowpass filter automation (sweeping from 231 Hz to 22,000 Hz upper limit)
“padsynth 1”
a modified 4OSC Basic Poly
+ reverb
+ lowpass filter
“bassynth 1”
4OSC Pick Bass WMF, modified to be able to play multiple notes at once (“Poly” setting)
+ Pitch shifter automation, from –0 to –8.3 semitones (so the last “pitch” this semi-atonal sample lands on is the tonic of the minor key of the piece)
“BUS: clock”
aux bus of “grandfather clock” and “gfather block SLOW” tracks
aux bus: (aux return)
+ reverb
CREATION and Process LOG: A Mostly-complete Log of my general process:
Version 1:
exactly 120 BPM, so that each two beats are exactly a second (the passing of time)
in 4/4, in G minor. Might have a 7/8 bridge later on down the line
bass, melody1, and melody2 for now
Version 2:
made the bassynth’s instrument a modified “4OSC Pick Bass WMF”, fiddled with it
tried to change the resonance on melody1 and melody2 to make them sound different
modified melody1 and melody2’s instrument “4OSC Basic Lead 2” to take multiple notes at once (“Poly” setting)
harmonies leading up to m.41
idea for a section that feels like it’s moving in half-time at m.41 or m.49
Version 3:
creating a section that feels like it’s moving in half-time at m.41 or m.49
creating a 7/8 bridge
Version 4:
actually, changing that moment to technically being 7/4 so Waveform doesn’t double the speed of things copied
made the first quarter of the 7/8 part into regular 4/4 measures so the listener can adjust to the new section
Version 5:
some drums on the 7/8 part to help the listener keep the beat
there was a bit of ringing and resonance at around 1400 Hz, so I used AUGraphicEQ to turn down the 1.2k Hz bar (but not the 1.6k Hz bar, as doing that also changed the shape of the sound)
Version 7:
Added a half-speed version of the grandfather clock (with Elastique (Pro))
Added lowpass frequency automation
Version 8:
Working on an ending
Version 9:
Working on an ending
And ending the whole thing with two measures of the clock ticking at the end (matching up with the beginning)
Version 10:
Removing blank tracks
Added a reverse-reverb-sounding Subtractive synth as a pickup to the first appearance of the main synth melody
I’ve always been a gamer, starting with games on the Wii and moving on to mobile games and games like Minecraft. Over quarantine, I became obsessed with the newly released tactical shooter game Valorant, and so decided to make a sound effect pack for a first-person shooter video game.
Pistol:
The first sound I decided to make was the pistol, which is iconic since every person gets the pistol in the game. To create a pistol sound, I decided to model it off of a 9mm pistol: https://www.youtube.com/watch?v=HwxeLrWymrI. This sound was actually a hard one to make since it was the first one I attempted. I wanted a "pow" sort of sound but didn't know how to get that nice pop of the shot. So, I stuck a sample of a pistol shot into Audacity and analyzed the frequencies and the different parts.
I noticed that there were three parts that made up the gunshot sound, which seemed to sit in different frequency ranges. So, I used a filter on white noise for each of the parts and combined them together. To get the parts to play in sequence, I used the fork operation. The first attempt didn't sound great, so I made a new version that used reverb and added a kick to the beginning of the shot to provide a sense of power.
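Here's a rough sketch of that layering (the filter frequencies and envelope times are illustrative, not the exact values from my patch):

(
~pistol = {
    {
        // a low thump at the front for the "punch" of the shot
        { SinOsc.ar(XLine.kr(120, 40, 0.1), 0, 0.8)
            * EnvGen.kr(Env.perc(0.001, 0.15), doneAction: 2) ! 2 }.play;
        // three band-limited noise bursts, low to high
        [400, 1500, 5000].do { |freq|
            { BPF.ar(WhiteNoise.ar(1), freq, 0.3)
                * EnvGen.kr(Env.perc(0.001, 0.08), doneAction: 2) ! 2 }.play;
            0.01.wait;   // tiny stagger between the three parts
        };
    }.fork;
};
~pistol.value;
)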
Lasergun:
The lasergun sound was not hard to do. I knew that I wanted to make a “Peeew Peeew Peeew” sound, and that was clearly modulation with a decreasing frequency. So, I did frequency modulation by having a line envelope going from 1400 to 100 as the frequency. For the generator, I chose to use a Saw wave because that sounded the most electronic. In the end, my lasergun sound reminded me of the old Galaga game laser.
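A minimal sketch of that idea (the numbers are illustrative): a Saw whose frequency is swept by a Line from 1400 down to 100, with a matching amplitude envelope.

(
~laser = {
    {
        var freq = Line.kr(1400, 100, 0.3);
        var env  = EnvGen.kr(Env.perc(0.005, 0.3), doneAction: 2);
        Saw.ar(freq, 0.3) * env ! 2;
    }.play;
};
// three quick "peeews"
{ 3.do { ~laser.value; 0.35.wait } }.fork;
)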
Footsteps:
Footsteps being an important part of any 3D game where people move in the world, I decided to create this sound next. This was something surprisingly hard to do. I initially wanted to create the stereotypical footsteps that you hear when you think of footsteps in a haunted house (here: https://youtu.be/9g7uukgq0Fc?t=40). However, that is something insanely hard to do since the noise isn’t regular. While trying to create that, I used brown noise with a lowpass filter along with a percussive envelope. However, that didn’t at all sound like what I was going for. Instead, it sounded like stepping in snow, so I fine-tuned it a bit and made snow footstep sounds first.
Snow:
*Disclaimer: I've lived in Southern California and don't really step in snow, so if this sounds off, oops.
Next, since there are often desert environments in games, I worked on creating footsteps in sand based on this video (https://www.youtube.com/watch?v=Lg2zxeSF7WM). This is when I came to a key realization. Each footstep isn’t a single sound but has two distinct parts. The first sound is made when your heel hits the ground and makes a thump, and the second closely follows and is when the front part of your foot flaps and hits the ground (you can verify this, try walking). Using this new insight, I shaped white noise to first start off with a lower frequency percussive sound, and after that put a larger frequency range with a customized envelope to better imitate the fluidity of sand.
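Here's a rough sketch of that two-part footstep (timings and cutoffs are illustrative): a short low-passed thump for the heel, followed about 80 ms later by a broader, softer burst for the front of the foot settling into the sand.

(
~sandStep = {
    {
        // heel strike: short, low-passed thump
        { LPF.ar(WhiteNoise.ar(0.6), 300)
            * EnvGen.kr(Env.perc(0.002, 0.07), doneAction: 2) ! 2 }.play;
        0.08.wait;
        // foot flap: wider frequency range, softer custom envelope
        { LPF.ar(WhiteNoise.ar(0.3), 2500)
            * EnvGen.kr(Env([0, 1, 0.4, 0], [0.01, 0.1, 0.15]), doneAction: 2) ! 2 }.play;
    }.fork;
};
~sandStep.value;
)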
Finally, I decided to try and create the clean wooden floor sound that I was going for in the beginning. To do that, I took a similar approach as how I created the pistol sound, in which I listened to each sound separately and compared it to my own. My closest rendition of the footsteps sounded more like someone stomping on a wooden plank.
Jump:
Jumping is an important part of FPS games, for getting over different terrain and dodging projectiles. The sound that I went for is different from the regular 8-bit jump that we had an example for in class. Instead, I went for a more realistic jump. There were two parts to this: jumping off of the ground, and landing back on the ground. While designing the lifting-off portion, I knew that the sound had to rapidly decrease in volume and also be quiet, since there isn't much movement against the ground when jumping (it's more about the leg coiling up, which doesn't make noise). I also applied a low-pass filter to the noise to bring down its maximum frequency, since less of the foot/shoe is in contact with the ground to make high-frequency noise.
The sound of landing on the ground was relatively straightforward: it's just a thump, which I made by shaping low-frequency sound with a percussive envelope.
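A simplified sketch of the jump pair (values illustrative): a quiet lift-off where both the level and a low-pass cutoff fall away quickly, then a low percussive thump for the landing.

(
{
    // lift-off
    {
        var cutoff = Line.kr(1200, 150, 0.15);
        LPF.ar(WhiteNoise.ar(0.2), cutoff)
            * EnvGen.kr(Env.perc(0.005, 0.15), doneAction: 2) ! 2;
    }.play;
    0.5.wait;   // hang time
    // landing thump
    { SinOsc.ar(70, 0, 0.6)
        * EnvGen.kr(Env.perc(0.002, 0.2), doneAction: 2) ! 2 }.play;
}.fork;
)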
Shotgun:
The shotgun is a classic weapon in any game with guns. I decided to make a shotgun with buckshot (since slugs aren't very common in shooter games), and my goal was to somehow simulate all the pellets being shot out and flying through the air. To do that, I decided to have a lot of pistol shots played slightly delayed from each other; my rationale was that staggering the sounds would give the punchy feeling of firing a shotgun. To play the sounds staggered, I took inspiration from the "river" sound effect that we went over in class. Instead of using .collect, though, I just used .do and played 6 pistol sound effects next to each other. I also moved the kick to the beginning, outside of the .do loop, since the shotgun should have one initial "thumpy" blast.
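The structure ended up looking something like this sketch (it assumes the ~pistol function from the pistol section; the stagger time is illustrative):

(
{
    // one heavier thump up front, outside the loop
    { SinOsc.ar(XLine.kr(100, 35, 0.15), 0, 0.9)
        * EnvGen.kr(Env.perc(0.001, 0.25), doneAction: 2) ! 2 }.play;
    // six slightly staggered "pellets"
    6.do {
        ~pistol.value;   // the pistol function sketched earlier
        0.02.wait;
    };
}.fork;
)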
I have to warn you, the shotgun sound is a bit shocking and frightening when you first hear it, so be prepared. When I played this sound to my sister with headphones, you could see when the sound played from the way she jolted, but I mean if you are shot at by a shotgun in-game it is quite frightening.
Fire:
Fire is a common element of games since it directly represents damage. The first fire sound that I made is the typical one that would be used in games, just the sound of moving air caused by the fire. I created it using BrownNoise as the base and amplitude-modulated it with LFNoise1.
This first sound could be used for a burning molotov cocktail, a flamethrower, or maybe even a wizard's fiery hands.
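The whole patch is basically two lines; here's a minimal sketch of it (levels and modulation rate are illustrative):

(
{
    var flicker = LFNoise1.kr(3).range(0.2, 0.8);   // slow random flutter of the level
    BrownNoise.ar(0.5) * flicker ! 2;               // runs until you stop it
}.play;
)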
The second fire sound that I created is the sound of a wood fire, where wooden logs are burning. Even though this may be less important in a shooter game, I find it to be the sound effect I am proudest of. For this sound, I took inspiration from the fire sound effect that we went over in class. I wasn't really satisfied with that one, since the popping and cracking sounds felt really digital and dead. After listening to a lot of fireplace sounds on YouTube, I classified the sounds into 4 parts: a drone, some sizzling, a crackle, and a popping sound. The drone is the wind sound, the sizzling is boiling moisture, the popping is like wood and twigs snapping, while the crackle comes from a larger air bubble that bursts and makes a loud crack. The most difficult part of this sound was figuring out how to make the popping and the crackling random. Taking inspiration from the fire sound effect we went over in class, I researched the Dust and EnvGen classes, which allow random triggers to be generated and used to activate a percussive envelope. I set the rate of crackles to be about once every 5 seconds, and the rate of pops to be about 5 times every second, to make a really active fire.
*looking back, the pops sound a bit too percussive…
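Here's a sketch of the four layers (the Dust rates follow the description above; everything else is illustrative):

(
{
    var drone   = LPF.ar(BrownNoise.ar(0.3), 400);                 // the wind-like bed
    var sizzle  = HPF.ar(WhiteNoise.ar(0.05), 4000) * LFNoise1.kr(8).range(0.3, 1);
    var pops    = WhiteNoise.ar(0.5)
        * EnvGen.ar(Env.perc(0.001, 0.02), Dust.kr(5));            // ~5 pops per second
    var crackle = BPF.ar(WhiteNoise.ar(1), 2500, 0.2)
        * EnvGen.ar(Env.perc(0.001, 0.08), Dust.kr(0.2));          // ~1 crackle per 5 seconds
    (drone + sizzle + pops + crackle) ! 2;
}.play;
)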
Dash:
Dashing is one of the most common abilities in games, and my goal was to create a dashing sound highlighting the air moving, with a pitch shift to convey speed (because of the Doppler effect). To do this, I layered three sounds on top of each other: an exclusively high-pitched wind noise, an exclusively low-pitched wind noise, and a sine wave frequency-modulated by white noise scaled by a decreasing line envelope, to get the "woosh" effect.
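Here's a sketch of those three layers (frequencies and depths are illustrative):

(
{
    var env     = EnvGen.kr(Env.perc(0.05, 0.6), doneAction: 2);
    var hiWind  = HPF.ar(WhiteNoise.ar(0.15), 3000);               // high-pitched wind layer
    var loWind  = LPF.ar(WhiteNoise.ar(0.2), 300);                 // low-pitched wind layer
    var fmDepth = Line.kr(800, 0, 0.6);                            // modulation depth fades out
    var woosh   = SinOsc.ar(600 + (WhiteNoise.ar(1) * fmDepth), 0, 0.2);
    (hiWind + loWind + woosh) * env ! 2;
}.play;
)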
Sniper:
What's a shooter game without a sniper rifle? I decided to model my sniper rifle sound on this 50-caliber sniper rifle: https://www.youtube.com/watch?v=BB9Oqf2sBZ4. There are a few parts to this sound: the boom, the chamber moving, and the bullet casing hitting the ground (which I added for fun). For the boom, I took the explosion example we went over in class and adjusted it a bit. For the chamber moving (the clanging sound), I used a bandpass filter on white noise to home in on a metallic resonant frequency. Finally, for the casing coming out, I used an echo effect on a sine oscillator to imitate the casing hitting the ground and bouncing. I also added a kick sound at the beginning to highlight the boom of the weapon.
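A simplified sketch of those layers (the boom here is a stand-in for the adjusted in-class explosion patch, and all values are illustrative):

(
{
    // kick plus the boom
    { SinOsc.ar(XLine.kr(90, 30, 0.3), 0, 0.9)
        * EnvGen.kr(Env.perc(0.001, 0.4), doneAction: 2) ! 2 }.play;
    { LPF.ar(BrownNoise.ar(1), XLine.kr(2000, 100, 0.8))
        * EnvGen.kr(Env.perc(0.005, 0.8), doneAction: 2) ! 2 }.play;
    0.4.wait;
    // chamber clang: narrow bandpass on white noise at a "metallic" frequency
    { BPF.ar(WhiteNoise.ar(1), 3200, 0.02)
        * EnvGen.kr(Env.perc(0.001, 0.2), doneAction: 2) ! 2 }.play;
    0.3.wait;
    // casing: a short sine ping through an echo so it seems to bounce on the floor
    {
        var ping = SinOsc.ar(2500, 0, 0.2) * EnvGen.kr(Env.perc(0.001, 0.05));
        Line.kr(dur: 2, doneAction: 2);   // free this synth after the echo tail
        CombN.ar(ping, 0.2, 0.12, 1.5) ! 2;
    }.play;
}.fork;
)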
Rifle:
For this sound, I wanted to imitate the sound of an AK-47 (https://www.youtube.com/watch?v=BU-r0ElUru8). The sound mechanics for this gun are similar to the sniper, so I just got rid of the shell drop sound and made the explosion much shorter. Additionally, I adjusted the tuned frequencies of the sounds to get it to sound right, and reduced the kick sound since it isn't as strong as a sniper's. It was also only after making this sound that I figured out how to use a loop to play sounds automatically, which I then applied to the previous sounds as well.
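The automatic fire is just a loop around a shorter shot, roughly like this sketch (the shot itself stands in for my adjusted sniper patch, and the 0.1 s spacing is illustrative):

(
~rifleShot = {
    { LPF.ar(BrownNoise.ar(1), XLine.kr(1500, 200, 0.12))
        * EnvGen.kr(Env.perc(0.002, 0.12), doneAction: 2) ! 2 }.play;
};
// loop it for automatic fire (about 600 rounds per minute at 0.1 s spacing)
{ 8.do { ~rifleShot.value; 0.1.wait } }.fork;
)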
Reload:
Finally, if there are guns, there is bound to be reloading. There are a few parts to the reload, which I was able to figure out from this video (https://www.youtube.com/watch?v=oqFmQYNBwcw): clip out, mag in, then sliding the bolt to get another round in the chamber. When finishing this sound, it took a bit of tweaking the different sound combinations to get something convincing.
Reload:
Gunshots+reload:
Reflection:
After creating all these sounds for a shooter video game, my greatest takeaway is that synthesizing sound effects from scratch is quite tedious and frustrating at times. Some sounds, like a lasergun or wind, can definitely be synthesized with no problem; they are simple by nature. However, other sounds, like footsteps or an explosion, are so complicated and intricate that it may be better to just record them or create them through foley methods. Nonetheless, I am glad to have been able to synthesize the sounds that I did, and if I ever make a first-person shooter game, I'll already have most of the sounds I need.
This project began, like all my projects, with a random voice memo. In winter 2019, I was playing the tune of "Greensleeves" on the piano with some minor 13th chords. It's hard for me to articulate, but the recording (and the "Greensleeves" folk song in general, which was written anonymously around 1580) inhabits a liminal space between wintery, nighttime, enchanted vibes and a classic, elegant, timeless English feel. The sense of timelessness and spaciousness really gets me.
Voice memo:
I thought of doing “Greensleeves” because my bigger goal with this project was to create an arrangement of a *classic song* (classic in this case just meaning old and well-known) in Waveform. I gave myself the following guidelines:
includes me singing complex stacks of harmony like Jacob Collier, but obviously in my own style
pushes me to craft my own synths in Subtractive and 4OSC
pushes me to avoid existing samples and record found objects, manipulating unconventional sounds to create beats, synths, etc
Once I decided on the "Greensleeves" tune, I ranked my priorities as (1) first, (2) second, and (3) third most important. This was going to be 1) an a cappella arrangement, 2) undergirded by churchy instrumental sounds, and 3) supported by samples, with vocal recording obviously at the center.
Then I had to decide which lyrics to use. I wasn't going for medieval love song vibes, and the original words of "Greensleeves" are kinda weird. There were a surprising number of alternative lyrics. I really wanted to go for something universal/secular-ish, but "The bells, the bells, from the steeple above, / Telling us it's the season of love" just wasn't cutting it (sorry, Frank Sinatra). I settled on the original Christian adaptation "What Child Is This?" by William Chatterton Dix (1865). (I opted to change "Good Christians, fear, for sinners here / The silent Word is pleading" to the alternate "The end of fear for all who hear / The silent Word is speaking.")
For me the Christian Christmas story definitely inhabits that mysterious, shadowy, timeless feeling that I was talking about earlier (the wise men following the star, the shepherds out at night). I liked imagining reverbed, dark organ and choir sounds fitting into that space.
Above all, I love using harmony to color things. I sang the vocally-harmonized equivalent of my voice memo above. The Waveform part of this was easy: stack a bunch of audio tracks.
Unmixed voice sketch of piano improv:
But the musical part was hard. I was so focused on intonation, rhythm, and line that first of all I sacrificed the synchronization of consonants (even in the final recording they’re often out of sync) and second, I couldn’t improvise. I don’t have enough vocal skill yet for that kind of fluidity or flexibility. Improvisation and imprecision did become an important generative tool, however, in the routine that I fell into for each verse and chorus:
Brainstorm a list of adjectives to describe the feel of this verse and its dramatic role in the song.
Improvise a harmonization on the piano over and over until I like it and put the midi in Waveform
Quickly sing lines over the harmony that feel natural (e.g., change the chord inversions to create a smoother bassline, since the bottom notes on the piano version probably jumped all over the place)
Change my piano midi accordingly, and take a screenshot of the piano roll as a memory aid for the contour of each vocal part.
Sing vocal parts reasonably well. This was actually the easiest part. Confidence is key. I’m not an amazing singer by any means, but everything sounds at least decent with some brave intentionality. Also, I found that doubling or tripling the bass and melody compensated for minor issues with tuning, breath control, or volume.
Vox:
Piano roll (hard to read, but a good memory aid. Notice that the lines in this one are NOT singable yet lol.. like look at the top line):
Meanwhile, I explored samples and synth sounds that I could use. I tried to record my keychain to get a sprinkley/shimmery sound, but after twiddling for an hour with delay effects and feedback I just couldn’t get the sound I wanted, and used a preexisting sample. Other attempts were more successful: I tapped my stainless steel water bottle with my finger, producing a warm tenor-range tone reminiscent of bells.
I added light reverb, high-shelf-EQ’ed out the high frequencies to reduce the potential for dissonance, and automated pitch-shifting to tune this with the wordless “doodah” thing you heard me sing above (fun fact, my water bottle is exactly a quarter-tone sharp from 440Hz tuning). This effect was very spooky and breathy and therefore I made this section my refrain.
Those bell harmonics were SO COOL, dude (listen close: it’s a major ninth chord in root position, I kid you not).
Original sample:
Pitched, reverbed:
Then there was the organ. Ahhhhh, the organ. I tried to make it myself, I researched crazy additive synthesis stuff with sawtooth waves, and then I discovered that 4OSC had an organ patch that blew my attempts out of the water. Its Achilles’ heel, however, was so darn annoying. The organ patch emanated a high-frequency (2-10kHz) crackle that MOVED AROUND depending on what notes you were playing and got really bad when I played huge organ polychords (which, if you haven’t noticed, is kinda my musical M.O.). My best solution was to automate notch filters that attacked the crackle optimally for each chunk of music. I spent a lot of time deep inside that organ dragging automation lines controlling the cutoff to the elusive median frequency that best subdued the dEmoNic crAcKLe (band name list, Mark?). It also helped in certain sections to automate the Q and gain of various notches and curves, not just for the crackle but also for the overall brightness/darkness that I wanted.
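I did all of that with Waveform’s EQ automation, but the underlying move is just a narrow band-reject (notch) filter whose center frequency chases the crackle. A rough SuperCollider-style sketch of the same idea, with made-up numbers:

(
// sweep a narrow notch from 3 kHz to 6 kHz over 8 seconds, chasing an imaginary crackle
{
    var organ = SoundIn.ar(0);              // stand-in for the organ signal
    var notchFreq = Line.kr(3000, 6000, 8); // the "automation line" for the center frequency
    BRF.ar(organ, notchFreq, 0.1) ! 2;      // rq = 0.1 keeps the notch narrow
}.play;
)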
The original organ sound on the final chorus, without EQ (or light reverb/compression):
EQ automation:
I found two more very interesting sounds. Firstly, in Subtractive, there’s a patch called “Heaven Pipe Organ” that sounds very little like a pipe organ but clearly had the potential to add a spooky, ghostly, high-frequency vibe to my vocals. There was even this weird artifact created by an LFO attached to pitch modulation that caused random flurries of lower-frequency notes to beep and boop. I mostly got rid of it, but not entirely, because I was going for an almost wind-like chaos. The main things I did were:
Gentle compression
Reverb with bigger size (to stay back in the mix) but quicker decay (I didn’t want the harmonic equivalent of mud 🤷🏼♂️)
Quicker attack on the envelopes (I wanted it to whoosh in, but not like 20 seconds behind the vocal because that would be dumb)
EQ. I wanted those high freqs to really shine:
Honestly, that wasn’t a whole lot of change to produce a dramatically altered sound.
“Heaven Pipe Organ” before:
After:
Finally, bells. Christmas = bells. You might recall my post called “Boys ‘n’ Bells” (or something, idr) about Jonathan Harvey’s Mortuos Plango, Vivos Voco. I was seriously inspired by the way he has a million dope bell sounds surround you octophonically. I couldn’t quite do that, but I found some wonderful church bell samples from Konstanz, Germany, (credit: ChurchBellKonstanz.wav by edsward) that I hard-panned left and right during the epic final chorus and mainly tuned to the bassline:
I also found a nice mini bells sound (the 4OSC “Warm bells” patch), which had good bell harmonics preloaded and just needed slight EQ to reduce some nasty 10kHz buzz. I had it accompany the melody of the final chorus, but my favorite use was a sort of “musical thread” that bounces back between a countermelody and a repeated note (I swear to gosh that there’s a specific term for this technique that I learned about in Counterpoint class… oh well):
As I started recording the verses and choruses and filled in the synths/samples, I started focusing more on the overall flow of the piece. Much of that work was already done for me: each verse and chorus has a musical antecedent/consequent (call/response) structure, clearly reflected in the melody and harmony. They start out feeling like a question and end on a solid cadence. As the arranger, I played around a lot with this structure. Often I flipped the harmonic structure of verses so that they started out stable and then went on extended forays into instability/modulation in order to create a build into the next section. Here’s an example where the second chorus actually builds into the third verse, which surprises you because, well, you’ll see:
What you just heard is probably my favorite part. The second chorus embarks on an elaborate quest to escape the key of F# minor. A dramatic descending circle-of-fifths modulation ends on a classic II-V that we have been *culturally indoctrinated* to expect to resolve to the I (in this case E major). Going to the vi would be almost as unsurprising (vi, C# minor, is closely related to E major… this is a standard deceptive cadence). BUT NO. In a moment of serendipitous experimentation, I accidentally dragged an organ midi recording of Verse 3 right after the Chorus 2 modulation I described. The Verse 3 organ midi happened to be in B minor, and the transition sounded BEAUTIFUL. It has those haunting English vibes, and it holds together because the voice leading from B major (V in E) to B minor is smooth, even though functionally the move is nonstandard.
Here’s what the transition from Chorus 2 to Verse 3 would have sounded like if it was predictable (Bmaj to C#min):
Here, again, is the colorful shift I chose from Bmaj to Bmin:
I’m pretty convinced about the second one, but please let me know which one you think is more dramatic in the comments!
The last element of this project I wanna talk a bit about is mixing. Mixing was hard. Because this is centrally an a cappella project, I put the voice in the center (spatially, spectrally, volume-wise). My first main focus was getting the balance of the vocal stacks right. I spent hours arranging the tracks by color (red=melody, blue=bass, in-between = related/supporting parts), putting each separately-recorded chunk into folders within my vox submix, and getting the levels/panning right in Mixer:
For a while, I wanted to emphasize reverb in the vox, but inevitably that causes the vox to sit back in the mix. It needed to be front & center. So instead, I generally tried to compress the vocals for a close/direct sound and made a valiant effort to keep all the other sounds out of the way spectrally. But sometimes I liked the effect of blending a high-freq pad with the vox (see the Chorus 2 demo above) or giving the bass a kick (see the use of bells). Speaking of bass: I showed my mix to my friend Liam and he suggested boosting the bass to make the vox richer. I did, and it helped. Shoutout to Liam.
So yeah. I think that covers a lot of my decision making! Here is the final product. Enjoy, and HAPPY HOLIDAYS!!
As a movie buff, making movie SFX was the first lightbulb that went off in my head when I saw “Final Project” pop up in the Canvas assignments. My plan was to replicate the fight sounds from the iconic duel in Star Wars Episode V: The Empire Strikes Back (1980). Sound designer Ben Burtt originally used a projector motor hum combined with the buzz of an old television. Since my name is not Ben, nor is it Burtt, I do not have access to that retro equipment, and I did not think it would be a good use of my time to hunt down electronic devices to replicate those two sounds. Instead, I dove into SuperCollider! Creating the buzz was actually pretty easy. I just generated a low-frequency sawtooth wave:
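In sketch form (the frequency and level here are ballpark, not my exact values):

// buzz: a low-frequency sawtooth wave
{ LFSaw.ar(50, 0, 0.2) ! 2 }.play;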
The hum required a little more experimenting. In a video by the channel ShanksFX (where he attempts to replicate the Star Wars SFX as well), he uses a square wave generator as a substitute. I tried a square wave in SuperCollider by way of a pulse wave, but I did not like the way it sounded. Instead, after searching around I used Impulse.ar as the hum:
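Again in sketch form (90 Hz roughly matches the hum range I describe later; the level is a guess):

// hum: an impulse train instead of a square/pulse wave
{ Impulse.ar(90, 0, 0.3) ! 2 }.play;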
Operation: Analog
After SuperCollider my original plan paralleled that of Burtt and Shanks. They played the hum and buzz together through a speaker. They then waved a shotgun mic in front of the speaker to create the famous doppler swing (vwoom vwoom).
Polar pattern of my Snowball iCE (top) and a standard shotgun mic (bottom)
As you can see, a shotgun mic’s polar pattern is much narrower, which is an advantage for movies and TV since it can isolate specific sounds like dialogue. My Blue Snowball iCE, with its much wider pattern, was at a disadvantage: no matter how I moved it, the saber hum still sounded the same. The Snowball is just too good for lightsaber foley.
Though I did have a cheap shotgun mic (Rode VideoMicro), I discovered to my horror I had no adapter to plug it into my phone!!!! >:(
I considered recording the sound with the shotgun mic into a camera, but I figured it would be quite a stretch to extract the audio from the video and keep my samples in pristine condition. I resorted to recording with the TwistWave app on my iPhone 11 mic. Still horrible. You know the rumble that strikes your ears when you watch a video shot on a windy day? Wind pickup was all I could hear, even with a gentle swish and flick across my speaker. I tried different phones and different speakers to no avail. I had to shift gears. “Operation: Analog” was a failure.
Operation: Digital
Luckily, I did have a fallback plan: the doppler swing could be generated in SuperCollider. I already had the hum and buzz, so I just needed a way to control the synth parameters in real time.
I remembered having too much fun with the cursor control (MouseX, MouseY) UGens, blowing my laptop speakers one too many times.
MouseX controls any parameter when the cursor moves left and right. The first assigned value within the UGen is attained when the cursor is at the left, and the second assigned one is attained at the right. MouseY is the same but for moving the cursor up and down.
For the hum synth, MouseX controlled a multiplier from 1 to 1.1 on the frequency. The frequency was also modulated by the linearly varying LFNoise1 UGen at a rate of 1, drifting between 90 and 91 Hertz. To create the sudden changes in dB level for swings, I put MouseY on a multiplier for amplitude. The amplitude also had LFNoise1 on it, but at a higher rate of 19 to create a “flickering” effect. Finally, I added a MouseX control over a bandpass filter’s cutoff frequency to remove some of the “digitized nasal” sound.
In the original films I could hear subtle reverberations in the saber swings, so I added a MouseY to the wet mix level of a FreeVerb UGen.
For the buzz synth (sawtooth), I set a MouseX to control the cutoff frequency for a resonant lowpass filter, and a MouseY on an amplitude multiplier as well.
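Put together, the two synths looked roughly like this (the filter ranges and amplitudes below are ballpark guesses, not my exact values):

(
// Hum synth: frequency drifts 90-91 Hz (LFNoise1 at 1 Hz) and gets a 1x-1.1x doppler
// multiplier from MouseX; amplitude flickers (LFNoise1 at 19 Hz) and swells with MouseY.
SynthDef(\saberHum, {
    var freq, amp, sig;
    freq = LFNoise1.kr(1).range(90, 91) * MouseX.kr(1, 1.1);
    amp  = LFNoise1.kr(19).range(0.7, 1) * MouseY.kr(0.05, 0.4);
    sig  = Impulse.ar(freq) * amp;
    sig  = BPF.ar(sig, MouseX.kr(300, 3000), 1);   // bandpass cutoff also rides MouseX
    sig  = FreeVerb.ar(sig, MouseY.kr(0.1, 0.4));  // reverb wet mix rides MouseY
    Out.ar(0, sig ! 2);
}).add;

// Buzz synth: low sawtooth through a resonant lowpass (cutoff on MouseX, level on MouseY).
SynthDef(\saberBuzz, {
    var sig = LFSaw.ar(50) * MouseY.kr(0.02, 0.2);
    sig = RLPF.ar(sig, MouseX.kr(100, 1200), 0.3);
    Out.ar(0, sig ! 2);
}).add;
)

// Synth(\saberHum); Synth(\saberBuzz);  // run both and wave the mouse for swings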
Even though I didn’t get to immerse myself in the hands-on process of the doppler swing, I was not 100% deprived of physical foley. While brainstorming, I was curious about how to recreate contact or clash sounds during lightsaber duels. Ben Burtt rarely speaks of this achievement in interviews. ShanksFX touches metal to dry ice to achieve contact sizzles, but I did not have ownership over dry ice of any sort. I actually spent an entire day trying to replicate the clash sounds in SuperCollider, which I would control with WhiteNoise and the MouseButton UGen. Thinking about stuff that “sizzles”, I considered placing water in a pan of hot oil. Due to fear of burning my house down, I instead used water with an empty, stove-heated pan. At first, I felt that the sizzles were not percussive enough, so I dumped water in the pan in one fell swoop:
My methods were successful! I did keep some of the less percussive sounds for sustained contact between lightsabers. Other clash experiments included throwing an ice cube into the pan and wiping the pan’s hot bottom over water droplets. By dumb luck, I found two of my samples to be strikingly similar to the Jedi and Sith saber ignition sounds!
For retraction sounds I just reversed the ignition samples.
Next up, it was finally time to record my saber swings. I searched around and discovered that a general rule of thumb is to record most foley in mono, so I recorded my saber sample wav files into one channel. I planned to record my samples in time with the finale duel between Luke Skywalker and Darth Vader in Episode V: The Empire Strikes Back (1980). After getting in the groove of keeping in sync with the film’s swings, I recorded my first SuperCollider sample, combining the buzz and hum. Lord love a duck, it was no fun to listen to. Recording whilst watching the scene was a lot of fun; listening back on its own, I found that referencing the movie was not a good idea. Through the sample alone, I could hear myself constrained by the movie scene. The swinging just felt awfully repetitive, restrained, and boring, and I spaced out halfway through listening. Not a good sign. I had to shift gears again!
Operation: Digital 2.0
In a quick brainstorming session, I tried to stage a fight scene of my own. The two fighters would start far apart (panned left & right), come to the center, and duel. I thought of what else would happen during a lightsaber duel. They would move around during the fight, then somebody might lose their lightsaber. I typed up shorthand “stage directions” and I have the blocking of my scene here:
After staging the fight scene, I recorded a new sample whilst imagining my own staged scene. Listening to it I did get a little bored, but unleashing some more of my creative liberty gave me more optimism. After this successful (cough cough) recording, I recorded another hum/buzz sample to portray the opponent fighter. “Unfortunately, one week younger Michael Lee was not aware that he would have to shift gears yet again. When playing the two together, he heard the most dreadful thing in his ears.” Long story short, I could hear the phase cancellation! Both fighters shared the same humming frequencies. At some points my mouse wasn’t completely to the left of my screen, and that slight varying frequency made a big difference. The fighters’ hums kept cutting out. Separating the two with panning did not help. In addition, I recorded buzz for both fighters, and when played together they were awfully disruptive.
My solution? I created an option with Select.ar for a second doppler multiplier with a range below 1. The Jedi would solely retain the veteran hum, and now the Sith would have a lower frequency range hum combined with the buzz to represent the gnarly, darker side of The Force. Now it is easier to tell who the heck is who. After finally recording some more takes for Jedi and Sith, I could get right down to mixing and editing.
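In sketch form (the side argument and the exact ranges are placeholders, not my real values):

(
// Jedi/Sith variant of the hum: side = 0 keeps the original 1-1.1x doppler multiplier,
// side = 1 switches to a below-1 multiplier for the lower Sith hum.
SynthDef(\saberHum2, { |side = 0|
    var mult = Select.kr(side, [MouseX.kr(1, 1.1), MouseX.kr(0.8, 0.95)]);
    var freq = LFNoise1.kr(1).range(90, 91) * mult;
    var amp  = LFNoise1.kr(19).range(0.7, 1) * MouseY.kr(0.05, 0.4);
    Out.ar(0, (Impulse.ar(freq) * amp) ! 2);
}).add;
)
// Synth(\saberHum2, [\side, 1]);  // the Sith hum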
SynthDefs for saber sounds in the later stage of the production
Operation: “Fix It In Post”
Of course, Adobe Audition CC is a popular choice for mixing and editing. ’Tis quite versatile, as I may bestow upon it movie sound or recorded musical performance. However, Waveform 11 Pro had something special that Adobe Audition CC did not: Multi Sampler. Like John, I took advantage of Waveform 11’s MIDI resources to compile my clash SFX library. That way, I could add in my clash sounds with the press of keys and the click of the pen tool, as opposed to sluggishly dragging them into the DAW and cumbersomely editing.
Multi Sampler with my clash SFX
I felt like I had unlimited power, but there was an unforeseen cost. I did have lossless audio, but I also suffered the loss of space. With almost twenty 32-bit clash sounds imported in the multi sampler, along with all my 32-bit SuperCollider recordings, there was definitely not enough space to fit my samples into a galaxy far, far away. I did not fly into any glaring struggles with the gargantuan-sized files until the end. Exporting, compressing, and uploading were the big troubles.
Anyways, after writing in most of my clash sounds, I still couldn’t hear a fight scene. The swings of the fighters’ weapons did not line up despite my best efforts; everything felt scattered and random. I had no choice but to spam the slash key and scramble my spliced saber clips. I had really hoped to keep my samples fresh, in single takes. The process felt like performing excessive surgery on my samples, but that kind of cutting is part of what makes sound editing so essential. After some more scrambling, I finally got a decent-sounding lightsaber fight.
Like a song, the duel has different sections. I wanted my sounds to tell a story, but I could only do so much with the samples I had. My approach was better than recording alongside the movie scene, but over three-plus minutes, “vwoom” and “pssshhh” sounds get boring. My cat agreed when I played it to her. She fell asleep.
Therefore, I had to prioritize the movement of the fight through the space. Soundscape and spatialization are crucial for movies. Mix mode activated! I automated panning to give the fight some realistic motion: the fighters start on opposite sides, one hum panned left and the other right, and automation moves them to the center. To give the fight more unity, I fed the hums and clashes to another track (a substitute for an aux send/return) with more automated panning to suggest the pair moving around the battleground as they fight. For transitions between the fight sections, I recorded some footstep sounds, a punching sound for when the Jedi is punched by the Sith, and the flapping of a shirt to convey the Jedi subsequently flying through the air.
Jedi track with automated panning
Saber hums and clashes are fed into this track for more automated panning
Some filters were added on individual tracks. The buzz from the Sith saber overpowered the Jedi saber, so I added some EQ to boost the Jedi track. The clashes sounded too crisp, so I threw on a bandpass filter. Since a few clashes “clashed” with my ears while a few others were at reasonable amplitude, I placed a compressor to ease the sharp difference.
The final step was to create the distance (close/far) impression of the soundscape. I looked back at Huber and echoed this diagram.
From the Huber reading on mixing basics
Since I had to fiddle with volume, I exported my project with normalized audio and imported it into a completely new Waveform project for mastering purposes. My changes in depth of field come only at the end of the duel: I automated a volume reduction, an EQ cut in the high frequencies, an increase in reverb wet mix, and lastly some delay mix to strengthen the sense of distance.
I was but the learner. Now I am the master.
In Short?
As you can see from my process I had to change my plans several times. Similar to Josh’s project, my original plan was to use purely physical foley to replicate a scene from The Empire Strikes Back (1980). The final plan was to use a mix of SC3 and foley to construct a fight scene envisioned in my head. Petersen forewarned that sound for movies can require a lot of trial and error. A sound “scene” can be much harder to design for when there isn’t even a scene at all! Oddly enough, I found some of the miscellaneous foley to be the most difficult. I never realized how exact the sounds had to be for realism. When listening to the starting footsteps I imagined two people with extremely stiff legs taking baby steps towards each other instead of two trained fighters. I kept this in because I felt it was satisfactory, and it is also a good way of noting how unnecessarily difficult foley can be.
Calling the finished duel below “the tip of the iceberg” would almost be an overstatement; it shows only a sliver of the work behind it. It can be difficult to enjoy the final result by itself, so I am contextualizing my piece here as much as I can in this blog. Working on SFX put a lot on my plate, but it is the “fun” kind of work. Once the pandemic is over I will be having more fun with foley in the campus sound studios!
Use headphones (NOT AIRPODS) for best experience. Also close your eyes if you want to imagine the duel in your head.
Tribute to David Prowse, the OG Darth Vader.
Additional Sources
Xfo. (n.d.). Ben Burtt – Sound Designer of Star Wars. Retrieved December 13, 2020, from http://filmsound.org/starwars/burtt-interview.htm
I hope you enjoyed that mp3 of my track. For my final 035 project, I integrated SuperCollider with Waveform to produce some electronic music. I achieved this by generating random(ish) midi data in SuperCollider and feeding it into Waveform for one of my lead synths.
Screenshot of Final Session (it’s a combination of both of these^)
Sample Hunting
As per usual, I began by searching through my sample libraries for interesting samples. I was looking both for intriguing atmospheric samples that would inspire my music and for atmospheric samples that would provide some nice background noise. I had never experimented with randomness in my music before, so I was a bit unsure of myself. Pretty early on, however, I found that the samples that would work best were ones that suggested a musical key yet weren’t restrictive (they would work harmonically with every note within that key). I also selected a set of drum and percussion samples, as well as a melodic bell sample I made earlier this year in Logic.
Intro:
After opening Waveform, I dragged my atmospheric samples into the session and began developing ideas about their relative placement in the timeline. Before too long, I decided on a key and matched the BPMs of my samples. Some of the samples I didn’t alter at all, other than adding effects. A couple of plugins I used frequently were Tracktion’s Bit Glitter and Melt.
As I mentioned in my previous two Waveform projects, I really love these two plug-ins: when used together, they can warm up and almost “break down” a sample. Other samples I put into a multi-sampler patch, chopped up, and used to play melodic motifs with my midi keyboard.
Example of Sample I mapped in Multi-Sampler
I also added some perc loops, some of which I chopped up. The point of this section was to introduce the material with which I’d be working for the rest of the track, as well as set the mood. Thus, towards the end of the intro section, I introduced a pitched down and warmed up version of the main lead melody of the drop — a trimmed vocal sample I tossed in a sampler and played using my midi keyboard. Another notable aspect of this section is a flute line, which I played using a sampler.
Moving over to SuperCollider
At this point in the process, I moved to SuperCollider to generate semi-random midi. This process was composed of two parts — writing the actual random code, and setting up a line of communication between SuperCollider and Waveform. Professor Petersen shared with me instructions on both of these steps.
In terms of generating random data, I wanted SuperCollider to pick notes from within the scale I was working in, rather than from an arbitrary range of notes, which was a slight variation on the code that Professor Petersen shared with us. I achieved this by creating an array containing the notes of my preferred scale, then used inf.do and .choose to pick midi notes from the array at random. I attached a picture of the code below.
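The gist, in sketch form (the scale, note lengths, and velocities here are placeholders, and the IAC bus name may differ per machine):

(
// Connect to the IAC driver so Waveform can receive the notes.
MIDIClient.init;
~midiOut = MIDIOut.newByName("IAC Driver", "Bus 1");

// Notes restricted to the scale I was working in (placeholder pitches).
~scaleNotes = [60, 62, 63, 65, 67, 68, 70, 72];

~randomLine = fork {
    inf.do {
        var note = ~scaleNotes.choose;   // pick any note from the scale at random
        ~midiOut.noteOn(0, note, 100);
        0.25.wait;
        ~midiOut.noteOff(0, note, 0);
    };
};
)
// ~randomLine.stop;  // stop once enough midi has been recorded in Waveform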
Setting up a line of communication between SuperCollider and Waveform was rather painless, actually. I sent the output of my SuperCollider code through the IAC driver and then set the driver as the input for a Subtractive patch. That way, I could run the code in SuperCollider and the generated midi would immediately be heard in Waveform.
The plucky Subtractive patch you hear at the beginning and end is the midi I ended up using. I recorded about two minutes’ worth of midi and picked a section I thought would work well.
First Drop
I spent more time on this section than any other, by far. To start, I pulled up my bell percussion sample. I had a rough idea of what I wanted the section to sound like, drawing inspiration from Sam Gellaitry and Galimatias. The sample was already at 130 BPM, the tempo I often like to work at, so I didn’t have to do much manipulation other than chopping it at times to create a stutter effect. In terms of effect plugins, I added a bit crusher to make the sample more percussive, so it wouldn’t clash as much with the other melodic aspects of the track.
Next, I experimented with some vocal chopping and developed the chop countermelody that exists now. I knew this wasn’t the lead, but rather a complementary element.
Following this, I organized my percussive elements. The drum groove itself is pretty simple — I placed a clap on beats 2&4, and the hi-hats, which I recorded at the end, are sparse. Later on, I wanted more width in the track, so I automated panning the hats left and right. To highlight the bass hits, I coupled my 808s with kicks. I knew I wanted my 808’s to be a focal point of the piece, so I initially opted for a sample that was roundish and had a good amount of higher frequency presence. I noticed in my previous tracks that my subs had unnecessary low-end, so I threw on a high pass filter to do some light correction. A couple of days later, however, I ended up trimming some of the mid and high frequency of the bass. Aside from these melodic and drum elements, I spent a decent amount of time on little embellishments. These included chimes, sub-bass slides, risers, and impacts, which helped fill out the section.
Main Slide Lead
Surprisingly, I didn’t actually write the main slide lead until the very end of working on this section. I tried a lot of different sample chops and leads for the main melody, but none of them stuck until I came across the one you hear now. Initially, the melody was actually an octave down. It became apparent, however, that there was too much mid-frequency clutter, so I moved the melody up an octave. Because of the strange timbre of the lead, moving it up an octave led to some mixing issues. What ended up doing the trick was a high pass filter paired with an EQ that took out some mid frequencies and accentuated the highs.
First Drop
B Section
In this section of the piece, I tried to depart from the choppiness of the first drop by incorporating two sustained pads alongside a traditional lead. For my pads, I edited two instances of Subtractive and played a chord progression in the relative minor. I spent some time messing with the filters in Subtractive to place the pads correctly in the mix. I also used a variety of effect plugins, including chorus, phaser, and reverb. Some of these effects I used purely for their aesthetic, while others I used for their utility in the mix. The stereo fx and widener, for example, made room for key elements in the section (the lead and the bell loop).
Main Lead
I had a lot of fun with the main lead in this section: I was able to use my Arturia KeyLab, which has pitch and mod wheels. Editing the wheel data post-recording turned out to be a bit unintuitive, but I’m happy I incorporated it. I found my patch was a bit boring, so I added a phaser with a very slow rate and very little feedback, which only slightly altered the sound yet brought it to life. Other than that, the fx rack is pretty standard.
Main Lead
Bass
I decided to switch up the bass in this B section for the sake of variety. Not only did I change the bass pattern, but I also used a different sample. I chose a sub-bass that sat lower in frequency than the previous one, so it wouldn’t compete with my pads.
Other Elements
I brought back the bell loop in this section; this time I boosted some mid frequencies with an EQ to make the loop even more percussive. Other than that, I kept the same basic drum pattern, replacing the clap with a snare for variation. I also reintroduced a sample chop from the beginning to tie the intro to the B section.
Final Section
Final Pads
I wanted to wrap things up nicely in the last drop by combining elements from both the A and B sections. I kept the same structure and feel as the A section (choppy and fx-focused) while also reintroducing the chord progression from the B section using two new Subtractive patches. The first patch was a high-frequency phaser pad with a weird but cool envelope. The second was a standard Juno-ish pad I added for strength, since the first pad had a weak tail. At the very end of the track, I reverted to a mysterious and dark variation of the intro, primarily by using reverb and Melt to alter the samples.
Risers: pitch shifting and filter automation
One thing to note is that I had a really fun time experimenting with risers and section transitions. In most cases, I used traditional white noise risers, but I also developed other techniques to complement and even replace that effect. For example, before the last drop, I automated a high pass filter on a synth pad to build anticipation. This can be heard around 3:00. Another technique I used was pitch shifting: in some instances, I put a pitch shifter over my pad or sample and drew in a linear upward automation right before the drop, which made a makeshift riser of sorts. The last method used the “Redux” filter in Subtractive. I found that by opening a Subtractive lead patch and inputting a single midi note, I could assign and automate the Redux filter cutoff, resulting in a grainy, distorted sound that made for an interesting riser. This can also be heard around 3:00. Oftentimes, I combined two or more of these methods for my section transitions.
Redux Automation
Pitch Automation
Some Other Things I Did
-Sidechain compression. I used sidechaining on a variety of tracks in this project. I did, however, break the habit of putting sidechain on my bass. As Kenny Beats (and others) has reminded us many times, don’t sidechain the 808s.
Final Mix of the Master
I did a lot of mixing alongside my writing process, which paid off in the end. The only manipulations I did to the master were to add a limiter and an EQ, accentuating the high-end and cutting off some low-end. I’ve noticed that my headphones lean towards the brighter side, so I often end up making these tweaks.
I enjoyed producing this track. I was a bit hesitant at first, because the genre of music I usually attempt to produce doesn’t seem, at first glance, conducive to the kind of random midi generation that I know Flume, for example, incorporates in his music. Giving SuperCollider a scale to choose from did the trick, however, and I’m quite happy with how things turned out!
For our final project, Anjali and I created a song called One Sided Love in Waveform with an infusion of SuperCollider-generated sounds. Anjali did the SuperCollider part so for more on that, see her blog post! I’ll be writing more about the production in Waveform, which I did more of for this project.
Before we started, we decided we wanted a Bon Iver-inspired song. As a starting point for the chord progression, I used Bon Iver’s “____45______”; after many transformations, however, it’s no longer even recognizable as the original song. This chord progression is played on a twinkly lead pad – an edit of the 4OSC’s preset called “The Works”. The song begins with just this pad, and the chord progression remains (with just one variation in the middle) throughout the song. This is our biggest deviation from Bon Iver: even though his forms are free-flowing and individual sections are hard to discern (throwback to Deathbreast), Bon Iver tends to end his songs in a very different place from where they started. This is not the case for One Sided Love.
Here’s the lead pad:
There’s also a cosmic, droning pad that supports the lead pad near the beginning of the song. I really craved some massive chords, with lots of notes at a time between the bass and the two pads. I went to the piano and played the chord progression from the lead pad, and stacked other notes that sounded good on top. Those became the notes for the cosmic pad.
For this section, I was inspired by Grimes’ song Violence. I love her use of dramatic, abrasive pads in that song to build intensity. If you listen around 2:00, you’ll hear an instrument come in that I was attempting to emulate with my cosmic pad. It’s super loud and intense, and it’s satisfying to listen to it build up and then die away. To do this, I edited the 4OSC’s preset Geographic, which I think is by far the best preset in the 4OSC, and I highly recommend it to everyone.
I had fun with the automation on this instrument, too. From this screenshot the automation curve looks random, but it’s just a result of me moving around the dots until I liked how it sounded. I am automating the frequency and the result is that the instrument has lots of movement. I regret not automating the pan on this instrument as well.
Here is the cosmic pad:
Anjali wrote the lyrics to this song. I will put them at the bottom of this post, but in short, they are angsty. The song is not fully angry – it is about accepting the end of a relationship – but there is still some bitterness and angst even if it’s not full anger. For me this had a feeling similar to Troye Sivan’s album In A Dream. I attempted to produce the vocals in a way similar to his song Easy, and like a lot of his other music, the vocals are dreamy and sort of distant. Of course, I used reverb to do this.
Here’s a section of my vocals, although Anjali sings the second half of the song.
Anjali did part of the percussion for this song using the drum machine. I did the alternate percussion pattern, which comes in at the bridge. I used samples from my computer to make a swung, light percussion pattern. This served to break the repetition of the primary beat and differentiate the bridge from the rest of the song a little bit.
Bridge percussion:
For the bass, I used, yep you guessed it – the 4OSC. For a bass, it has a lot of high frequency sound, although I did some fun automation of the center frequency later in the song. There’s a track with a guitar hit from my sample library, which I actually used to determine the key of the song because I didn’t want to pitch shift the sample. Luckily, the key worked out okay for both Anjali’s and my range.
Though Anjali wrote the melody for this song, I changed it slightly on my verse so it would feel more natural. Working together was easier than expected. With the exception of the very first time Anjali sent me the zip file, collaboration was a super easy process: it took less than 60 seconds to import the new project at the beginning of a work session, and less than 60 seconds to compress and send the project back at the end. I think we split the work evenly, and though we delegated some tasks to just one of us, we were both capable of doing any part of this project.
In conclusion, One Sided Love is an angsty song about leaving a relationship behind. It has a track from SuperCollider (see Anjali’s post), and draws production influence from Bon Iver, Grimes, and Troye Sivan. I wish I’d put more variation into the end of the song – this would have worked fine at a length of 2:30 or 3:00, but our final version is 3:34. I also regret that my vocals are not at a consistent volume. I probably moved towards and away from the mic while I was recording, which I’ll be careful not to do in the future.
Here is the final track and the lyrics are below! Note that the maximum file size for this blog is 50MB and the song happens to be ~56MB, so this is a resized version of the wav export.
VERSE 1:
When you were falling down, I was there for you, I was there for you
When you were at your lowest point, who was there to guide you back home?
And now, when I’m in the same place, falling down in this deep abyss
Are you gonna be there for me? Are you gonna be there for me?
BRIDGE:
Tired of this trick,
I’m just a doormat to you, I see it now
Tired of being a fool,
When you don’t care about me like that
Why-why-why
Couldn’t you just say that you don’t care about me like that
I’m tired of being a fool, thinking I could get you to love me back
CHORUS:
Instead of leading me on
You don’t return my frequency
All that I gave, there’s nothing to receive
Why do I keep going back when there’s nothing here for me?
I should just move on
From this one-sided love (one-sided love!)
One-sided love (one-sided love)
I’m moving on from this one-sided love
VERSE 2:
I see how you look at the others
I know you would never see me
In that same way
Just tell me what I’m doing wrong
Tell me what I’m doing wrong
What should I change for you? To put me at their level
Quick note before getting into the post: I’m only now realizing how long this is, and the amount of audio and video clips and screenshots I’ve included is kind of insane. So feel free to skim through, or just read the concept section below to understand my concept.
Concept
As many of you probably know by now, I love using my own recorded samples in my songs. Something about taking an everyday object and transforming it into its own instrument is so fun and fascinating to me, so for my final project, I wanted to take it a step further and challenge myself to not use a single MIDI or electronic sound. All my sounds would be sampled from everyday objects around my house, or from a live instrument like my piano or ukulele.
Preliminary ideas for sounds I could use in my house!
Running off of that concept, I also sought to create a song with more of a meaning going in, as opposed to my last two projects. I eventually settled on splitting the song into two sections, one outdoors and one indoors, each composed only of everyday sounds and samples from that setting. For example, I couldn’t use the sound of a pot boiling for my outdoors section, or leaves rustling for my indoors section. I decided that I wanted the outdoors section to be louder and more chaotic, reflective of the chaos of the outside world and society as a whole, and the indoors section to be quieter and more intimate, sort of like life at home – the feeling of being wrapped up in a blanket next to a fireplace. Throughout both sections, I kept a few main instruments as a constant, mainly the piano, guitar, and my voice (yikes), sort of representing how music is the bridge between the two worlds for me as a form of expression and communication. Real deep, I know.
Structurally, I eventually designed the song as a whole to begin with a bang with the outdoors section, then slowly warp into the middle indoors section, which eventually builds into a dramatic, intense climax that combines elements of both the outdoors and indoors. A pretty standard arc, but I did find it to be a challenge to effectively and smoothly transition between the sections, given the great contrast in both sound and mood.
My entire Waveform project!
Recording
To achieve all of this, I logically had to start by recording. And with only my measly iPhone 7 at hand (although with the excellent Voice Record Pro app), I was nervous (I ordered a mic for home, but too late…). I certainly didn’t have a shortage of sounds around me, both in my house and in my backyard and surrounding woods, where it had just snowed. The challenge was to 1) narrow down the sounds that were actually useful, and 2) produce good quality recordings of them. This was difficult, especially with wind, trains, cars, and my neighbor’s lawn mower outside. Even inside, it was hard getting my family to be silent as I scurried around like a madman, tapping kettles and pans against the kitchen table. But I did try to use some of the close-mic techniques I learned from researching my iPhone mic all the way back in Homework 2, so that certainly helped. Over this week of work, I eventually recorded over 84 samples of everyday objects and instruments (not including maybe twice as many mess-ups and second takes). After a painstaking process of deciding which sounds to use and for what, I ended up using maybe less than a third of those. Welp.
Some of my samples!
Behind Each Track
I’m not sure exactly how to do this in a way that’s easiest to follow, but I thought I’d go through each of my tracks in the song and give the background of all the recorded sounds and samples that went into making them. Back again by popular demand are the original sounds, video reenactments, Waveform screenshots (when available; I wish I had taken more while making each individual sound for the drum samplers), and the end products. Unfortunately, I can’t include all 84 videos, although they are fun for me to look through.
Outdoor Drum Sampler
All of the sounds in this sampler were taken from a fun walk my sister and I took outside in the woods near our house, and in our backyard. I used mostly low pass/high pass filters, reverb, and compressors plug-ins to manipulate these sounds. I still do not have a MIDI keyboard, so I had to make do with my Qwerty keyboard to input my beats.
Kick: This is simply me dropping a rather large rock I found into a patch of dead grass. It was hard recording it without also picking up the brush of grass and snow, but after many, many takes, we got a good one.
Snare: This is a combination of me hitting a solid block of snow with our shovel (the thump), and my shoes crunching in the snow (the tzzz of the wires). Also a tough one to record well and to manipulate into something resembling a snare.
Hi-Hat: This is me breaking a small dead branch. Surprisingly the branches in our area all broke rather dully, except for this single clean break that I managed to pick up.
Crash: This is me smashing an icy piece of snow against a rock. I now admit I may have cheated my indoor/outdoor rule (only on this one though!) and layered sounds of me ripping paper and drawing with a Sharpie to make the sound more convincing.
Toms: This is me smashing an icy piece of snow against the backboard of my basketball hoop. I pitch shifted it to create a high, mid, and low tom.
Indoors Drum Sampler
Again – a lot of low pass/high pass filtering, reverbing, delaying, chorusing, and compressing here.
Kick: This is me hitting a large candle against our carpet floor. It achieved the deep, muffled sound I was looking for.
Snare 1: This is me clicking a small wooden jewelry box from my mother against our marble kitchen counter.
Snare 2: This is a combination of me slamming our metal kettle against the kitchen counter (for the metallic, icy sound I wanted) and me slamming my toilet seat shut (to beef up the actual hit of the snare).
Hi-Hat: This is a combination of me clicking two marbles together (for the main sound) and me flipping the light switch on my lamp (for more depth and fullness). In general (for both outdoors and indoors), it was difficult to control the sound of the hi-hat, as it often cut through everything and stood out like a sore thumb. Eventually, with some slight panning (as hi-hats often get) and added plug-ins on the original sample, I was able to remedy this issue.
Pedal Hi-Hat/Brush: I started out envisioning this sound like a brush, but eventually it turned into something resembling a pedal hi-hat. This was me spraying our bottle of alcohol spray.
Pianos
I recorded piano parts for both the outdoors and indoors sections. I had to do a lot of experimenting and adjusting in terms of where to place my phone to record, and at what angle, to get just the effect I wanted. The outdoors piano isn’t that interesting: I just applied a lot of reverb, chorus, and some phaser to give the illusion of the piano being played outside (combined with my outdoors ambience). The indoors “piano” is a little more interesting, as I mimicked the sound of a synth using heavy low pass filtering and chorusing, and by reversing each clip of the chords I played. I had to use a LOT of automation on this, as reversing the chords and cutting and splicing clips created a lot of pops, which I had to conceal by manipulating the volume. What a pain. Someone please tell me how to copy automation points.
Guitar/Ukulele
Not the most interesting part, given that I can’t play guitar and had to break my self-imposed rules yet again and dig into the Smooth R&B Guitar sample library. I used pitch-shifted snippets and spliced pieces of two main loops, putting in high pass filters to make the guitar sound more tinny and distant when it first comes in, as well as reverb later on at the end for a Western-y effect. The random ukulele outro that I added on a whim a day before this project was due, though, was actually recorded by my sister. She’s also not the greatest at it, so we recorded each individual chord and note on their own while looking up chords online, and I pieced it together from there (along with high pass filtering and reverbing to make it sound more distant and outro-y).
My sister and my makeshift studio, complete with online chord diagrams
Bass
Both the indoors and outdoors bass were just me playing in the lower register of the piano. I really wanted to find a natural sound that could mimic a bass (I tried the hum of my microwave, my printer, my electric toothbrush), but nothing sounded right. So I gave up and just ran my piano through a lot of plug-ins, notably a low pass filter and a cool distortion plug-in called Bass Drive, to make it sound convincing. Using the piano did make it easier for me to play more complex patterns and basslines in one take rather than having to piece every note together one by one with an everyday sound. This track was also one of two instances (the other being the ukulele tracks) of me using a bus rather effectively! Yay!
Ambience
I used four main ambient recordings of my own: a general recording of the outdoors (leaves rustling, birds chirping, all that) to create an outdoors atmosphere for the beginning and also to transition into the ending; a recording of my mother’s soup boiling to create a warm, home-y feeling for the beginning of the middle indoors section; and a gong-like effect created with a combination of me tapping two wine glasses together and ringing my sister’s singing bowl (which is usually used for meditation) to signal the transition from the outdoors intro into the middle, quiet indoors section.
Lead Fake Out-of-Tune Trumpet
Now this is my favorite. I struggled for the longest time to find the right instrument/sound to play my lead melody. I really did try everything, including my electric toothbrush again, my sister making a farting noise with her hand, door hinges creaking, and assortments of bleeps and bloops from machines around the house, but nothing seemed to fit, be capable of producing a melody, or resemble anything like an instrument. So I gave up and just sang it, and then heavily distorted and filtered it, among other plug-ins, to make it somewhat resemble an out-of-tune trumpet. I really did try to sing in tune, and tried to use pitch shifting to correct some of it, but I eventually just decided to let it be in all of its imperfect glory. Something about it being slightly off rhythm or out of key at times is perfect for me in this song, especially given that everything around it is rather polished and in key. There’s a metaphor here that I won’t try too hard to force about me and my place in this world being just like this out-of-tune fake trumpet, yada yada. I’ll leave it up to you.
Mixing and Mastering/Final Product
I spent a LOT of time on the fine details of this track compared to my previous projects, especially with mixing, which would make it even more embarrassing if the mixing were bad. I paid attention to mixing throughout, rather than doing it all at once at the end, along with other detail-oriented tasks I’ve usually saved for the end: EQ’ing, panning, randomizing velocities for MIDI, getting rhythms precise (quantizing for MIDI and entering specific measure values for everything else), creating automation curves, editing the samples themselves within the drum samplers, etc.
I also did a lot of listening through and obsessing over painstakingly small details, even after taking a day off to rest my ears, and sent drafts to friends throughout so that they could pick up on details I might’ve missed. A snare hit that may have been slightly too loud. A guitar strum that seemed to clip a little bit as it was cut off. A piano chord that seemed to be just a little bit off rhythm. An ambient sound effect that could be panned just a little more to the right. Stuff like that. I exported the entire song several times, each time listening to it out loud, through headphones, and through a speaker. I was VERY careful (throughout the entire process) to make sure nothing was redlining, so as to avoid clipping or distortion in the final exported versions, given that this was a problem with my last project even though I paid a lot of attention to mixing and mastering then too. I also loaded it into Audacity just to make sure the amplitudes were in check, and added a compressor/limiter to my master track to be safe.
An example of helpful suggestions from friends on details I might’ve missed!
Double-checking for clipping in Audacity
Conclusion
Generally, I am satisfied with how it all turned out. I spent a lot of time on this, and I think I’ve come up with a piece of work that has meaning to me, and is pretty unique in terms of its concept and sound. It’s really made me appreciate just how much can be achieved without having much access to real instruments like drums, and how there is music in many everyday objects in our lives, waiting to be unlocked and found (how poetic). I had a few goals from my last Waveform project for this final, and I think I achieved most of them:
1) import sampled sounds to create my own drum sampler with more unique sounds that I have more control over (and can actually apply plugins to) – check (very glad I was able to achieve this as it was so cool and fun)
2) employ buses more, just for convenience sake – check (somewhat? But also didn’t need them as much)
3) figure out the balance between the lead and the beat mixing-wise, as I feel like I may have overcompensated a little bit in this project (last project, I had problems with the beat overpowering my lead), and I struggled with the balance again this time, between the guitar and the beat – check (I definitely had a far better sense of mixing, and actually went back to edit my samples within the drum samplers to achieve this balance)
5) experiment with more changes and variety in chord progression, so it isn’t just the same progression over and over again (this would require me to rely less on samples) – check (definitely! Compositionally this piece was far more all over the place and complex in a good way I hope….)
Finally, it’s been such a good time learning along with all of you this semester, and to hear all of your amazing work. It’s so cool to be able to get to know people that are passionate and talented in so many different ways, and I’ve been so inspired. Thank you also (especially) to Professor Petersen for being a fun, cool, and understanding professor through these unpReCEdENteD TimES. Hopefully we all stay in touch and maybe can see each other in an actual studio/classroom at some point.