PROJECT OVERVIEW:
This project was meant to give off an eerie, off-putting, sad feeling (what better way to encapsulate midterm season, am I right?). I was very hesitant to include my vocals because I think my voice is annoying, but I decided to anyway in order to get that breathy vocal sound, which I was able to shape to my liking using Reverb. I really committed to this eerie idea once I found the Cloud 1 subtractive synth sound in the Ambience category while browsing the different subtractive MIDI presets, and it worked exactly as I wanted to create that odd feeling.
MIDI:
For MIDI, I chose to make four tracks. The first I called "scintillating," an arpeggiated subtractive synth that runs constantly throughout the piece except when it drops out during my vocals. The next MIDI track is the cloud ambience, which makes you feel like you are in outer space. The third and fourth MIDI tracks were combined, and were heavily influenced by our drum sampler MIDI tutorial lecture, which I rewatched ~many times~ to get this right: a drum sampler and a STEP track (I really liked using the STEP to add "flavor" to the drums).
For each of the MIDI tracks, I experimented with the 4-band equalizer, as hinted at in the prompt for this homework. Using the EQ, I was able to shave off the higher frequencies and avoid the clipping that happened in the last project I submitted (yay for progress!). This was especially important, in tandem with the compressor, for the drum sampler MIDI recordings, since the hi-hat's level would spike far too high on the meter in the bottom-left corner of Waveform PRO that shows the sound levels. With the compressor and EQ together, I was able to handle this problem. Below is a screenshot of the step and drum sampler pattern I created at the beginning of the track, where the step pattern does not continue at the end.

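As a side note to remind myself what the compressor is actually doing to those hi-hat spikes, here is a tiny Python sketch of the threshold/ratio math. This is not Waveform's code, and the threshold and ratio values are made-up examples rather than the settings I used:

```python
def compress_db(level_db, threshold_db=-12.0, ratio=4.0):
    """Output level (dB) for a given input level (dB), simple hard-knee compression."""
    over = level_db - threshold_db
    if over <= 0:
        return level_db                     # below threshold: left untouched
    return threshold_db + over / ratio      # above threshold: overshoot divided by the ratio

# A hi-hat hit peaking at -2 dB comes out around -9.5 dB, leaving headroom
# so the mix no longer pushes toward clipping; quiet hits are left alone.
print(compress_db(-2.0))    # -9.5
print(compress_db(-20.0))   # -20.0
```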
The other MIDI track I used was the Cloud 1 subtractive synth, which I put a lot of reverb on through the aux bus to make the "room bigger." The drum and STEP track was also connected to the aux bus. I also used the nonlinear space plug-in on the Cloud 1 subtractive synth, but I wasn't really sure I heard a difference. A really good compressor plug-in was the AUMultibandCompressor. MIDI was hard to navigate at first because, for the first two days, I thought something was wrong with my virtual MIDI input and with Waveform itself, since I couldn't input notes and hear the sounds. When I went back to the lecture, I was still confused about how to make those sounds. Finally, I realized I needed to set the output to the MIDI instrument and then play on the virtual piano that pops up on the left of the track.

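To make the aux-bus routing a little more concrete to myself, here is a rough Python sketch of the send/return idea: every track keeps playing dry, each also sends some of its signal to one shared bus, and a single reverb on that bus gets returned into the mix. The "reverb" here is just a crude echo stack I made up, nothing like the actual plug-in, and the levels are arbitrary:

```python
import numpy as np

def crude_reverb(send, sr=44100, delay_s=0.08, feedback=0.5, taps=6):
    """Very rough 'bigger room': a handful of decaying echoes (100% wet)."""
    d = int(sr * delay_s)
    wet = np.zeros_like(send)
    echo = np.copy(send)
    for _ in range(taps):
        echo = np.concatenate([np.zeros(d), echo[:-d]]) * feedback  # delay + decay
        wet += echo
    return wet

def mix_with_aux(tracks, send_levels, return_level=0.4):
    """tracks: equal-length mono arrays; send_levels: how much each track sends to the bus."""
    dry = sum(tracks)                                        # every track still plays dry
    bus = sum(s * t for s, t in zip(send_levels, tracks))    # one shared aux bus
    return dry + return_level * crude_reverb(bus)            # one reverb serves every send
```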
For the "scintillating" MIDI track, which was actually called "Comb Harp JH," I was able to create a pattern that repeats throughout the piece, cycling through its notes. Interestingly, this MIDI would not really stop playing even when I paused the track. I also used automation on most of the MIDI tracks, mainly volume and pan automation. The volume automation was used on the Comb Harp JH to quiet it down in the middle and then bring it back up. I also used volume, pan, and reverb automation on the vocal track. The most important plug-in for the aesthetic of the piece, I'd say, was the reverb, specifically tweaking the "room size," because that gave the Cloud 1 synth and the vocals their heady, airy quality and made them sound much more expansive and full. Below is the automation curve and pattern for the Comb Harp JH synth.
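To spell out what that volume automation is doing under the hood, here is a tiny Python sketch with completely made-up times and depth: a gain curve that dips in the middle and ramps back up. Waveform draws this as a curve on the track; the math underneath is just a time-varying gain.

```python
import numpy as np

def volume_dip(clip, sr=44100, dip_start_s=8.0, dip_end_s=16.0, dip_db=-9.0):
    """Fade the gain down to dip_db over the middle of the clip, then back up.
    Assumes the clip is longer than dip_end_s; times and depth are examples only."""
    gain_db = np.zeros(len(clip))
    start, end = int(dip_start_s * sr), int(dip_end_s * sr)
    mid = (start + end) // 2
    gain_db[start:mid] = np.linspace(0.0, dip_db, mid - start)   # ramp down
    gain_db[mid:end] = np.linspace(dip_db, 0.0, end - mid)       # ramp back up
    return clip * 10 ** (gain_db / 20)                           # dB -> linear gain
```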
For the vocal track, I made an interesting panning automation so that it sounds like I'm singing into different ears, moving from left to right as the vocals progress.
You can also see in this image that there is an overlap in the vocal tracks I recorded. This is because at one point I didn't really like the final vocal note I did, so I wanted to delete it. But because Waveform doesn't let you delete in between the tempo lines, I had an awkward gap between vocal words that I kept for about five days, because I didn't know what to do with it and thought I could just use echo or reverb to carry over it. Then I realized I could create a cool sound by overlapping the vocals, and it did just that! I was able to create a quick sense of call and answer, like a cry for help in the vocals, giving that lonely feeling.
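To connect the left-to-right pan automation above to something concrete, here is a small Python sketch of one common way to do it, a standard equal-power pan law, which may or may not be exactly what Waveform uses internally: the pan position ramps across the clip, and each position maps to left/right gains that keep the overall loudness roughly steady.

```python
import numpy as np

def pan_sweep(mono, start=-1.0, end=1.0):
    """Pan a mono clip from `start` to `end` (-1 = hard left, +1 = hard right)."""
    pos = np.linspace(start, end, len(mono))          # the automation "curve"
    angle = (pos + 1.0) * np.pi / 4.0                 # map [-1, 1] -> [0, pi/2]
    left = mono * np.cos(angle)                       # fades out as we move right
    right = mono * np.sin(angle)                      # fades in as we move right
    return np.stack([left, right], axis=1)            # stereo output: (samples, 2)

# stereo = pan_sweep(vocal_take)  # vocal_take stands in for the recorded mono clip
```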
SAMPLES:
For the samples, I used a synth sample called "Afloat Pad," on which I used the phaser plug-in (it's kind of like the 4OSC but easier to use, in my opinion) and the AUDynamicsCompressor. I think the 4-band equalizer is much better than the AUDynamicsCompressor, to be honest; I didn't really hear a difference with that plug-in. For the vocal samples, I recorded some onto my computer using the MacBook Pro mic (these were the quieter samples where I had to use the EQ to bring the volume up) and some on my phone (where I had to use the EQ/compressor to bring the loudness down). It was interesting how I could apply the EQ/compressor to different segments within the same track to produce different results.
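As a rough illustration of that "different segments, different treatment" idea, here is a Python sketch of simple peak normalization, a much blunter tool than the EQ and compressor I actually used: each clip gets its own gain so the quiet MacBook takes come up and the hot phone takes come down toward one target level. The -6 dB target and the clip names are made up.

```python
import numpy as np

def match_peak(clip, target_db=-6.0):
    """Scale a clip so its peak sits at target_db (dBFS)."""
    peak = np.max(np.abs(clip))
    if peak == 0:
        return clip                      # silence: nothing to do
    target_lin = 10 ** (target_db / 20)  # dB -> linear amplitude
    return clip * (target_lin / peak)

# laptop_take = match_peak(laptop_take)   # quiet recording gets boosted
# phone_take  = match_peak(phone_take)    # loud recording gets pulled down
```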
MAIN TAKEAWAYS:
Now I know how to manage volume levels and make sure no clipping occurs! Also, I don't really like the Distortion plug-in; I tried it once on the Cloud 1 synth, and it sounded so horrible I immediately needed to take a break from the project. Doing this project also made me realize how things can blend together in music if you spend a long time on the same thing, and I really needed to take frequent breaks to prevent "ear blindness." Still, it was fun messing around in Waveform again, and I'm so glad I'm learning so much from this class!
Below is the Final MP3 result. Thank you so much!