One Sided Love, Two Sided Project

Final Waveform Workspace

For our final project, Anjali and I created a song called One Sided Love in Waveform, with an infusion of SuperCollider-generated sounds. Anjali did the SuperCollider part, so for more on that, see her blog post! I’ll be writing about the production in Waveform, which was more my part of this project.

Before we started, we decided we wanted a Bon Iver-inspired song. For the chord progression, I started from Bon Iver’s “____45______”; after many transformations, though, it’s no longer recognizable as the original song. The progression is played on a twinkly lead pad – an edit of the 4OSC preset called “The Works”. The song begins with just this pad, and the progression persists (with just one variation in the middle) throughout the song. This is our biggest deviation from Bon Iver: even though his forms are free-flowing and individual sections are hard to discern (throwback to Deathbreast), Bon Iver tends to end his songs in a very different place from where they started. That is not the case for One Sided Love.

Here’s the lead pad:

There’s also a cosmic, droning pad that supports the lead pad near the beginning of the song. I really craved some massive chords, with lots of notes at a time between the bass and the two pads. I went to the piano and played the chord progression from the lead pad, and stacked other notes that sounded good on top. Those became the notes for the cosmic pad.

For this section, I was inspired by Grimes’ song Violence. I love her use of dramatic, abrasive pads in this song to build intensity. If you listen around 2:00, you’ll hear an instrument start that I was attempting to emulate with my cosmic pad. It’s super loud and intense, and it’s satisfying to listen to it build up and then die away. To do this, I edited the 4OSC’s preset Geographic, which I think is by far the best preset in the 4OSC and highly recommend it to everyone.

I had fun with the automation on this instrument, too. In this screenshot the automation curve looks random, but it’s just the result of me moving the dots around until I liked how it sounded. I automated the frequency, which gives the instrument lots of movement. I regret not automating the pan on this instrument as well.

Here is the cosmic pad:

Anjali wrote the lyrics to this song; I’ll put them at the bottom of this post. The lyrics are angsty – the song is not fully angry, since it’s about accepting the end of a relationship, but there is still some bitterness and angst. For me, this had a feeling similar to Troye Sivan’s album In A Dream. I attempted to produce the vocals in a similar way to his song Easy: like a lot of his other music, the vocals are dreamy and sort of distant. Of course, I used reverb to do this.

Here’s a section of my vocals, although Anjali sings the second half of the song.

Anjali did part of the percussion for this song using the drum machine. I did the alternate percussion pattern, which comes in at the bridge. I used samples from my computer to make a swung, light percussion pattern. This served to break the repetition of the primary beat and differentiate the bridge from the rest of the song a little bit.

Bridge percussion:

For the bass I used – yep, you guessed it – the 4OSC. For a bass, it has a lot of high-frequency content, and I did some fun automation of the center frequency later in the song. There’s a track with a guitar hit from my sample library, which I actually used to determine the key of the song because I didn’t want to pitch-shift the sample. Luckily, the key worked out okay for both Anjali’s range and mine.

Though Anjali wrote the melody for this song, I changed it slightly on my verse to feel more natural. Working together was easier than expected. With the exception of the very first time Anjali sent me the zip file, collaboration was a super easy process: it took less than 60 seconds to import the new project at the beginning of a work session, and less than 60 seconds to compress and send the project back at the end. I think we split the work evenly, and though we delegated some tasks to just one of us, we were both capable of doing any part of this project.

In conclusion, One Sided Love is an angsty song about leaving a relationship behind. It has a track from SuperCollider (see Anjali’s post) and draws production influence from Bon Iver, Grimes, and Troye Sivan. I wish I’d put more variation into the end of the song – it would have worked fine at a length of 2:30 or 3:00, but our final version is 3:34. I also regret that my vocals are not at a consistent volume. I probably moved towards and away from the mic while recording, which I’ll be careful not to do in the future.

Here is the final track, and the lyrics are below! Note that the maximum file size for this blog is 50 MB and the song happens to be ~56 MB, so this is a resized version of the WAV export.

VERSE 1:

When you were falling down, I was there for you, I was there for you

When you were at your lowest point, who was there to guide you back home?

And now, when I’m in the same place, falling down in this deep abyss

Are you gonna be there for me? Are you gonna be there for me?

BRIDGE:

Tired of this trick,

I’m just a doormat to you, I see it now

Tired of being a fool,

When you don’t care about me like that

Why-why-why

Couldn’t you just say that you don’t care about me like that

I’m tired of being a fool, thinking I could get you to love me back

CHORUS:

Instead of leading me on

You don’t return my frequency

All that I gave, there’s nothing to receive

Why do I keep going back when there’s nothing here for me?

I should just move on

From this one-sided love (one-sided love!)

One-sided love (one-sided love)

I’m moving on from this one-sided love

VERSE 2:

I see how you look at the others

I know you would never see me

In that same way

Just tell me what I’m doing wrong

Tell me what I’m doing wrong

What should I change for you? To put me at their level

Oh I don’t know I don’t care anymore

But why do you still make me feel this way?

Hard to Leave You

Going into this project, I aimed to finally make a song that sounds good without an obscene amount of reverb. Reverb makes sounds that are not well recorded or well chosen sound less bad when put together, and I particularly love songs with extremely open, reverb-heavy sounds. Both of these have led me to drown my projects in reverb in the past, so for this project, I decided to make sure the song sounded okay without reverb too.

The song is about two people I was close with in high school, and my feelings about leaving each of them behind when I came to college. My theme is leaving the past behind – because I am also leaving bad production techniques in the past!

The song started as just synth hits on the 4OSC, a synth bass from the 4OSC, and a kick sample and a snare sample that I acquired two years ago. I was feeling great about it – it was the best four-track song seed I’d ever made. I showed it to some friends, who said it reminded them of Charlie Puth. He’s one of my favorite pop producers, so this was a great compliment, and I decided to roll with it for the rest of the song. Below is the song after one day of work (plus the effects I later added to these tracks).

 

I made extensive use of lowpass filters on this song. I wanted it to have a dark, sultry, and reminiscent mood, and putting lowpass filters on many tracks helped me achieve this. I put one on both of the drum tracks and multiple other miscellaneous tracks. I also think that automating a lowpass filter sounds really good in intros and outros (for a fade-in or fade-out) although I’ll admit it is a little bit gimmicky and I don’t normally hear it in real songs.

My final Waveform 11 workspace with all tracks!

I had a great relationship with the 4OSC synth in this song. The presets were enlightening about how the 4OSC works: I spent a lot of time isolating one of the waves in a multi-wave preset to see how they interacted with one another. I ended up using only modifications of presets in this song, but I now feel like I could make a decent synth sound on my own with the 4OSC.

My lowpass cutoff frequency automation on the 4OSC synth during the chorus.

Subtractive was much more difficult to use. I wasn’t skilled enough to create a sound in Subtractive that fit with the rest of my song – no matter what I tried, it sounded too grating. By this point I also had plenty of synths from the 4OSC, so I went for white noise in Subtractive instead. I made a sweep with it that I used a few times in my song by automating the cutoff frequency of a lowpass filter.
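Conceptually, that Subtractive sweep is just white noise run through a lowpass filter whose cutoff is automated upward. Here’s a minimal sketch of the idea in Python (my own one-pole filter and names, not Subtractive’s actual topology):

```python
import math
import random

def one_pole_lowpass_sweep(num_samples, sample_rate=44100,
                           start_cutoff=200.0, end_cutoff=8000.0, seed=0):
    """White-noise 'riser': a one-pole lowpass whose cutoff
    glides from start_cutoff to end_cutoff over the clip."""
    rng = random.Random(seed)
    out = []
    y = 0.0
    for n in range(num_samples):
        # Linearly interpolate the cutoff, like an automation lane.
        t = n / max(num_samples - 1, 1)
        cutoff = start_cutoff + t * (end_cutoff - start_cutoff)
        # One-pole smoothing coefficient for this cutoff.
        a = math.exp(-2.0 * math.pi * cutoff / sample_rate)
        x = rng.uniform(-1.0, 1.0)   # white noise sample
        y = (1.0 - a) * x + a * y    # lowpass step
        out.append(y)
    return out

sweep = one_pole_lowpass_sweep(44100)  # one second of riser
```

Rendered to audio, this gives the familiar riser effect: the noise opens up from a dull rumble to full brightness as the cutoff climbs.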

I wrote the lyrics and recorded them in the practice rooms in the basement of Pierson College. I recorded close-mic style so that I would not have to bring down the levels of all my other tracks by an extreme amount. Below is a section of the vocals with their final processing – WAY less reverb than they originally had, though looking back, I think I still could have backed off the reverb a little. Even without pitch correction, I actually feel okay about how these vocals turned out.

 

There’s also a vocal harmony that I recorded and two vocal effects – both “ooh”s. One of them you can hear at 0:19 on the audio at the bottom of the post. The pitch is atrocious because it’s not in my vocal range and I probably should have gotten a friend to record it. I still think it adds a nice touch and I drowned it in reverb, which I think was appropriate. It acts as sort of a whistle or a chime and it is supposed to sound really far away, like the whistle at the beginning of Magic by Coldplay.

In the future, I would like to find a better dynamic balance between the vocals and all of the other tracks. I think the vocals ended up too loud on the verses and too quiet on the choruses.

The drums are sparse, and I realize that there is no hi-hat in my song. I started with the kick and clap and moved on to other tracks, and I developed a dark tone that I decided sounded better with no hats at all. The drum tracks both have lowpass filters on them for this effect.

The tempo of the song is 90 BPM which is fairly slow for a song of this genre, but I have a quick melody and a steady sixteenth note synth bass, so the song does not feel slow.

Okay, below is the full song. My main takeaways:

  • I still need to rely less on reverb, and maybe experiment more with other effects – compression, chorus, delay, etc.
  • I learned how to use and automate a lowpass filter to sound super good (hopefully you agree!)
  • Gained lots of experience with the 4OSC, still need some practice with Subtractive
  • Learned how to make a bus

Cassidy – Battles with Waveform 11

Day 1: 

I began my Waveform journey on September 20th. I tried to find as much in common with Logic Pro X as I could, since I’m very familiar with that software, and I reached for Logic keyboard shortcuts many times throughout the process. My first instinct was to look for software instruments and create a MIDI track, but I failed to do so – which was probably for the better. Instead, I dug up some old samples I downloaded a few years ago and mostly used those in my project.

Day 2:  

On September 23rd I put audio files into my project. I started with a synth lead loop that I really liked and added drum one-shots until I had a beat. Unfortunately (and this happens a lot for me when I’m making music), I realized that the synth lead loop sounded similar to another song – Good Thing by Zedd and Kehlani. I couldn’t get it out of my head, and no matter how hard I tried to shift the song in a different direction, I just kept thinking of Good Thing. The sample is below. Since this made it extremely hard to think of other creative lines, I scrapped the synth loop and swapped it for a slightly slower (92 BPM) plucked guitar loop. I adjusted my drum beat slightly and made the kick hit on all four beats (I think this is called four on the floor) to keep the song sounding uptempo.

 

Day 3: 

First, I tightened the beat by putting EQ on each of the drum tracks which made it sound less messy overall. I thought that using only a clap as a snare sounded empty, so I backed it up with a snare drum as well. I put reverb on the clap to designate it as the wet part of the sound, and kept the snare drum dry so I had both wet and dry at once. 

The EQ I used on one of my drums. After learning more about filters, I am not sure that this was the best effect to use.

I added a crackle pad for my ambient sound and turned the volume way down. In the past I have gone a little crazy with ambient sounds in the background as a crutch, because the other tracks don’t sound like they fit together. Chirping birds is a personal favorite of mine, but in this song I went with the crackle to give it a lo-fi feel.

My volume automation curve on the crackle track. It fades out, then comes back in when the synth hits. This is a bit like sidechaining, which I will probably try to use next time instead.

Then began my battle with automation. I attempted to automate the volume of the crackle to cut out on the fourth beat every two measures right before the synth hit to give it emphasis. First I tried it intuitively but it wasn’t working, so I watched many videos. I was puzzled to see people doing the exact steps I was doing, but their automation was working and mine was not. In the end, it turns out I was doing everything correctly, but my automation was turned off. I must give credit to John Brockett for this idea. Eventually I found the green button, turned on automation, and breathed a sigh of relief.

The green button in the top left is what toggles automation on and off, in case anyone’s automation is mysteriously not working!

Day 4:

It was time to add vocals, which meant it was also time to decide what I wanted this song to be about. The slightly detuned synth hit was reminiscent of Glass Animals’ Dreamland album, which reminded me of a trip my friend and I went on to Vermont, when we listened to Dreamland nonstop. The acoustic guitar also brought me feelings of nature, so I wrote some lyrics about the Vermont trip and the sights we saw. The lyrics are mostly a list of the best things from the trip. We were at a woodworking school – Cassidy was a friend we made, and Swami G was a dog who lived in the school.

My vocal reverb

I recorded a few times in the bathroom since I knew I wanted the vocals to have more echo than the rest of the tracks. Unfortunately, recording far from my phone’s microphone made the recording quiet, and I had to bring down all the other tracks substantially – at least 12 dB. I decided instead to record very close to the microphone, and add reverb digitally to create the echo I wanted. I moved to the practice room in the basement of Pierson and this recording was successful. I doubled the vocal track and offset one track slightly, and pitched it up ten cents to make the sound fuller. I also designated one track as wet and the other as dry. I put chorus and reverb on the wet track and an EQ on the other track. 
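For reference, a detune of c cents corresponds to a frequency ratio of 2^(c/1200), so ten cents is a shift of roughly 0.6% – enough to thicken a double without reading as a separate pitch. A quick sketch (my own helper, not a Waveform feature):

```python
def cents_to_ratio(cents):
    """Frequency ratio for a pitch offset in cents
    (100 cents = 1 semitone, 1200 cents = 1 octave)."""
    return 2.0 ** (cents / 1200.0)

ratio = cents_to_ratio(10)  # the ten-cent detune on the doubled vocal
```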

In Logic, I’m used to using pitch correction on individual notes and a formant shifter to get my voice to sound better. This recording was not fun to listen to at first because there are no vocal enhancements on it at all; however, I’m desensitized to it at this point and am now okay with it. In retrospect, I buried the vocals in a little more reverb, and made them a little quieter, than I should have – just so I didn’t have to listen to the raw vocals.

To enhance the trance feeling of the song, I gave the plucked guitar loop a wobble by automating the pan in small oscillations from left to right.
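That wobble is essentially a slow pan LFO. Here’s a rough sketch of the idea (my own equal-power panner, not Waveform’s automation engine):

```python
import math

def pan_lfo(samples, sample_rate=44100, rate_hz=0.5, depth=0.3):
    """Split a mono signal into stereo, sweeping the pan position
    sinusoidally between -depth (left) and +depth (right)."""
    left, right = [], []
    for n, s in enumerate(samples):
        pan = depth * math.sin(2.0 * math.pi * rate_hz * n / sample_rate)
        # Equal-power pan law: map pan in [-1, 1] to an angle in [0, pi/2].
        angle = (pan + 1.0) * math.pi / 4.0
        left.append(s * math.cos(angle))
        right.append(s * math.sin(angle))
    return left, right

l, r = pan_lfo([1.0] * 44100)  # one second of a constant signal, wobbling
```

The equal-power law keeps the overall loudness steady while the sound drifts between speakers, which is why small oscillations read as a wobble rather than a volume dip.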

Overall takeaways from the project:

  • Getting the recorded audio to sound like it fit with all the samples was difficult. I eventually got there with some effects but without them, the vocals sound much closer than the rest of the tracks. This may be because the other tracks are already echoey. I would love to use a microphone that does not require close miking to get a decently loud recording – this would allow me to hear the character of the room in the recording.
  • This project was a discovery of many different plugins. I learned that a little bit of reverb goes a long way, and like Michael, I learned that pitch shift is very helpful for combining samples originally in different keys.
  • Waveform does not have software instruments! Unless I’m not looking in the right place. This was a tool I utilized a lot when working in Logic, so this was a new experience for me.

Here’s the final track! It is called Cassidy.

Microphones & Recording on iPhone XR

The iPhone XR has three microphones: one near the top of the front face of the phone, and two on the bottom edge. The bottom microphones are higher quality and are automatically used for recording in all of the apps I tried. I haven’t been able to figure out what the top microphone is for, but after reading Bill’s post, my best guess is that it’s for noise cancellation. The iPhone XR microphone can record at most 32-bit audio at 48 kHz, and has options for both mono and stereo recording. I determined this by testing different settings in my recording app and seeing which it permitted and which it did not.

Apps

I searched here for recording apps on iOS, and I only needed to try out two before I found one that had everything I needed.

  • Hokusai 2 records uncompressed, 32-bit audio and allows exporting in a .wav format, but it does not have an option to disable automatic gain control.
  • TW Recorder records uncompressed 32-bit audio and allows exporting in a .wav format, plus it has an option to disable iOS processing, which “automatically adjusts the microphone gain”. I believe this is an AGC control, so I have settled on TW Recorder. TW Recorder is great because it has a good user interface with editing tools, and allows both stereo and mono recordings at custom (up to maximum) sample rate and bit depth. Michael Lee recommended this app to me.
Settings in TW Recorder
Main interactive interface in TW Recorder

Generating Audio:

I used Audacity to generate a sine sweep from 100 Hz to 3 kHz over a period of 18 seconds at an amplitude of 0.8, and 18 seconds of white noise at the same amplitude.
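If you don’t have Audacity handy, the same test signals can be generated with a short script. Here’s a sketch using only Python’s standard library (the helper names and file names are my own):

```python
import math
import random
import struct
import tempfile
import wave

SAMPLE_RATE = 44100

def sine_sweep(f_start, f_end, seconds, amplitude=0.8):
    """Linear sine sweep; the phase is accumulated so the frequency
    glides smoothly instead of jumping between samples."""
    n = int(seconds * SAMPLE_RATE)
    samples, phase = [], 0.0
    for i in range(n):
        freq = f_start + (f_end - f_start) * i / n
        phase += 2.0 * math.pi * freq / SAMPLE_RATE
        samples.append(amplitude * math.sin(phase))
    return samples

def white_noise(seconds, amplitude=0.8, seed=0):
    """Uniform white noise in [-amplitude, amplitude]."""
    rng = random.Random(seed)
    return [amplitude * rng.uniform(-1.0, 1.0)
            for _ in range(int(seconds * SAMPLE_RATE))]

def write_wav(path, samples):
    """Export as a 16-bit mono .wav file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(s * 32767)) for s in samples))

out_dir = tempfile.mkdtemp()
write_wav(out_dir + "/sweep.wav", sine_sweep(100.0, 3000.0, 18.0))
write_wav(out_dir + "/noise.wav", white_noise(18.0))
```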

Recording method:

  • For my close recordings, I held the phone right above my left computer speaker so that the microphone was pointed directly at the sound source. I put my computer volume at about half of its maximum.
  • For my far recordings, I placed the phone slightly away (about an inch) from the left computer speaker, and laid it flat on my computer. I set my computer volume at about ¾ of its maximum or slightly higher.
  • My first location was in my dorm room with the door closed and the windows open. It’s a double, so the room is large.
  • My second location was in a small practice room in the basement of Pierson College. Since the microphone was extremely close to the speaker, I am guessing the size of the room did not affect the results.

These graphs were generated in Audacity using the “Plot Spectrum” function on a selected portion of each recording.

White Noise Observations:

  • All of the frequency analysis graphs of the recorded audio look significantly different from that of the sample audio
  • Though the results are not overwhelmingly clear, the close recordings appear to have slightly flatter graphs than the far recordings
  • All of the graphs peak around 8000 Hz, then quickly drop off

Sine Sweep Observations:

  • All of the graphs have a steep dropoff at 3000 Hz, the highest frequency in the sine sweep
  • The close recordings have slightly flatter regions from 100 Hz to 300 Hz than the far recordings
  • These graphs are all much smoother than the white noise graphs

The only trend with reasonably significant evidence among all of the trials is that close recordings produce flatter spectra than do far recordings. Here is the link to the sound files I used.
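Under the hood, Audacity’s “Plot Spectrum” is a Fourier transform of the selection. As a rough illustration of what those graphs measure, here is a naive DFT of my own (not Audacity’s actual implementation, which uses an FFT plus windowing and averaging):

```python
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum - conceptually what 'Plot Spectrum'
    shows. O(N^2), so keep N small; bin k maps to k * rate / N Hz."""
    n = len(samples)
    mags = []
    for k in range(n // 2):  # bins from 0 Hz up to the Nyquist frequency
        re = sum(s * math.cos(2.0 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = sum(-s * math.sin(2.0 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

# A 1 kHz tone sampled at 8 kHz: the peak should land in bin 1000 * n / rate.
n, rate = 256, 8000
tone = [math.sin(2.0 * math.pi * 1000.0 * i / rate) for i in range(n)]
spectrum = dft_magnitudes(tone)
peak_bin = spectrum.index(max(spectrum))  # bin 32, i.e. 32 * 8000 / 256 = 1000 Hz
```

Comparing such spectra for the close and far recordings is exactly the flatness comparison above: a flatter magnitude curve means the microphone and room are coloring the signal less.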

Japan Broadcasting Corporation – NHK Studio

The NHK Studio at its opening in 1955

The Japan Broadcasting Corporation (known as NHK, short for Nippon Hōsō Kyōkai) is Japan’s national radio and television network. Its relevance to computer music comes through the NHK music studio, which was founded in 1955. Upon its opening, the studio had a Melochord, a Monochord, and various oscillators, modulators, and recorders.

The NHK studio attracted composers from Japan and all over the world. Because of its location, it was uniquely situated to inspire foreign composers to draw Japanese influences into the music they composed. The NHK studio deserves a place in computer music history – NHK compositions like “X Y Z”, Telemusik, and the 1964 Olympic opening theme combined recordings of traditional sounds with synthesized ones in a novel fusion of time periods and cultures.

Telemusik is a piece created by German artist Karlheinz Stockhausen at the NHK studio. His goal? A piece of music that served as a universal language, bridging the gap between the cultures of Germany, Japan, and other parts of the world. Telemusik’s fusion of sounds from all over the globe makes it unique – Stockhausen aimed to find new sounds by combining recordings taken from far away from each other, for example, a rhythm of Hungarian music with a Japanese monk’s chant.

Cover art for Karlheinz Stockhausen’s Telemusik

Stockhausen used NHK’s six-channel tape machine to realize Telemusik. The NHK studio boasted a long list of impressive equipment for its time, including square-wave, sawtooth-wave, and pulse generators, as well as many filters and recorders. For Stockhausen, Telemusik was unique in that he intentionally left identifiable recorded sounds in the final mix rather than burying them in effects. Stockhausen’s experience traveling to Japan to record the piece is especially important in understanding the combination of sounds: as for other artists, the physical setting of the NHK studio inspired him to incorporate elements of Japanese culture into the music he created.

Japanese composer Toshiro Mayuzumi used the NHK studio to produce music that combined Japanese musical style with French, after having studied at the Paris Conservatory. His piece “X Y Z” was the first piece of Japanese music to use electronic tape manipulation techniques – a medium found in the early French electronic style called musique concrète. “X Y Z” included a section of pure electronic sound, a section of natural sound, and a section of traditional instrumental sounds. These three sections are easily identifiable in the following video, but remarkably, they are all part of the same piece.

This is another theme in music produced at the NHK. These pieces often feature multiple sections with extreme sonic variation, and yet they are synthesized into single compositions. In listening to pieces from the NHK, I noticed that it was common to find one completely electronic section, one natural section, and one traditional musical section – although many pieces feature combinations of these as well.

The opening ceremony of the 1964 Tokyo Olympics

The NHK produced music with influences from all over the world – including the US. Toshi Ichiyanagi, a Japanese composer, was inspired by a trip to the United States, where he experienced American pop, hip hop, and psychedelic music from the time period. In Ichiyanagi’s piece Tokyo 1.9.6.9, he utilizes a collage of recorded sounds, American music influences, and synthesized Japanese vocals. 

The Olympics are perhaps the quintessential example of cultures coming together, and unsurprisingly, the NHK made its way into Olympic history. On the same theme of connecting cultures as Telemusik and Tokyo 1.9.6.9, NHK commissioned Toshiro Mayuzumi to create a piece of electronic music for the opening ceremony of the 1964 Tokyo Olympics. Mayuzumi used NHK’s multi-channel tape machine to combine Japanese bells with synthesized electronic sounds – a fresh, modern sound that retained an element of Japanese culture.

The music produced at the NHK studio tells a story of cultural fusion and technical diversity. Its equipment and physical location made works like “X Y Z” and Telemusik possible – collaborations that would otherwise have been impossible.

Sources:

Loubet, Emmanuelle, et al. “The Beginnings of Electronic Music in Japan, with a Focus on the NHK Studio: The 1950s and 1960s.” Computer Music Journal, vol. 21, no. 4, 1997, pp. 11–22. JSTOR, www.jstor.org/stable/3681132. Accessed 9 Sept. 2020.

Cunningham, Eloise. “Music and the Olympics.” Musical America, www.musicalamerica.com/features/?fid=301.

Dayal, Geeta. “Stockhausen in Japan.” Red Bull Music Academy Daily, 17 Oct. 2014, daily.redbullmusicacademy.com/2014/10/stockhausen-in-japan. 

Kozinn, Allan. “Toshiro Mayuzumi, 68, Eclectic Composer.” The New York Times, The New York Times, 11 Apr. 1997, www.nytimes.com/1997/04/11/arts/toshiro-mayuzumi-68-eclectic-composer.html.