Adding MIDI in Waveform

Where to start?

To start this new Waveform project, I decided to take a little inspiration from the song Magic by Coldplay, which we listened to and analyzed in class. That is, I wanted to work on stacking different layers of sound on top of each other to create a full sound. I wasn't trying to build the song with the same slow build Coldplay used (I felt I had already done something like that for the first project), but I wanted to reach the same kind of peak they did, one that encompasses a wide range of frequencies.

I started my project by thinking about how I could construct multiple layers of tracks to find what I was looking for without ending up with a project that sounded disorganized and out of control. Listening to Magic, I realized it was best to keep the tracks as simple as possible, so I used fairly simple synth sounds and melodies (compared to the exotic laser sounds in my previous project). I also wanted to incorporate more vocals. Although I have a terrible voice and no confidence in my singing ability, it is luckily very easy to find vocal loops.

 

Loops and Samples

I started off this project by looking for different loops and samples online on Looperman.com. I found an artist who had made two different vocal loops as part of the same project of his, and I thought they went very well together. I added a third vocal loop from my music production software to add a new layer to the vocal group (the third vocal is a person saying "hey"). The drum and percussion sounds were mostly loops I made in a different music production software: my hi-hat loop, snare and kick loops, synth loops, and so on. For the hi-hat loop I varied the volume of the individual hits to give it a more natural feel, and in certain parts I pushed the volume to create a rising feel. I kept the snare loop quite simple so it wouldn't overpower the hi-hats I wanted to keep the emphasis on.

For the kicks, I had a total of three different tracks, each with its own kick sound. The first and second kick loops used exactly the same beat sequence, just played with different kick sounds. The first kick, the one I liked and wanted to feature, didn't pack enough of a punch in the low end (the 30 to 100 Hz range), so I used the second kick to fill in those frequencies. The third kick came in when I wanted to fill the low end even more: it is the heaviest of the three and carried the low frequencies when I layered all three kicks together. It had some distortion that I liked a lot but that didn't work in the context of my project, so I got rid of most of it, keeping just a little.

My project had five different synths: one Subtractive synth and four loops I made. When I created my Subtractive synth I immediately knew which other synths (from the software on my computer) I wanted to add and how they would fit into the project. Because they worked so well together, I ended up using a third synth to lead into the "chorus" parts of the project and a fourth as a lower, bouncy, wonky type of synth in the "verse" parts of the song.

The cooing sound in the project is a bird sound I found on Looperman.com that I added because I loved it so much. I used it mostly as a pre-drop sort of sound leading into the "chorus".

MIDI

For my MIDI tracks I wanted to use two completely different sounds to show how Subtractive can be used in a variety of ways. My first MIDI track was a Subtractive synth called Dark Drone, a synth that, when played in the higher octaves, gives a pretty mellow, easygoing feel, which I thought worked well with the hard-hitting kicks I wanted to use. I used EQ to bump up the 30 Hz to 2 kHz range to emphasize the droning of the synth. I used a simple melody, C1 / G1 / C2 / G1 / C2 / G1 and then C1 / G1 / C2 / G1 / C2 / D#2, because it went very well with the vocal loops I found.
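Just to illustrate the contour, here is a hypothetical SuperCollider sketch of that note sequence (in the project it was a MIDI clip driving Waveform's Subtractive synth, and the note durations here are made up):

```supercollider
(
// Hypothetical sketch of the Dark Drone melody's note sequence.
// MIDI note numbers: C1 = 24, G1 = 31, C2 = 36, D#2 = 39.
Pbind(
    \midinote, Pseq([24, 31, 36, 31, 36, 31,    // C1 G1 C2 G1 C2 G1
                     24, 31, 36, 31, 36, 39]),  // C1 G1 C2 G1 C2 D#2
    \dur, 0.5  // assumed half-second notes; the real clip's rhythm differed
).play;
)
```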

My second MIDI sound was a sort of sweeping sound (I'm not sure how else to describe it) called Thunder. It's as simple as it sounds: the synth sounds like thunder. I used it at the very beginning of each "chorus" as a kind of entrance to what comes next, and also as the outro of the song. After the buildup of the project, all instruments stop abruptly, and the Thunder sound leads out of the song.

Plugins and Mixing

 

The main plugin used, as usual, is Reverb. As a wise man once said, "You can never use too much Reverb" (Prof. Scott Petersen), and I have lived by those words in this project. I have Reverb on most of my tracks (I am pretty sure all but some of the kicks); in fact, I sometimes put Reverb on twice to see if anything cool would happen (cool things did happen).

Examples of settings I used for Reverb

 

I had EQ on all my tracks to emphasize the particular frequencies I wanted, since that is where I struggled in the first project. I made sure to emphasize the higher frequencies for the vocals and the lower frequencies for the kicks. The synths were more complicated because I had to adjust the EQ according to the kind of synth I was working with: on some I wanted a deeper feel (30-100 Hz) and on others a more mid-to-high range (700 Hz-2 kHz).

I also used the Chorus plugin on a few of my tracks, as well as a few "special" plugins, such as vibrato on the bird sound to give it a more electronic feel.

 

Automation

I didn't really use much automation; for some reason I felt like my project worked without a lot of it (this might be a grave mistake). I did use some panning on some of my drum sounds and vocals. The drum track panning is fairly obvious: we can hear it pan from right to left, almost as if it's spinning around your head. The vocal panning is less obvious. On the vocal track where someone says "Yeah", I used slight panning that diminishes over time, with the beat delay also affecting the "Yeah". I put one of the other prominent vocal tracks on the opposite side of the panning spectrum to create a sort of conversation between the vocals.

 

Errors and Obstacles

The main obstacle was my project being deleted on day three of work. After putting together the vocals and kicks, I realized I didn't have the Subtractive plugin in my Waveform, which made me realize I had been using Waveform Free this entire time. While trying to unlock Waveform Pro I ran into a few problems, which ultimately led to my project being deleted and my having to start again. Luckily I had my ideas and loops ready and was able to recreate what I had lost. Unfortunately, it took a lot of work and time to recreate all the EQ and plugin changes that came along with the work I had previously done.

Another problem was that my final project was using a lot of CPU during playback in Waveform and would sometimes freeze or lag. I realized the best way around this was not to put copies of the same plugin on individual tracks, but rather to route the tracks to a send bus and add the plugins there. This helped a lot, and my project no longer froze or lagged.

One of the main obstacles during this project was creating the MIDI tracks. I found it very hard to use my computer keyboard as piano keys and have the notes I wanted play at the right time. It wasn't until after I had finished the project that I realized I could import MIDI clips from other DAWs, which would have been a lot easier and would have let me use a lot more MIDI in my project.

My Project

 

A Ribbiting Final Project

The start of a fun project

For the last project of the year, I wanted to make something that would be the culmination of everything we learned in the first-year seminar. I knew right away that I wanted to make a Waveform project, because those are the projects I enjoyed making the most. I first needed to find a way to put together everything we've learned in order to make this project work. Recording and mixing would come naturally with the project, but incorporating SuperCollider less so.

I knew right away I wanted to make a very upbeat, fast-moving music project; this needed to be the celebration of a great first-year seminar, not a sad goodbye. I needed this project to have more energy than the last two and provide some "good vibes", as one would say. Now I just needed to figure out how to turn an abstract idea like "good vibes" into notes, frequencies, and sounds.

 

Loops and Samples

I first needed to do a little research on Spotify. I clicked on my happy songs playlist and narrowed the "good vibes" down to some tangible variables. I noticed that most of the songs have at least a simple, catchy melody and some hard-hitting drums (especially good snares and hard-hitting claps). I probably missed a lot of other key criteria, but those two things stood out the most to me. I started by looking through different synth sounds in an attempt to find something I could use. I originally wanted to use a synth sound from a loop I found on Looperman.com, but ultimately decided I would rather make my own. Most of the Subtractive synths didn't feel like the right fit for the project I had in mind, so I ended up making my own samples with a plugin synth from my music making software (another DAW). I made four different samples that all used the same set of chords, just with some notes made more staccato to add some punch.

As usual, I decided to make my own drum beats and percussive sounds because I am not a big fan of drum samples. I like to make the drum loops and samples myself (whether that's using MIDI or a beat sequencer), mostly because I give more importance to the beat than to the melody. In this project I tried to keep the idea of using some powerful snares and claps, so I ended up using fewer closed hi-hats. I ended up with three different snares that I switched around depending on whether the chorus or a verse was playing. As usual, the bulk of my drum work was in the kicks. I had a total of four different samples, each dominant in a different frequency range, so that layering them in the chorus wouldn't overwhelm the low end (the 30 to 100 Hz range). Although I didn't have many closed hi-hat samples, I did have a few open hi-hat samples (with only a few hits) as well as some rides that I may have overused at certain points, but I thought (and still think) it actually worked quite well.

The samples I ended up using for my project were of frogs croaking, as well as a ribbit sound from the website orangefreesounds.com:

 

To have some fun, I decided to throw in a curveball at the beginning of the project. I started it very slow, with what sounded like some sort of soundscape, and then quickly interrupted it to start the upbeat music I had prepared. I also used a few sweeping samples I found on Looperman.com to lead into or out of the chorus at the beginning.

 

Recordings

There aren't that many recordings in my project, but there are enough that they can't be ignored. I have no faith in my singing ability, so I did not create any long audio clips, but I had just listened to Kid Cudi's new album and was inspired by the ad-libs in his song Sad People. Although it is quite ironic to take inspiration from a sad song to make a happy one, I really wanted to have some fun making some ad-libs for my project. I sat in front of my iPhone microphone saying random things and making noises, only to be displeased with almost everything. After a few more attempts I was done and had a few usable audio clips.

The first recording is of me saying "nah… nah nah nah nah, we gotta change this up", which I used as the transition from my intro into the main part of my project:

 

I originally wanted to say "yeah" and have the switch happen right after, but I thought it would be more entertaining to talk a little more at the beginning and save the "yeah" for later. I think it made for a better transition overall, because I was able to have a moment where all the music stopped while I was talking. The second recording is of me saying "yeah", which I pitched down 2 semitones to create a deep voice.
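A shift of two semitones corresponds to a frequency ratio of 2^(-2/12) ≈ 0.89. I did the shift with a plugin in Waveform, but a rough SuperCollider sketch of the same idea might look like this (use headphones to avoid feedback):

```supercollider
(
// Hypothetical sketch: shift the microphone signal down two semitones.
{
    var sig   = SoundIn.ar(0);      // microphone input
    var ratio = 2 ** (-2/12);       // ≈ 0.891, i.e. two semitones down
    PitchShift.ar(sig, windowSize: 0.2, pitchRatio: ratio) ! 2
}.play;
)
```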

Because I was recording with an iPhone microphone, there was a lot of background noise getting in the way. After doing some research online, I found out that I could use Audacity (which I luckily already had downloaded) to reduce the unwanted noise. This helped a lot in making my recordings smoother and better sounding.

 

MIDI

I used Subtractive to create some sound effects in my project, including two different synth sounds in the intro for a strange effect. I also used Subtractive to make a bassline synth during the chorus, even though I don't really like using Subtractive to make the melodies for my projects; I always seem to use it more for interesting sound effects. One of the synths was called Mordor 2, and I thought it fit perfectly in the strange intro sequence with all the croaking frogs and the weird bowed strings. The synth used for the bassline in the chorus was simply a soft background synth that complemented the chords of the melody. I originally thought of adding an extra layer of synth in the higher frequencies (the 2 to 3 kHz range), but decided it was too much and ruined the overall feel of the song.

 

SuperCollider

SuperCollider was the spark for the great idea of having a strange intro to the song. While looking through different sounds, I came across the class BrownNoise.ar (a subclass of WhiteNoise.ar), which generates noise with a spectrum that falls off in power by 6 dB per octave.
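Played on its own it is a deep rumble; a minimal example of this class being used (a sketch, not the exact code from my project):

```supercollider
// Brown noise by itself: darker than white noise, 6 dB per octave roll-off.
{ BrownNoise.ar(0.2) ! 2 }.play;
```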

Using that with OnePole.ar, a low pass filter:
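With a coefficient close to 1, OnePole darkens the noise further; a minimal example of it in use (the coefficient here is just an illustrative value):

```supercollider
// OnePole as a low pass: coefficients near 1.0 remove more of the highs.
{ OnePole.ar(BrownNoise.ar(0.5), 0.95) ! 2 }.play;
```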

and with a resonant high pass filter:
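SuperCollider's standard resonant high pass is RHPF; a minimal example of it used together with a SinOsc sweeping the cutoff (placeholder frequencies, not my project's values):

```supercollider
// RHPF: a resonant high pass whose cutoff is swept by a slow SinOsc.
// A small rq value makes the resonance ring more strongly.
{ RHPF.ar(BrownNoise.ar(0.3), SinOsc.kr(0.3).range(300, 2000), 0.1) ! 2 }.play;
```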

I was able to obtain a sort of bubbling sound:
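Putting the pieces together gives something in this spirit (a sketch close to SuperCollider's classic "babbling brook" example, not my exact code; the modulation rate and cutoff range are assumptions):

```supercollider
(
// Bubbling texture: darkened brown noise through a resonant high pass
// whose cutoff wanders, so little resonant peaks pop out like bubbles.
{
    var noise  = OnePole.ar(BrownNoise.ar, 0.99);   // very dark brown noise
    var cutoff = LFNoise1.kr(4).range(400, 900);    // slowly wandering cutoff
    RHPF.ar(noise, cutoff, 0.03) * 0.4 ! 2
}.play;
)
```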

 

Plugins and Mixing

 

I didn’t go too crazy with the plugins and kept them quite simple. This is about the most I put on a single track:

I put a few extra plugins on my Subtractive synths, but mostly kept it simple for the rest, sticking to plugins like EQ and Reverb. I added a little Reverb to certain tracks but kept it to a minimum. I tried experimenting with reverb for this project, but I felt it altered the choppy feel of the main synth at times and didn't sit well with my recordings (which sounded better with low reverb). Because of this I mostly stuck to a very low reverb:

As always, I used the EQ to fix some problems with the frequencies of certain samples and sounds. For example, I needed to cut some of the upper frequencies (80 Hz to 100 Hz) of one of the kicks so it wouldn't overlap too much with another, and I made similar changes to the snares. I did use the low pass and high pass filter plugins at certain points, but the EQ was often enough. The one mistake I made at the beginning was not realizing that one of the Reverb plugins in my project actually came from another DAW on my computer. Luckily I caught it in time and replaced it quickly.

 

Automation

There was more automation in my project this time than last time. I used panning for the intro as well as a little in the chorus, though it isn't very strong. There is also some slight panning to the right on the "yeah" ad-lib, but that is the only recording I added panning to. I did not use my signature ping-pong panning effect. I mostly used automation to control volume levels, for example the very sharp cutoff of the intro, or the way the music fades out at the very end.

Errors and technical difficulties

This time I had far fewer technical difficulties than last time; I only had one big one. My Waveform project crashed while I was working on it, and when I reopened it half of my work was gone: some tracks were deleted, some automation, and a lot of plugins. I had saved my work prior to the crash. I always get scared of things like this, so I save my project after even the slightest change, but I think the problem had to do with my computer. It is old and has almost run out of storage, so I believe it was a problem on my end and not much to do with Waveform itself. I was able to put everything back together, but it took a while and was a pain. In the end not much else went wrong, so it wasn't terrible. My Mac kept sending me this:

 

Project

 

My Reverberating Waveform Project

Due to technical difficulties, I am unable to post on the course blog. Instead, I wrote my blog post in a Google document, which you can access here:

https://docs.google.com/document/d/1S1Ewt3FyhMPvJ6ZOQ2pzAAWNEgZuVPylrNsLvfLKPko/edit?usp=sharing

I am very sorry for the inconvenience.

iPhone 6s Microphone Test

Research

Unfortunately, Apple does not release detailed specs for its devices on its website. There are third-party websites with what seems like credible information, but even then the iPhone 6s is an outdated model that can be considered obsolete compared to the newer iPhone models. What I can say is that the iPhone 6s has three microphones: one on the back of the phone by the camera lens, one on the front of the phone by the camera lens, and one on the bottom of the phone (on the left). According to Apple, the phone should have a somewhat flat frequency response from 10 Hz to 10 kHz, suited to recording speech. I was unable to find any other pertinent information.

Record

The app used to record the sine sweep was Auphonic Recorder.

The app allows the user to:

  • Turn the gain on or off
  • Choose which microphone to use
  • Choose the format (AAC audio or PCM/WAV audio)
  • Choose the recording bit depth (16 or 24 bits)
  • Set a sample rate of up to 48 kHz

For this recording, the specific settings chosen were:

  • Gain: off
  • Format: WAV
  • Sample rate: 48 kHz

The program I used to analyze the sweep was ocenaudio. I analyzed a 15-second recording with an FFT on a linear scale.
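The sweep itself is just a sine tone rising through the audible range; for reference, a comparable 15-second test signal could be generated in SuperCollider like this (a hypothetical sketch, not the actual test signal I used; the 20 Hz to 20 kHz bounds are assumptions):

```supercollider
(
// Hypothetical sketch: a 15-second exponential sine sweep, 20 Hz to 20 kHz.
{
    var freq = XLine.kr(20, 20000, 15, doneAction: 2);  // frees the synth when done
    SinOsc.ar(freq, 0, 0.3) ! 2
}.play;
)
```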

At first, I used my Bose speaker to play the sine sweep and recorded it with my phone about a foot from the speaker (the volume of the speaker was actually pretty loud). After using ocenaudio to analyze the recording, I got this:

[could not manage to upload graph]

According to the FFT analysis, the frequency response of the built-in iPhone 6s microphone (the bottom one) is quite flat up to about 1 kHz, but not very flat at all beyond that: there are significant dips at, for example, 6 kHz, 8.5 kHz, and 11 kHz. I wanted to try recording again with the phone closer to the source, to see if the dips were caused by faint outside sounds.

I did another recording, this time putting my phone between my Sennheiser headphones so that the iPhone 6s microphone could be as close as possible to the source. This time the FFT analysis gave this:

[could not manage to upload graph]

Here we see that by moving the phone closer, the frequency response of the sweep is flatter from 100 Hz to 9 kHz, but it still dips down significantly after 10 kHz.

Report

The iPhone 6s is an older iPhone model that does not reproduce sound as well as newer models or other phones; when recording at a distance it is inaccurate. It does, however, reproduce sound very well when the source is very close and within the 10 Hz to 10 kHz range. This shows that the iPhone 6s is not a phone meant for recording music or singing; it is strictly a phone built for calls, made so people can clearly hear each other (speech falls very much within the 10 Hz to 10 kHz range). This all makes sense for an older iPhone model: it gives up recording capability at various frequencies in order to capitalize on the frequencies used to communicate through phone calls.

CSIRAC: The First Computer to Play Music

When talking about computer music, where better to begin than with the very first computer to produce music? The CSIRAC (Commonwealth Scientific and Industrial Research Automatic Computer) was Australia's very first digital computer, and the world's fifth stored-program computer. It was constructed by the CSIR (Council for Scientific and Industrial Research), by a team under the supervision of scientists Trevor Pearcey and Maston Beard. In 1949 it ran its first program, and less than two years later it played music for the first time.

Picture of Trevor Pearcey next to the CSIRAC (1952)

Initially known as the CSIR Mark 1, the CSIRAC was, in its time, at the forefront of technology: fast, efficient, and with a lot of memory for the era. Today it pales in comparison to the advances in computer technology achieved since. It was a computer that took up an entire room and, by today's standards, was actually slow (1,000 cycles per second), with very little memory (about 3 KB of disk memory). Information was input into the computer on long punched paper tape, whose holes would later be converted into text on another machine. The computer scientists added an output speaker that made a sound once the computer had finished running a program, letting them know the task was complete. These output sounds are indeed the sounds later used to create music, but making music with them would prove a difficult challenge.

 

Close up of valves, part of CSIRAC Computer

The CSIRAC used mercury acoustic delay lines, meaning that a pulse (originally input into the machine via the punched paper tape) would be sent into a memory tube and travel back and forth inside it, allowing a number of bits and digital words to be stored in a single tube (about 20 memory tubes could function at any one time). Musically, unfortunately, this posed a problem. Not only did the computer process information slowly and with delay, but each mercury tube accessed memory at a different time, making it extremely difficult to perform any task in a time-critical manner, such as playing music.

The first to put the CSIRAC to the musical test was Geoff Hill, a mathematician with a strong musical background: his mother was a music teacher, his sister a performer, and he himself possessed the rare ability of perfect pitch. This would prove useful when attempting to create music with the CSIRAC, a machine that created sound by sending raw pulses through the computer to the output speaker. The pulses would need to be carefully programmed to avoid random jabbers of sound. Hill realized that to create music, he would need to send the pulses in a structured way to produce a steady pitch. Naturally, this was easier said than done, considering that each memory access took a different amount of time.

After a long period of trial and error, Hill was finally able to use the CSIRAC to play tunes from existing songs, such as Colonel Bogey and The Girl with the Flaxen Hair, making the CSIRAC the first computer to play music.

CSIRAC’s rendition of the Colonel Bogey tune

Unfortunately, none of the tunes played were recorded at the time, and the CSIRAC was soon redirected to other areas and sciences and no longer used to play music. However, in 2018 the University of Melbourne used hardware similar to the CSIRAC's (the same mercury tube system) to run the old programs. Scientists read all of the punched paper tape and input all of the data from Hill's work, which did in fact reconstruct the pulses accurately and play the reproduced tunes.

The work of the CSIR and Geoff Hill helped open up a whole new way of making organized sounds and tunes. Although Hill did not create an extensive library of music or compose new pieces with the CSIRAC, he was still able to create something that had not been done before, paving the way for the work of composers such as Lejaren Hiller (Illiac Suite, 1957) and Max Mathews, who began treating the computer as a true musical instrument.

Video with additional information about CSIRAC and some footage of the machine and the scientists involved in the work at CSIR

Information Sources:

Busch, Paul. "Pearcey Interview about CSIRAC Music." YouTube, uploaded by pauldoornbusch, 28 September 2009. Accessed 9 September 2020. https://www.youtube.com/watch?time_continue=261&v=slr75sLhOCs&feature=emb_logo

"CSIRAC: Computing in Australia begins with CSIRO." CSIROscope. Accessed 9 September 2020. https://blog.csiro.au/csirac-computing-australia-begins-csiro/

"CSIRAC." Wikipedia, Wikimedia Foundation. Accessed 9 September 2020. https://en.wikipedia.org/wiki/CSIRAC

"History of Internal Memory Technologies." Memory and Database, 18 November 2006. Accessed 9 September 2020. http://memoryanddatabase.blogspot.com/2006/11/history-of-internal-memory.html

"CSIRAC – The first computer to play music." YouTube, uploaded by ABC Science, 6 May 2015. Accessed 9 September 2020. https://www.youtube.com/watch?time_continue=261&v=slr75sLhOCs&feature=emb_logo

"CSIRAC, Australia's first computer." The Science of Everything: COSMOS, 3 August 2018. Accessed 9 September 2020. https://cosmosmagazine.com/technology/csirac-australia-s-first-computer/

"CSIRAC – Colonel Bogey." YouTube, uploaded by TortoiseWrath, 1 February 2016. Accessed 9 September 2020. https://www.youtube.com/watch?time_continue=261&v=slr75sLhOCs&feature=emb_logo