Waveform Final Project: Masks

With this final composition, my main goal was to create a bittersweet piece that I myself would return to whenever I feel a little down in the future. From a technical perspective, I aimed to apply all the skills I had acquired from the SuperCollider module to my song.

Currently, my default “simp song” is Pressure by Draper.

This song’s balance between sadness and motivation works wonders in encouraging me to carry on in tough times. Musically, I’d pinpoint the reason behind its effectiveness to the fullness of its sound. Before taking CPSC035, I never noticed all the harmonies and pads in the background that fill in this song’s blank space; I always just heard the lead, bass, and percussion. Though I am still unable to pinpoint the exact number of tracks and their respective notes throughout the song (although I suppose that’s also a sign of a good pad), I certainly planned to have ample padding in my own composition. Another aspect of this song that I wanted to carry over was the tranquil break between powerful drives.

With respect to the more technical side of this song, there are a few things I learned throughout the SuperCollider module (some are SuperCollider related, others are just me pondering about Waveform in the shower):

  1. You don’t have to find the perfect sample for what you’re looking for. I specifically struggled with this in my first two Waveform projects, spending hours trying to find just the right kick, snare, and hi-hats before being left dissatisfied and just using the 808 and 909 drum kit samples instead. Yet in SuperCollider, the fact that I was able to program bass and snare sounds from just simple waves and a few effects proved to me that the sample only has to be close to what is desired, after which using an abundance of effects is completely acceptable.
  2. Use automation curves to apply effects to certain notes. In my last Waveform project, I really struggled with automation curves because I was never able to figure out how to automate effects parameters. It turns out I have to first drag the effect into the effects chain before I can select its specific parameter for automation (I thought you had to make the automation curve first, then map it to a parameter). Now with the ability to automate effects parameters, I was able to selectively apply effects to certain notes. For example, if I wanted to add reverb to only the last note of a musical phrase, I could use an automation curve to turn the reverb’s wet level down to 0 for all notes except the last one.
  3. Use automation curves to give sounds more character. One really cool thing we learned in SuperCollider is how we could use oscillators to modulate certain parameters on an effect. Because of this, I also aimed to use automation curves to mimic that oscillator modulation on some of my plugin parameters.
  4. Envelopes: To be completely honest, I didn’t really have a full understanding of how envelopes functioned, or what the difference between ADSR and perc envelopes was. Yet through SuperCollider, the whole concept of treating an envelope as a time-based function that modifies its input signal really helped me understand. The most helpful was the assignment where we had to create our own subtractive synth: while juggling the difference between ADSR and perc envelopes was really frustrating, it undoubtedly helped me understand envelopes in general (see the sketch just after this list).
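
As a refresher for myself, below is a minimal SuperCollider sketch of the two envelope ideas above: a kick built from a plain sine wave and a percussive envelope, and a pad held open by an ADSR envelope. It assumes a booted server, and the SynthDef names and parameter values are purely illustrative.

```supercollider
(
// Percussive envelope: fixed attack and decay, frees the synth when done (good for kicks and snares)
SynthDef(\kick, { arg out = 0, amp = 0.5;
    var env = EnvGen.kr(Env.perc(0.001, 0.3), doneAction: 2);
    var sig = SinOsc.ar(XLine.kr(120, 40, 0.3));   // quick pitch drop gives the "thump"
    Out.ar(out, (sig * env * amp) ! 2);
}).add;

// ADSR envelope: sustains at a level until the gate closes (good for pads)
SynthDef(\pad, { arg out = 0, gate = 1, freq = 220, amp = 0.2;
    var env = EnvGen.kr(Env.adsr(0.5, 0.2, 0.7, 1.0), gate, doneAction: 2);
    Out.ar(out, (Saw.ar(freq) * env * amp) ! 2);
}).add;
)

Synth(\kick);        // one-shot hit
x = Synth(\pad);     // holds until released
x.set(\gate, 0);     // triggers the release stage
```

Both envelopes are just functions of time multiplied onto a raw waveform, which is exactly the mental model that finally made them click for me.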

The first thing I knew I needed to do in this song was to separate my different percussive instruments onto different tracks. On my last Waveform song, I used just the multisampler as an easy way to get by in a genre of music that usually has a repetitive bass, since I did not expect to do any percussive automation. However, I eventually ran into the problem, albeit too late, that using the multisampler meant that any plugin or effect I wanted to add to one instrument would have to be added to the rest of the percussive instruments. Thus, this time I created separate tracks for the kick, snare, hi-hat, and claves, allowing me to give each its own filters and automation tracks so that the percussion would have more life.

The second step in my song was to find a minor chord progression that would set the tone of the piece. The chord progression I ended up going with was C minor -> G minor -> Ab major -> Bb major, which follows the i-v-VI-VII progression. Right away, I wanted to test the capabilities of automation curves on more than just pan and volume, so I decided to incorporate a bass drop immediately in measure 10. Yet beyond just a gradual buildup in volume, I also wanted a gradual shift in the EQ. In order to apply this to both my Viola and Lo Pad, I routed both channels into a bus and applied a 4-band EQ with automation on the bus.
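
For my own reference, the progression itself can be sketched as MIDI chords in SuperCollider (assuming a booted server; the voicings below are illustrative, not the ones I used in Waveform):

```supercollider
(
// i-v-VI-VII in C minor, two beats per chord, played on the default synth
var chords = [
    [60, 63, 67],   // C minor  (i)
    [55, 58, 62],   // G minor  (v)
    [56, 60, 63],   // Ab major (VI)
    [58, 62, 65]    // Bb major (VII)
];
Pbind(
    \midinote, Pseq(chords, inf),
    \dur, 2,
    \amp, 0.2
).play;
)
```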

When thinking back to what made the bass drops in Draper’s “Pressure” so effective, I noticed that the main contrast was in the amount and level of padding. Yet, since I had already used a pad in my buildup, I thought, “Meh, I’ll add another pad.” Because the point of this pad was to act almost like the lead post-drop, I placed it in the sweet spot of frequencies that our ears are most sensitive to, which is to say around 400-800 Hz. However, having both the lead and the pad be the same instrument was quite a problem: the pad had to use an ADSR envelope, but such a powerful sustained lead was actually kind of painful to listen to. Thus, in order to emphasize the attack of the lead beyond just adjusting the ADSR levels, I added another track built from 4OSC, with sine and triangle waves inside a percussive envelope, repeating the same notes as the pad to give the lead a more distinctive character.

Despite the many parts of my liquid drums composition that I disliked, one aspect I wanted to keep was the high-pitched secondary melody complementing the lead. Since doubling my pad with my lead made for a duller lead, having these bells as a secondary melody complemented it very well throughout the chorus. In this case, however, I enjoyed the sound of having the secondary melody centered.

With both the 4OSC and the high-pitched bells adding character around the Lo Pad, I decided to use another approach to add character to the Lo Pad itself: automation (the idea I described in point 3 above). I chose to automate the 4-band equalizer in similar fashion to what I did with the bus during the build-up, but this time solely on the Lo Pad (without the Viola). I first tried a linear oscillation, but I soon realized that it didn’t work well because frequency is not linear in nature (the whole 1/wavelength thing) and we hear pitch logarithmically, so curvature was necessary for the intended effect. I also incorporated a pan automation that created something resembling a little bit of a Doppler effect.
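
The kind of movement I was imitating is essentially what an LFO does in SuperCollider. Below is a hedged sketch (not what Waveform’s automation actually computes) of an oscillator sweeping a filter cutoff exponentially rather than linearly; all values are illustrative.

```supercollider
(
{
    // Slow LFO sweeping the cutoff; exprange gives equal musical steps rather than equal Hz
    var cutoff = SinOsc.kr(0.25).exprange(300, 3000);
    var sig = RLPF.ar(Saw.ar(110), cutoff, 0.3);
    (sig * 0.2) ! 2
}.play;
)
```

Swapping exprange for range makes the sweep linear in Hz, which perceptually lingers near the top of the range; that is the same problem my linear automation curve had.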

As mentioned earlier, I also really wanted this song to incorporate a rest in the middle of the chorus. Though the rest I had incorporated in my last song was alright, I was adamant about being more meticulous with the fade and the build-up beyond just one decrescendo and one crescendo. My central realization was that quiet did not necessarily mean boring; there was still room for fun plugin automation and effects in the rest. The idea I eventually came up with was to swell the volume of my Lo Pad up to the beginning of each measure, only to have it fade out immediately, while also having the hi-hat slowly fade out, similar to a delay.

The most challenging part for me was definitely the second bass drop; I wanted to make this bass drop even grander than the first, so one heartbreaking compromise I decided to make was to decrease the overall volume of the first bass drop just a bit to leave more headroom for the second. From the juxtaposition between the first bass drop/chorus and the rest, I noticed that there was certainly a complementary juxtaposition between silence and sound; since juxtapositions run both ways, I decided that I’d add a beat of silence just before the drop to create an even more dramatic emphasis on my second drop. In addition, I also brought the drums in immediately at the drop, using a percussive beat that resembled dubstep and hip-hop (since dubstep always has the strongest drops). This section is also where I decided to add panning to give the room even more fullness. For my high secondary melody, I had each note pan between completely left and completely right. Finally, to add even more volume to my sound, this was the only section in which I incorporated a dedicated low bass track.

Below is a screenshot of my entire Waveform project, along with the mp3. I decided to name this song Masks: just as many people turn to a mask-like facade during difficult times, I will turn to this song.

Final reflections:

Overall, I am really proud of this piece; to me, what’s most important is that I can say I really did show my best work here. The largest problem I had previously, filling in the silence behind a melody, was solved by the pads I used throughout the song, and at no point did I, as a listener, feel bored. I have yet to have another down-in-the-dumps type of mood since creating this song, but I know that I’ll at least give it a listen when the time comes: just hopefully not too close to finals. As far as further questions and improvements, one that already comes to mind is an issue I commonly have with inconsistent sound levels of sine wave sounds: when experimenting with pure sine waves, I could never get the track to play at a consistent level, and adding more tracks only made the sine wave more and more distorted and quiet. Yet another issue I had was that there would sometimes be popping and crackling despite the fact that I was not redlining and my envelopes were not attacking too quickly. For improvements, I’d next try to add more audio samples that I record myself, whether that be just a piano or percussive sound effects. When peering at some of the preloaded projects in Waveform, I noticed that almost none use Waveform’s MIDI to create sounds; instead, they use Waveform for its plugins and for mixing.

 

Waveform Part 2: MIDI Unlocked

My last project, which began as an attempt to create a drum and bass (DnB) piece, turned into a prematurely ended orchestral piece. This time, with the freedom to use MIDI and a few other sound samples I found online, I was determined not to be derailed and to actually make a drum and bass piece.

Inspirations

My piece was mainly inspired by two pieces:

“Albion Prelude” by Fatkids was like the pie to my pizza. Like other DnB pieces, this song places its emphasis on repetitive percussion to drive the song forward, while its melodies and harmonies are all heavily reverbed to create a dreamy environment. With a different percussion pattern, I also aimed to create a song in which the percussion served as the driving force while ambient harmonies created a calming atmosphere.

From The Glitch Mob’s “The Clouds Breathe for You,” I took the inspiration of using a high-pitched synthetic bell sound as the melody, in juxtaposition to the low-frequency bass and kicks.

Drums

Besides not having learned how to use the MIDI interface in Waveform 11, I was also limited on my last project by the lack of samples I had access to. Last time, I only searched through freesound.org and an orchestral sound pack I found online. This time, I was also able to sample sounds from looperman.com, samplefocus.com, and numerous GitHub repositories. With the exception of the open hi-hat, kick, and snare, I was able to use the rest of the 808 samples that came with Waveform 11’s drum sampler in my piece: the kick I settled on came from samplefocus.com, while the snare and the open hi-hat came from TidalCycles’ Dirt-Samples repository on GitHub.

As with many other DnB pieces, the drum beat I created was modelled after the Amen break, originally performed by the American soul group The Winstons in “Amen, Brother,” the B-side of their single “Color Him Father.” All percussion was panned dead center, as the drums are the emphasis of DnB music. I wanted to add reverb to the open hi-hat and claves to add space and room to the song, yet I could not figure out how to apply effects to only one sample in the drum sampler. Thus, I settled on adding reverb to the entire sampler and cutting short the kick, snare, and closed hi-hat sounds to compensate for the added reverb. A potential solution may have been to use a different track for each percussion instrument, so I will try this method on my next song if there is indeed no way to add plugins to individual samples. I also had issues assigning individual dynamics to each note: though I used the velocity bars to change volume on other tracks, the drum sampler track’s velocities became much more confusing because of inconsistencies in which velocity bar was mapped to which percussion instrument. Because of the multi-sample nature of the drum sampler, I also couldn’t utilize other plugins, because they’d alter the sound of all the percussion rather than just one instrument.

Below is a picture of my drums MIDI clip. Each pitch, in descending order, represented the maracas, open hi-hat, closed hi-hat, snare, and kick.

During the break in the percussion, I also played a little bit with panning automation: when fading, I used the automation curve to pan the drums from ear to ear to emphasize the dispersal effect. During the rise, I had the percussion pan from left to right before coming back to center when the beat picked back up. Below is a picture of my drum sample and its automation track during the break.

High Bell Sounds

The high bells were supposed to be the melody of the song. However, I was unfortunately unable to find a bell sample pitched high enough. When I tried using a sampler to take a regular bell sample and pitch-shift it, the result had an unsatisfactory timbre and no longer sounded like a bell. As such, I used the saw wave in Subtractive to artificially create the sound of a bell to the best of my abilities. I had planned for the bells to slowly fade in, as well as to pan closer to the middle every time they repeated. However, I ran into some issues with distortion. I was able to fix this by changing the Subtractive wave pattern from saw to triangle, though this did result in having two tracks for the melody. When applying reverb, I put both of these tracks into an audio bus so as to apply the same reverb to both. Though I decided to add separate compressors to each one, I could also have just added a single compressor plugin to the bus. Using the equalizer, I was able to get cleaner bell sounds by essentially removing the lower frequencies that I couldn’t quite remove in the Subtractive plugin.
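
For reference, here is a hedged SuperCollider sketch of the same basic recipe (the actual sound was built in Waveform’s Subtractive plugin; the SynthDef name, the detuned overtone, and all parameter values are purely illustrative):

```supercollider
(
SynthDef(\fakeBell, { arg out = 0, freq = 2000, amp = 0.2;
    var env = EnvGen.kr(Env.perc(0.005, 1.5), doneAction: 2);   // fast attack, long ring-out
    var sig = LFTri.ar(freq) + (LFTri.ar(freq * 2.01) * 0.3);   // triangle wave plus a faint detuned overtone
    sig = HPF.ar(sig * env, 800);                                // stand-in for the EQ cut on the muddy low end
    Out.ar(out, (sig * amp) ! 2);
}).add;
)

Synth(\fakeBell, [\freq, 2500]);
```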

When comparing the sound output of the bells on different speakers, my headphones picked up the highs much better than my computer speakers did. However, when I adjusted the balance for the computer speakers, the highs then overpowered the drums and bass on headphones. I settled on mixing for the sound output of my headphones. I am still unsure of exactly how to resolve this issue, as compromising between the two volumes would leave the highs slightly underwhelming on headphones yet slightly overpowering on computer speakers, satisfying neither.

The bells are also where I added a delay plugin for an echo effect. However, I only wanted the first few bell melodies to have delay, and the main melodies to go without; luckily, the earlier distortion issue, which I resolved by creating two tracks, also facilitated separating the delay plugin, as I could just add the delay to the first track and not the second. At first, I tried to time the delay to be on beat with the song; however, I unexpectedly liked the off-beat delay even more and decided to keep it.

Below are the tracks for the two high bell sounds, along with the audio bus.

Choir & Violin Accompaniment

The choir and violin were the harmony of my song, and I chose these two because of their soft timbre, which would contrast with the punchy, fast-paced drums. Using a high-pass filter and a compressor, I was able to clean up the sound of the choir. This was also where I wanted to play the most with panning, as I planned to have different pitches come from different areas of the room. Yet in practice, shifting the source of the harmony was a terrible idea because it distracted from the melody too much. In addition, my harmony had a passage of very fast progression, and quickly changing the pan of the sound during that passage was especially confusing as a listener. As such, I stuck with keeping the pan of the harmony in the center. When mixing, I realized that the choir and the violin sounded very similar, so I tried to use different equalizer settings to bring out each of their unique timbres while still adding more to the harmony. Though this helped to a certain extent, I also think it would have been interesting to try another instrument, perhaps the viola, rather than using filters to force a difference.

Bass

The bass was my favorite yet most frustrating part of this song to work on. As with my last song, I used the 4OSC to create my two bass tracks. Since I felt that my last track’s bass didn’t have enough character to it, this time I decided to have two basses: one acting more as an accompaniment (a track named BoopBass, because it played single-hit bass notes) and one serving as the bass pad (a track named Full Bass). I added a low-pass filter to BoopBass and an equalizer to Full Bass just to be safe that there were no high-pitched imperfections, even though I didn’t hear any difference in my headphones. The first half of the song only had the BoopBass, and I had intended for the bass pad to come in during a drum solo and slowly rise as the melody returned. Yet there always seemed to be an inconsistency in the gain of the Full Bass, with one note playing louder or softer than the last, and inconsistently so with each playback. Since I did not notice the problem when soloing the track, I thought perhaps it was due to the sound wave destructively interfering with other tracks, yet that still doesn’t explain why the varying sound levels would be inconsistent across playbacks.

Yet another issue I ran into with the bass was the bass line’s difference in volume when played on different audio devices, just like the aforementioned bell issue. On my computer’s speakers, the bass was essentially inaudible at low levels, yet when I increased its volume while mixing, the sound became painfully overwhelming when listening through headphones. I once again settled on the volume level that was comfortable on headphones.

Final Product and Concluding Thoughts

Below is my final song. I decided to name it “Edgeless,” after the edgeless effect I tried to create with the repetitive drum beat and by mimicking a large room’s reverb across almost all of my tracks.

In general, I’d say this track was a success. I was able to create a DnB song with the driving percussion, deep bass, and high melody that I set as my goals from the very beginning. Compared to the two songs I took inspiration from, there is definitely still a lot of work to do:

  • I could add more variation to my drums. Although much of DnB is indeed repetitive, I felt that my song could have been improved by adding some more automation and a few measures of a different drum pattern. The only real change I implemented was a few measures of rest in the middle. Next project, I will try to add more variation through automation, but first I’ll learn how to add plugins to each individual sound sample in the drum sampler. If this is impossible, I will likely proceed by assigning a track to each percussion instrument instead.
  • More use of filters. For this project, I largely relied on the equalizer, compressor, and reverb. For my next project, I’d especially like to incorporate a pitch shifter: this was a technique used in “Albion Prelude,” and I can’t wait to try it on my own music.
  • Perhaps add some vocals? I recently ordered a microphone, and I’m hoping it will allow me to use my own voice in my songs as well. Now that I know how to use filters and other plugins, I may also be able to compensate for a poor audio recording by using filters to create a cleaner sound.
  • More FX: There were some measures in the song that sounded very dry, and rather than relying on reverb to fill that space, I really want to try using more sound effects. Oftentimes, I am dissuaded from using FX because it’s almost impossible to find “the right one.” However, I may sample some FX before starting my next piece, so that I can create a song tailored to the FX I have on hand.
  • Use more buses: when reviewing my final work, I saw that I had a lot of the same plugins on many of my tracks. On future projects, I’d like to use more audio buses to make my production process both more efficient and better balanced.


First Waveform Project

What Genre?

Despite the orchestral sound of the final piece, I actually started this project off with liquid drums in mind. Since it’s my favorite type of chill electronic music, I thought composing my own would also help me appreciate the genre even more.

Yet the first roadblock did not take long to appear; after looking through freesound.org and the Imagina drum loops, I was unable to find a satisfying drum loop that fit what I was trying to create (reasonably understandable, since liquid drums is not very popular). The few that did have the percussion loop I desired unfortunately also had their own harmony, which would have limited my creative freedom over the piece. Plan B was to create my own drum loop and then use that loop within the track. However, two more roadblocks arose. First, my continued search on freesound.org and Imagina, this time for just percussion sound effects (kicks, hi-hats, and snares), also ended in disappointment: many had too much background noise, or the timbre of the percussion did not fit the liquid drums style. In addition, I was also unable to figure out exactly how to use the multisampler plugin to make a drum loop.

Thus, with neither finding a loop nor making my own panning out, I decided to move on to another option: emulating orchestral music through electronic music. I was luckily able to find an orchestral instrument sample pack online, “Virtual-Playing-Orchestra3-1-wave-files,” accessed from http://virtualplaying.com/virtual-playing-orchestra/, which had suitable samples of almost every orchestral instrument; among these I picked the trumpet, flute, and trombone. Though I was also keen on perhaps adding some vocals, my singing sounds like a horse dying. Instead, I chose to whistle. Luckily, my residential college (Saybrook) has a small room next to our common room with outstanding acoustics.

 

Recording Whistles

Creating a melody for me to whistle was the easy part; I ran into the most trouble trying to perfect the sound quality of my whistles. As a consequence of the room’s outstanding reverb, the airy sound that accompanies whistling was also amplified. When listening to my first-take recording, the airy sound almost overpowered the whistle itself. On my second try, I decided to move further away from the microphone (with respect to the image above, I placed my phone on the left armrest of the black bench in the bottom left corner and whistled from the position of the camera). Unfortunately, the airy sound still had not been reduced to a satisfactory level. In addition, the microphone was oriented such that I whistled from one side of it, producing a panned audio recording. On take 3, I placed the microphone at the position of the camera while I ran to the third step of the stairs and whistled facing away from the microphone. This time, I made sure to have the microphone facing my direction. Though there was still noticeable airiness in the recording, facing away from the microphone did help reduce much of the noise; the remaining airiness was minimized using filter plugins in Waveform.

Playing with Waveform

As Bob Ross once said, “There are no mistakes, just happy accidents.” It turns out my stereo pan error in my Take 2 recording became an inspiration. After replaying the Take 2 track in Waveform along with some ambient sounds I found on freesound.org, the stereo pan created the atmosphere of a very large room, mimicking that of a concert hall. Since the Take 2 track still had the airy sound, I decided instead to use the Take 4 track as I had intended, but this time also utilize the stereo pan plugin to pan the sound to the left. Then, I used an automation track so that the whistle would slowly move towards the center through the introduction.

After the high-pitched whistling came the juxtaposing low-frequency bass and kicks. I was able to create a very deep bass by using the 4OSC plugin to generate the sound, then using a low-pass filter on top of it to only let the low frequencies through. Along with the low-frequency bass, I was also able to use the low-pass filter on a previously unsatisfying kick sample. The low-pass filter removed most of the noise on the kick, resulting in a very clean kick.
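
In SuperCollider terms, the idea looks roughly like the hedged sketch below (the actual bass was built with 4OSC and Waveform’s low-pass filter; the detune amounts and cutoff here are illustrative):

```supercollider
(
{
    var sig = Saw.ar([55, 55.3]).sum;   // two slightly detuned saws around A1 for a thicker tone
    sig = LPF.ar(sig, 150);             // keep only the low end, as the low-pass filter did in Waveform
    (sig * 0.3) ! 2
}.play;
)
```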

After the whistle and the bass, it was time for the orchestra. After learning how to use filters and automation in class, I was actually able to modify and clean up many of the percussion samples that I had previously ruled out. In particular, I fell in love with a timpani sound after applying a rapid left-to-right stereo pan. I decided to use a roll of the timpani as part of the buildup to the orchestral melody. Within this orchestral movement, I used the trumpet as the main melody, the flute as a supporting melody, and the trombone as the harmony/bass. While both brass instruments were panned center, the flute was panned to one side to add more character to both the room and the orchestra: since the flute’s timbre had a texture that contrasted well against the brass, panning the flute to only one side seemed to make the contrast even more appreciable.

Below is the mp3 of my final composition:

Concluding thoughts:

With respect to the original liquid drums composition that I had in mind at the very beginning, the only idea that actually made it into the final track was the ambient background sound. Beyond that, all the other sounds (the whistles, timpani, trumpets, trombones, and flutes) were the products of happy accidents. In particular, the panned whistling would not even have happened had I not made the mistake of failing to orient the microphone towards me. As I did not fully master the pan, filter, and automation capabilities until near the deadline of the composition, I was unable to explore the potential applications of these plugins to the extent I would have wished. With our next Waveform project coming up soon, I’ll certainly be using more plugins to enhance my music productions.

Samsung Galaxy Note 9

Samsung’s Galaxy Note 9 has two microphones: one at the top of the phone and another at the bottom. However, Samsung’s default Sound Recorder app was only capable of recording using one microphone instead of both. Unfortunately, the sample rate and bit depth of the phone’s microphones could not be detected directly. However, when using RecForge to record audio, I was able to set it to record at 48 kHz with a bit depth of 24, so it is assumed that the Galaxy Note 9 supports at least these specifications.

The audio recording apps I tried included Samsung’s default voice recorder, True Voice Recorder, and finally RecForge. As mentioned above, Samsung’s default application only utilized one of the microphones. After doing a mic-covering test individually on each mic, I determined that the application only used the bottom mic. A similar test was conducted on True Voice Recorder. Though this application allowed me to adjust the audio gain, it nonetheless only recorded through one microphone. RecForge, however, was able to utilize both the top and bottom microphones on the phone, which I tested by moving the phone above and below a sound source and checking whether the playback matched the anticipated movements.

 

Audacity was used to generate both test sounds. The 15-second rising pitch from 100 Hz to 18 kHz was created with Audacity’s Generate -> Chirp function, and the 15 seconds of white noise with Generate -> Noise -> White Noise.
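
Roughly equivalent test signals could also be generated in SuperCollider; the snippet below is a hedged sketch of the two signals (the actual files used here were generated in Audacity), assuming a booted server and evaluating one expression at a time:

```supercollider
// 15-second linear sweep from 100 Hz to 18 kHz
{ SinOsc.ar(Line.kr(100, 18000, 15, doneAction: 2)) * 0.3 ! 2 }.play;

// 15 seconds of white noise
{ WhiteNoise.ar(0.3) * Line.kr(1, 1, 15, doneAction: 2) ! 2 }.play;
```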

 

My computer, a 2019 Razer Blade Stealth, has stereo speakers, with one speaker on each side of the keyboard. When recording the computer’s sound onto my phone, I placed the phone just above the middle of the keyboard, equidistant from each speaker. The phone’s top microphone faced the left side, while the bottom microphone faced the right side. It should be noted that this orientation should not have affected the sound much, as the sounds generated in Audacity were both mono.

 

After recording the audio on my Galaxy Note 9 using RecForge in the .wav format, I emailed the files directly to myself and downloaded them onto my Razer Blade Stealth for analysis.

FFT analysis of original 100Hz-18kHz sound file
FFT analysis of recorded 100Hz-18kHz sound file

From the two illustrations above, we see that, compared to the original file, the recorded audio file had a lower amplitude, with the former peaking at -12.5 dB and the latter at -24.5 dB (higher decibels = louder). In addition, we also see that the maximum amplitude of the original audio file occurs at 150 Hz, while the recorded file reaches its maximum amplitude at 600 Hz. This data seems to suggest one, or a combination, of three possible conclusions:

  1. The Galaxy Note 9 microphone is more sensitive to higher frequencies than it is to lower frequencies
  2. The Razer Blade Stealth’s speakers are better at projecting higher frequencies than they are at projecting lower frequencies
  3. The medium (air) is better at transmitting higher frequencies than lower frequencies

The third conclusion is the least likely to be a factor, as low-frequency sounds are generally better at traveling through a medium than high-frequency sounds.

For further analysis of the potential cause of this difference, we now perform a side-by-side comparison of the white noises.

FFT analysis of original white noise file
FFT analysis of recorded white noise file

From analyzing the white noise graphs, we see that the peak amplitude on the original white noise graph was -28.9 dB, while the peak amplitude on the recorded white noise graph was -44.2 dB. However, the frequencies at which the peak occurred are remarkably similar, at 365 Hz and 371 Hz respectively. Unfortunately, the lack of a difference in peak frequency between the original and recorded audio gives us no information that could help diagnose which of the three hypotheses contributed to the shift in the frequency of maximum amplitude seen earlier.

 

Another point of interest to analyze is how the distance between the phone’s microphones and the computer’s speakers changes the maximum amplitude and the frequency at which it is observed.

FFT at 4cm
FFT at 8cm
FFT at 12cm
FFT at 16cm

Above are the FFT analyses of the audio recordings made on the Galaxy Note 9 at four different heights above the computer keyboard (in the same orientation as described above). Their respective peak amplitudes and corresponding frequencies are as follows: -26 dB at 7000 Hz, -34 dB at 4000 Hz, -25.3 dB at 7000 Hz, and -25.7 dB at 7000 Hz. After treating the 8 cm test as an outlier (its dB and frequency are clear outliers compared to the other data points), there seems to be no correlation between distance from the speakers, maximum dB, and the frequency of the maximum. However, with respect to our original three hypotheses, this does help rule out the possibility that the frequency shift observed is due to the medium: even as the distance through the medium increased, there did not seem to be any change in the frequency at which the maximum amplitude was recorded.

 

Overall, when comparing the first two graphs, I am convinced that the Galaxy Note 9 has quite a flat frequency response. In the comparison between the original audio and the recorded audio of the 100 Hz-18 kHz sine sweep, the approximate shapes of the two graphs were quite similar, with the only difference being that the latter had greater inconsistencies (dips and ridges) in the peak decibel level at each frequency. This is likely due to environmental interference, such as sound waves bouncing off the walls of my dorm room. In a future analysis, it would be ideal to repeat the experiment in a sound-absorbing environment in order to reduce the environmental effects on the sound waves picked up.

Bell Labs, Miller Puckette, and the Illiac Suite

Where: Bell Labs

Nokia’s Bell Labs is a research institute dedicated to using “telecommunications and information technology” to help “people connect, collaborate, compute, and communicate.” Tracing its origins to Alexander Graham Bell’s Volta Laboratory, funded with the Volta Prize money he received from the French government, and formally established as Bell Telephone Laboratories in 1925, Bell Labs has had numerous revolutionary products come out of its labs:

  • In the early 1930s, Karl Jansky accidentally discovered radio wave sources from the Milky Way at Bell Laboratories while investigating interference in transatlantic voice transmissions.
  • In 1947, John Bardeen and Walter Brattain invented the transistor after observing that the output power exceeded the input power when gold contacts were applied to germanium.
  • In 1957, Charles Townes and Arthur Schawlow built upon earlier maser work, including Joseph Weber’s proposed microwave amplifier, to lay out the principles of the laser. Townes and Schawlow filed a patent that was disputed for nearly three decades by Gordon Gould, then a graduate student at Columbia University, who was eventually awarded key patent rights. However, the invention of the laser is still widely credited to Bell Labs.
  • In that same year, Max Mathews wrote MUSIC, the first of the widely used MUSIC-N family of sound-generation programs. Later versions added a graphical interface on which users could draw with a light pen to create sounds for computer music.
  • In 1975, Bell Labs sold its first source license for Unix to Donald Gillies, a professor in the University of Illinois at Urbana-Champaign’s computer science department.

 

With the COVID-19 pandemic keeping the world alert for future pandemics as well, one of Bell Labs’ current projects is to use lasers to detect changes in a human’s biophysical condition before traditional symptoms begin showing.

Who: Miller Puckette

Miller Puckette is a professor of music known for his creation of the programming language Max, which aimed to provide a more user-friendly environment for artists to produce electronic music. After graduating from MIT in 1980 and earning a PhD in mathematics from Harvard in 1986, Puckette relocated to the Institute for Research and Coordination in Acoustics/Music (IRCAM) in France, where he created Max. Max was designed for the Macintosh as a graphical environment for controlling music in real time, though it initially had to rely on external hardware for sound synthesis. It wasn’t until four years later, in 1989, that Puckette developed Max/FTS (“Faster Than Sound”), which finally allowed for real-time audio processing in software. With IRCAM licensing this software to Opcode Systems shortly after, Puckette moved on to create another digital synthesis program, Pure Data, by 1996: a program that still works with modern operating systems, including Linux, Mac OS X, Android, and Windows.

Puckette then went on to be part of the Global Visual Music project, sponsored by Intel, continuing to develop Pure Data into a friendlier real-time audio environment through the use of graphics. Yet due to the computational limits of the time, the goal of rendering three-dimensional graphics in real time was considered nearly impossible. Since 1994, Puckette has been a professor of music at UCSD.

 

What: Illiac Suite for String Quartet by Lejaren Hiller

Commonly regarded as the first piece of music composed by a computer, Hiller’s Illiac Suite for String Quartet was the result of an experiment done with Leonard Isaacson. Originally a chemist who frequently used computers for his work, Hiller noticed through his analyses of music that there exist “laws of organization” that seem to govern music, such as chord progressions, scales, and triads, and that the “organizational choices” composers make could be made by a computer as well.

The creation of the Illiac Suite involved three steps: initialization, generation, and verification. During initialization, Hiller defined the conditions under which the computer had creative freedom to create its own music, such as the “prohibition of consecutive fifth and octave parallels.” During the second step, generation, the computer was given freedom over a number of musical parameters such as dynamics (crescendo and decrescendo), playing style (arco and pizzicato), and rhythm, with 1000 notes generated per second. The notes were generated according to the Monte Carlo method. In the verification step, each generated note was checked against the set of rules before the next note was considered; notes that broke the guidelines were discarded and generated again. It should be noted that the first two steps proceed mathematically: the rules of composition correspond to certain numerical patterns, and the generation of notes is really the generation of random numbers in accordance with the Monte Carlo method. Each verified note is stored in memory, and at the end, all notes are translated into musical notation to be performed by real-life instruments.
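
To make the generate-and-verify loop concrete, here is a toy sketch in SuperCollider; the rule set, note range, and melody length below are purely illustrative and far simpler than Hiller’s actual rules:

```supercollider
(
var scale = [0, 2, 3, 5, 7, 8, 10];          // C natural minor pitch classes
var melody = [60];                            // seed note: middle C
var ruleOk = { |prev, next|
    // illustrative "laws of organization": stay in the scale, avoid leaps larger than a fifth
    scale.includes(next % 12) and: { (next - prev).abs <= 7 }
};
while({ melody.size < 16 }, {
    var candidate = rrand(55, 79);            // generation: draw a random note (the Monte Carlo step)
    if(ruleOk.value(melody.last, candidate)) {
        melody = melody.add(candidate);       // verification passed: keep the note
    };                                        // otherwise discard it and generate another
});
melody.postln;                                // the "score", as MIDI note numbers
)
```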

 

Sources:

Di Nunzio, Alex. “Illiac Suite.” Musica Informatica, 2011, www.musicainformatica.org/topics/illiac-suite.php.

H. Viswanathan and P. E. Mogensen, “Communications in the 6G Era,” in IEEE Access, vol. 8, pp. 57063-57074, 2020, doi: 10.1109/ACCESS.2020.2981745.

Hiller, Lejaren. “Experimental Music; Composition with an Electronic Computer : Hiller, Lejaren, 1924-1994 : Free Download, Borrow, and Streaming.” Internet Archive, New York, McGraw-Hill, 1 Jan. 1970, archive.org/details/experimentalmusi00hill.

Ionescu, Maria. “Nokia Bell Labs: AI Is Being Fitted for a Diving Suit: How Reinforcement Learning Will Power the Transoceanic Cables of the Future.” Bell Labs, 15 May 2020, www.bell-labs.com/var/articles/ai-being-fitted-diving-suit-how-reinforcement-learning-will-power-transoceanic-cables-future/. 

Mandelbaum, Ryan. “Miller Puckette: The Man Behind the Max and Pd Languages (and a Lot of Crazy Music).” IEEE Spectrum: Technology, Engineering, and Science News, 21 Feb. 2017, 20:00, spectrum.ieee.org/geek-life/profiles/miller-puckette-the-man-behind-the-max-and-pd-languages-and-a-lot-of-crazy-music. 

Puckette, Miller. Miller Puckette, msp.ucsd.edu/.