My Second Attempt At Waveform

While I usually prefer using a drum sampler to lay down the base of my track, I decided to do things differently this time and used the Subtractive synth instead.

An overall view of my project

Using the chord markers, I created a four-bar progression. To select the chords, I used Waveform’s Suggestions tab within the chord options. All of my tracks stuck to the chosen chord progression.

Suggestions Tab

In all the tracks, instead of manually selecting the MIDI notes, I decided to use a pattern generator. For the base track I picked a chord pattern and set the style to Deadmau5 plucks. My inspiration for the project was Swedish house music. The track does have properties of Swedish house, but I feel it lacks the speed and ends up sounding like a slowed-down version. This could have been improved by choosing a better instrument type for the Subtractive; however, the best preset I could find was Bass n Lead, so I decided to stick with it.

 

Subtractive

 

Chord Pattern Generator

Similarly, for the other tracks making use of the Subtractive, I once again used a pattern generator, though a different one each time, ranging from Bass-line to Arpeggio to Melody.


To create a continuous rhythm at the back of the track, I used the Micro Drum Sampler. Using a step clip, I layered a few drum samples and rendered the clip for repeated use.

Rendered Clip
Step-Clip

I decided to use reverb on the chords because, while listening to ‘rave’ music, I realised that the start of a track always has music which sounds as if it is being produced from far away. Since I was starting off with the chords and wanted a linear progression for them throughout the track, I applied the reverb across the whole track.

 

For the bassline, I used a delay with the length set to 150 ms. I wanted the bassline to ‘tick’ instead of ‘buzz’, so I kept the delay short rather than long.

Delay settings
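
Out of curiosity, here is a minimal sketch of what a 150 ms feedback delay does to a signal, written in Python/NumPy. The function name, feedback amount and dry/wet mix are illustrative assumptions (they are not the parameters of Waveform’s delay plugin); only the 150 ms length comes from my settings.

```python
import numpy as np

def feedback_delay(x, sr, delay_ms=150, feedback=0.35, mix=0.5):
    """Mix a signal with delayed, fed-back copies of itself.
    Each repeat arrives delay_ms later and decays by the feedback factor."""
    d = max(1, int(sr * delay_ms / 1000))  # delay length in samples
    y = x.astype(np.float64).copy()
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]        # feed the delayed output back in
    return (1.0 - mix) * x + mix * y       # dry/wet mix
```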

In comes EQ. Among the Waveform Artisan plugins, I found one labelled EQ. While it is not designed to give a technical EQ experience, it allowed me to filter the drums efficiently. I set the treble high to ensure that, while the drums stayed in the background, they still created an encapsulating atmosphere for the listener.

EQ settings
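
Since the Artisan EQ is not a technical EQ, the effect of turning its treble up can only be sketched crudely in code, for example by mixing a high-passed copy of the signal back in. This is purely illustrative, with assumed cutoff and gain values; it is not how the Artisan plugin is implemented.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def treble_boost(x, sr, cutoff_hz=3000.0, gain_db=6.0):
    """Crude treble boost: mix a high-passed copy of the signal back in.
    cutoff_hz and gain_db are illustrative, not the Artisan EQ's values."""
    sos = butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
    highs = sosfilt(sos, x.astype(np.float64))
    extra = 10 ** (gain_db / 20.0) - 1.0   # amount of extra highs, roughly +gain_db at the top
    return x + extra * highs
```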

This is an mp3 version of my project:

 

How can a track be ‘neat’ and ‘un-neat’ at the same time?

When it comes to producing music, I have a difficult time mixing and matching different kinds of sound samples. The problem with sound samples is that the internet is flooded with them; there are simply too many to sift through. I encountered this very same problem while working on HW3, so I decided to take a shortcut.

 

BandLab is a music production platform for beginners which allows you to record, produce and master your own tracks. In fact, I’ve been using BandLab for quite some time now. Not wanting to go through the large number of sound samples available on freesound.org, I logged into my BandLab account and used the ‘Looper’ tool. The ‘Looper’, as the name suggests, lets you create your own tracks from pre-existing loops within the software.

However, trying to stick to the homework guidelines, I didn’t compose the whole project in a single BandLab project. I first created a base track consisting of only drums and bass. Once this was done, I placed hi-hats and percussion onto another track while listening to the base track, to make sure I kept a steady rhythm. I repeated this process of creating a separate track once more to add guitar and synth, and voilà, my sound samples were ready.

 

While arranging the tracks in the DAW, I decided to make the intro coarse. A ‘cement-pipe’ sound followed by hard drumming made it sound more like an alternative rock track and less like the theme I was meant to establish (LoFi). I wanted the drums to sound loud, but at the same time I wanted to create the impression of them being played in a room, like a basement. This is where I decided to plug in the Reverb. I played around with the ‘Room Size’ until the listener got the right sense of space.

As far as hi-hats and percussion were concerned, I didn’t want them to be the main highlight of the track; I wanted them to have a subtle effect at the back of the listener’s head. Hence, I added a little bit of Reverb to the hi-hats as well. It was enough to keep the hi-hats in the background without the drums completely drowning them out.

My aim from the very beginning was to make the guitar the main highlight of the track. Luckily I stumbled upon a plugin called ‘Iron Oxide 5’ which ‘boosted’ the guitar track and made it stand on top of all the noise at the back. One could say that ‘Iron Oxide 5’ worked as an ‘anti-damper’. To enhance the strumming of the guitar I also used a Delay plugin (length = 150 ms), which gave it a longer-lasting effect.

 

The fiddly controls of the user interface made it difficult to apply animation to the track. Animation was mainly used on the audio recordings – a vocal and a bracelet sound. Trying to do something creative, I made a sound by crushing a bracelet in my hand. By producing a series of these sounds and lining them up with the hi-hats and percussion, I tried to give an ‘un-neat’ impression. Even while trying to make the track less neat, I still used a compressor to smooth out the bracelet sound. At this point I was literally making something neater in order to make the track less neat – the wonders of music production, I guess? Last but not least, I recorded myself singing the word ‘hey’ at a stretch. Applying a pitch shifter and a phaser animation, I tried my best to line this up with the synthesiser to create a harmony – but to be fair, I failed badly (it was worth a shot, I guess).

 

Critically speaking, I think I could have done better. While fairly well composed, the track does have flaws which could be fixed with better mixing and a more careful use of plugins and animation.

 

 

Samsung Galaxy A50: A Tale of Two Microphones

Smartphone Specs

The Samsung Galaxy A50 is equipped with two microphones – a front microphone and a back microphone. Both can be used for recording, depending on the software in use. If Samsung’s built-in ‘Voice Recorder’ app is used, only one of the microphones is utilized; the other is meant to assist with noise cancellation. However, apps like RecForge II – Audio Recorder let you choose between the two microphones. The phone also offers stereo input options, which can be accessed via RecForge’s ‘Audio Record’ settings; the options available are Mono and Stereo (Mono x 2). The maximum sample rate for both microphones is 48 kHz. Due to insufficient information on smartphone specification websites, the audio samples were loaded into QuickTime Player and inspected using ‘Movie Inspector’, which showed the format of the audio samples as ‘16-bit little-endian signed integer’, from which it can be deduced that the bit depth is 16.

QuickTime Movie Inspector Window
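
As a cross-check on the Movie Inspector reading, the sample rate and bit depth can also be pulled straight from a recorded .wav file’s header, for example with Python’s standard wave module (the file name below is a placeholder):

```python
import wave

# Placeholder path; any uncompressed .wav exported from RecForge II will do.
with wave.open("recforge_sample.wav", "rb") as wav:
    print("Sample rate:", wav.getframerate(), "Hz")        # expected: 48000
    print("Bit depth:  ", wav.getsampwidth() * 8, "bits")  # expected: 16
    print("Channels:   ", wav.getnchannels())              # 1 = mono, 2 = stereo
```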

 

Audio Recording Applications 

In the pursuit of an Android recording app which records uncompressed audio in .wav and disables AGC, one may stumble upon the following: Tape Machine, Smart Recorder, and RecForge II.

While Tape Machine has a heavy fan base on developer blogs and forums, it isn’t available for download anymore. It was hailed for allowing AGC to be disabled.

Smart Recorder is best at what it’s meant for – recording audio. Though it allows the user to disable AGC and records in .wav, its sampling rate maxes out at 44.1 kHz. Apart from this, it also doesn’t offer many basic audio editing features.

In comes RecForge II – a well-designed, user-friendly app which allows users to disable AGC and set the sampling rate to the maximum limit of their hardware. The application supports nearly all audio formats and offers a range of editing tools, including (but not limited to) an Acoustic Echo Canceler and Skip silence. Going forward, RecForge II has been used alongside the front microphone to record all audio files.

Settings interface in RecForge II

Generation of Sine Sweep and White Noise

The software used was Audacity. First, a 15-second chirp with a sine waveform was generated from Audacity’s Generate menu. Next, using the same Generate menu, the Noise plug-in was used to generate 15 seconds of white noise.
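
The same two 15-second test signals could also be generated outside Audacity. Below is a rough NumPy/SciPy sketch; the sweep’s start and end frequencies are assumptions, since the exact chirp range used in Audacity was left at the dialog’s defaults.

```python
import numpy as np
from scipy.signal import chirp
from scipy.io import wavfile

sr = 48000                                    # match the phone's 48 kHz sample rate
t = np.linspace(0, 15, 15 * sr, endpoint=False)

# 15 s sine sweep; the 100 Hz -> 10 kHz range is an assumption, not Audacity's exact setting
sweep = chirp(t, f0=100, f1=10000, t1=15, method="linear")

# 15 s of white noise at a modest amplitude
noise = 0.5 * np.random.default_rng().standard_normal(len(t))

wavfile.write("sine_sweep.wav", sr, (sweep * 32767).astype(np.int16))
wavfile.write("white_noise.wav", sr, (np.clip(noise, -1, 1) * 32767).astype(np.int16))
```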

Recording Using Phone Microphone

For each 15-second sound, three recordings were made using the phone (six recordings in total). For the first recording, the phone was placed on the laptop’s keyboard. For the second, the phone was kept approximately 8 cm away from the laptop’s keyboard. Finally, for the third, the laptop was placed inside a cupboard (a confined space) and the phone was placed on the laptop’s keyboard. The gain used for all recordings was +9.0 dB.

Analysis and Understanding

Using the ‘Plot Spectrum’ function in Audacity, the following graphs were plotted for the sine sweep.

All of the sine sweep graphs have one thing in common – they occupy a healthy portion of the plot. The first plot (the original sweep) is consistent and flat in terms of relative response (expressed in dB). However, once a phone recording is plotted, the shape becomes deformed, and the change in the dB range clearly shows that the sound quality has changed. When the phone is closer to the keyboard, a greater number of peaks are observed at higher frequencies; in a way, being close to the keyboard ‘boosts’ the response of the microphone (the same goes for enclosed locations such as cupboards). Listening to the recordings, the one made on the keyboard is much louder and clearer than the one made 8 cm away from it, and the recording made in the cupboard is a louder, amplified version of the recording made on the keyboard. This can be seen in the graphs: at higher frequencies, the fourth graph has more peaks than the second.
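
The plots themselves could also be reproduced outside Audacity. Here is a rough sketch of an equivalent spectrum plot using Welch’s method in SciPy; the file name and FFT size are assumptions, and the dB scaling only approximates what Audacity’s Plot Spectrum shows.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import welch

# Placeholder file name for one of the phone recordings
sr, x = wavfile.read("sweep_on_keyboard.wav")
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)                        # fold stereo down to mono

# Averaged power spectrum, similar in spirit to Audacity's Plot Spectrum
f, pxx = welch(x, fs=sr, nperseg=4096)
plt.plot(f, 10 * np.log10(pxx + 1e-12))       # power in dB (relative)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Level (dB, relative)")
plt.show()
```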

Using the ‘Plot Spectrum’ function in Audacity, the following graphs were plotted for the white noise.

Compared to the sine sweep plots, the white noise graphs occupy less area on the plot. The first white noise plot has various peaks, but they are much lower than the flat crest observed for the original sine sweep. The taller peaks introduced in the phone-recorded graphs show that recording on the phone has severely affected the sound quality of the white noise; such a drastic change in peak height was not observed in the sine sweep plots. For white noise, the change in loudness follows the same pattern as the sine sweep: the more enclosed the environment and the closer the microphone is to the sound source, the louder the recording. This can be seen in how the peaks at higher frequencies become more numerous and taller as the microphone is placed in a cupboard or moved closer to the sound source.

To draw a conclusion, the recordings get louder as the room gets smaller and as the microphone gets closer to the sound source, and this extra loudness shows up in the plots as more, and taller, peaks at higher frequencies.

As far as the overall spectral flatness of the microphone is concerned, the microphone produced ‘shaped’ curves rather than flat ones. This suggests that it is more sensitive to some frequencies than others: the lower-frequency regions tend to have fewer peaks, whereas the higher frequencies show more, clearly indicating that the microphone is more responsive to higher frequencies (typically above 3000 Hz, as seen on all graphs).
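
Spectral flatness can also be quantified rather than just judged by eye: it is the geometric mean of the power spectrum divided by its arithmetic mean, giving 1.0 for a perfectly flat spectrum and values near 0 for a heavily ‘shaped’ one. A small sketch follows, assuming the recordings have been exported as .wav files with placeholder names.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def spectral_flatness(path):
    """Geometric mean / arithmetic mean of the power spectrum (between 0 and 1)."""
    sr, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:
        x = x.mean(axis=1)                    # fold stereo down to mono
    _, pxx = welch(x, fs=sr, nperseg=4096)
    pxx = pxx[pxx > 0]                        # drop zero bins before taking logs
    return np.exp(np.mean(np.log(pxx))) / np.mean(pxx)

# Placeholder file names; values near 1 mean 'flat', values near 0 mean 'shaped'
for name in ["white_noise_original.wav", "white_noise_on_keyboard.wav"]:
    print(name, round(spectral_flatness(name), 3))
```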

Recordings: https://drive.google.com/file/d/19jfF4ezgNVOPLkGrbufHGzcD6oLoKwju/view?usp=sharing

Sources
https://forum.xda-developers.com/showthread.php?t=1042051

https://www.shure.com/en-US/performance-production/louder/mic-basics-frequency-response

https://filmora.wondershare.com/audio-editor/best-voice-recording-apps-android.html

https://support.ebird.org/en/support/solutions/articles/48001064305-smartphone-recording-tips

https://www.neumann.com/homestudio/en/how-does-frequency-response-relate-to-sound

https://www.gsmarena.com/samsung_galaxy_a50-9554.php

https://www.thepodcasthost.com/recording-skills/best-audio-recording-apps-for-android/

 

 

 

Alan Turing: The ‘father’ of a pre-modern digital music renaissance

Alan Turing is undoubtedly a voice that continues to echo innovation and inspiration in the worlds of invention and mathematics. Benedict Cumberbatch’s portrayal of Turing in “The Imitation Game” allowed the layman to understand the workings behind one of Turing’s renowned inventions – the Bombe – through a more sophisticated narrative. While Turing is hailed as a mathematician, logician, computer scientist, cryptanalyst, philosopher, and theoretical biologist, his contributions to the world of computer music have been largely overlooked.

Alan Turing, 1930s. Found in the collection of Science Museum London. Artist Anonymous. (Photo by Fine Art Images/Heritage Images/Getty Images)

Turing’s contributions to the world of computer music began in Manchester in 1948. By then, Turing had already thought of a ‘type of machine which had a central mechanism and an infinite memory which was contained on an infinite tape’. Working side by side with Tom Kilburn in the Manchester Computing Machine Laboratory, Turing was able to build and program the Manchester Mark 1 – a large-scale computer. At this point, Turing was yet to stumble upon the concepts of computer music.

Manchester Mark 1

The Manchester Mark 1 had a loudspeaker which Turing called the ‘hooter’. The loudspeaker was initially installed to alert the computer’s operator in case something went wrong. Soon Turing realized that, if correctly programmed, the speaker could produce musical notes.

When the ‘hoot’ sound was produced by the computer, it lasted for merely a second. However, as Turing executed the ‘hoot’ sound repeatedly, he noticed that a rhythm going ‘tick, tick, tick, click’ emerged. Experimenting further with the ‘hoot’ sound, Turing realized that if the sound was produced in different patterns, different musical notes could be created. Simply put, Turing had stumbled upon the equivalent of a 21st-century music pad: by playing around with the notes for a while, he could easily compose a symphony.

Inspired by Turing’s work, Christopher Strachey, a colleague of Turing, showed up at the Computing Machine Laboratory in 1951. Strachey wanted to use the musical notes Turing had come across in a complete composition. Turing tutored Strachey on how to program the machine for musical notes and gave him a ‘typical high-speed, high-pitched description of how to use the machine’. Continuously debugging and programming the machine, Strachey was able to make it play the British National Anthem. It was these events which put the wheels of Strachey’s career in motion, allowing him to become Oxford’s first professor of computing. If it hadn’t been for Turing playing around with the ‘hoots’, Strachey wouldn’t have been able to make computer-generated melodies – one of the earliest advances in the field of computer music.

Turing also managed to produce ‘note-loops’. He compiled these loops in his note-playing subroutines. He started using ‘note-loops’ in order to play around with loudness and timbre. In order to make sure that the melodies did not end abruptly, Turing decided to invent what was called a ‘hoot-stop’. He placed lines of code at the end of each musical routine, so that once a routine finished, only the ‘note-loop’ installed in the last few lines of code continued to play. This is what he called a ‘hoot-stop’. As a result, Turing had managed to overcome the shortcomings of Strachey’s melodies, which had abrupt pauses in between.

While the aforementioned examples depict Turing’s hands-on experience with musical composition, they do not include the application of the Turing Test in the world of musical composition. Turing, who had worked in the field of Artificial Intelligence, had devised the Test to determine whether a computer could behave like a human or not. In 1964 Havass used the Turing Test at a conference to check whether listeners could differentiate between computer-produced melodies and traditional melodies. The goal behind the test was to collect feedback and improve computer-based production methods to suit human tastes. So while Turing had created his Test mainly to delve deeper into the world of AI, it ended up becoming an important distinguisher when it came to musical melodies.

The Turing Test

It is commonly believed that the first computer-made musical notes were composed at Bell Labs in 1957. However, looking at the evidence provided above, it can clearly be seen that Turing had started such composition nine years earlier. While Max Mathews wrote a program to compose his notes, Turing built the whole machine. All these facts raise one question: isn’t Turing the ‘father’ of a pre-modern digital music renaissance? The way I see it, the name does have a “ring” to it (pun intended).

Sources: 

Copeland, B. Jack, and Jason Long. “Turing and The History of Computer Music.” Essay. In Philosophical Explorations of the Legacy of Alan Turing, 324:189–218. Boston Studies in the Philosophy and History of Science. Boston, MA: Springer, 2017.

Doornbusch, Paul. “Early Computer Music Experiments in Australia and England.” Essay. In Alternative Histories of Electroacoustic Music, 22:297–307. Special Issue 2. Cambridge University Press, 2017. https://www.cambridge.org/core/journals/organised-sound/article/early-computer-music-experiments-in-australia-and-england/A62F586CE2A1096E4DDFE5FC6555D669.

Ariza, Christopher. “The Interrogator as Critic: The Turing Test and the Evaluation of Generative Music Systems.” Essay. In Computer Music Journal, 48–70. MIT, 2009. https://www.mitpressjournals.org/doi/pdf/10.1162/comj.2009.33.2.48.

Waugh, Rob. “Listen to the First-Ever Electronic Music Track – Made by Pioneer Alan Turing.” Essay, n.d. https://metro.co.uk/2016/09/26/listen-to-the-first-ever-electronic-music-track-made-by-pioneer-alan-turing-6153871/

Dubois, Luke. “The First Computer Musician.” Essay, n.d. https://opinionator.blogs.nytimes.com/2011/06/08/the-first-computer-musician/