First-person shooter video game inspired sound effect pack

I’ve always been a gamer, starting with games on the Wii and moving on to mobile games and games like Minecraft. Over quarantine, I became obsessed with the newly released tactical shooter game Valorant, and so decided to make a sound effect pack for a first-person shooter video game.

Pistol:

The first sound I decided to make was the pistol, which is iconic since every player gets the pistol in the game. To create a pistol sound, I decided to model it off of a 9mm pistol: https://www.youtube.com/watch?v=HwxeLrWymrI. This was actually a hard sound to make since it was the first one I attempted. I wanted a “pow” sort of sound but didn’t know how to get that nice pop of the shot. So, I stuck a sample of a pistol shot into Audacity and analyzed its frequencies and its different parts.

I noticed that the gunshot sound was made up of three parts, each with different frequency content. So, I used a filter on white noise for each of the parts and combined them together. To get the parts to play in sequence, I used the fork operation. The first attempt didn’t sound great, so I made a new version that used reverb and added a kick to the beginning of the shot to provide a sense of power.
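A stripped-down sketch of that layered idea is below; the band frequencies, envelope times, and reverb settings here are stand-ins rather than my exact patch.

```supercollider
// One filtered-noise layer of the shot, plus the kick that starts it off.
(
SynthDef(\pistolLayer, { |freq = 800, rq = 0.3, amp = 0.5, rel = 0.15|
    var env = EnvGen.ar(Env.perc(0.001, rel), doneAction: 2);
    var sig = BPF.ar(WhiteNoise.ar, freq, rq) * env * amp;
    Out.ar(0, Pan2.ar(FreeVerb.ar(sig, 0.3, 0.4)));
}).add;

SynthDef(\kick, { |amp = 0.8|
    var env = EnvGen.ar(Env.perc(0.001, 0.2), doneAction: 2);
    var sig = SinOsc.ar(XLine.kr(150, 40, 0.2)) * env * amp;
    Out.ar(0, Pan2.ar(sig));
}).add;
)

// Run the block above first, then fork the kick and three noise bands in sequence.
(
fork {
    Synth(\kick);
    Synth(\pistolLayer, [\freq, 2500, \rel, 0.08]);
    0.02.wait;
    Synth(\pistolLayer, [\freq, 900, \rel, 0.15]);
    0.03.wait;
    Synth(\pistolLayer, [\freq, 300, \rel, 0.25]);
};
)
```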

 

Lasergun:

The lasergun sound was not hard to do. I knew that I wanted a “peeew peeew peeew” sound, and that is clearly a tone with a decreasing frequency. So, I modulated the frequency with a line envelope going from 1400 Hz down to 100 Hz. For the generator, I chose a Saw wave because it sounded the most electronic. In the end, my lasergun sound reminded me of the laser from the old Galaga game.
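A minimal version of that “peeew” might look something like this (the sweep time and amplitude are approximate):

```supercollider
// Saw wave swept from 1400 Hz down to 100 Hz by a line envelope.
(
{
    var freq = Line.kr(1400, 100, 0.3);
    var env = EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2);
    Pan2.ar(Saw.ar(freq) * env * 0.3);
}.play;
)
```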

 

Footsteps:

Footsteps are an important part of any 3D game where players move through the world, so I decided to create this sound next. It was surprisingly hard to do. I initially wanted to create the stereotypical footsteps that you hear in a haunted house (here: https://youtu.be/9g7uukgq0Fc?t=40). However, that is insanely hard to do since the noise isn’t regular. While trying to create it, I used brown noise with a lowpass filter along with a percussive envelope. That didn’t sound at all like what I was going for; instead, it sounded like stepping in snow, so I fine-tuned it a bit and made snow footstep sounds first.

Snow:

*Disclaimer: I’ve lived in Southern California and don’t really step in snow, so if this sounds off, oops.
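A rough sketch of a single snow step along the lines described above (the cutoff and envelope values are approximate, not my final settings):

```supercollider
// Low-passed brown noise under a percussive envelope.
(
SynthDef(\snowStep, { |amp = 0.4|
    var env = EnvGen.ar(Env.perc(0.02, 0.25), doneAction: 2);
    var sig = LPF.ar(BrownNoise.ar, 900) * env * amp;
    Out.ar(0, Pan2.ar(sig));
}).add;
)

// Walk: alternate steps roughly half a second apart.
fork { 8.do { Synth(\snowStep); 0.5.wait } };
```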

Next, since games often have desert environments, I worked on creating footsteps in sand based on this video (https://www.youtube.com/watch?v=Lg2zxeSF7WM). This is when I came to a key realization: each footstep isn’t a single sound but has two distinct parts. The first sound is the thump of your heel hitting the ground, and the second closely follows as the front part of your foot flaps down and hits the ground (you can verify this by paying attention while you walk). Using this new insight, I shaped white noise to start off with a lower-frequency percussive sound, and followed it with a wider frequency range shaped by a customized envelope to better imitate the fluidity of sand.
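A sketch of that two-part step; the heel and “toe” bands, delays, and envelope shapes here are placeholders rather than my final values.

```supercollider
// Heel thump followed ~80 ms later by a wider-band toe scuff.
(
SynthDef(\sandStep, { |amp = 0.4|
    var heel = LPF.ar(WhiteNoise.ar, 400) * EnvGen.ar(Env.perc(0.005, 0.08));
    var toe = LPF.ar(WhiteNoise.ar, 2500)
        * EnvGen.ar(Env.new([0, 0, 1, 0.4, 0], [0.08, 0.02, 0.1, 0.15]), doneAction: 2);
    Out.ar(0, Pan2.ar((heel + toe) * amp));
}).add;
)

// Run the block above first, then walk.
fork { 6.do { Synth(\sandStep); 0.55.wait } };
```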

Finally, I decided to try to create the clean wooden-floor sound that I was going for in the beginning. To do that, I took a similar approach to the one I used for the pistol sound, listening to each component separately and comparing it to my own. My closest rendition of these footsteps sounded more like someone stomping on a wooden plank.

 

Jump:

Jumping is an important part of FPS games for getting over terrain and dodging projectiles. The sound I went for is different from the regular 8-bit jump that we had an example of in class; instead, I went for a more realistic jump. There were two parts to this: jumping off of the ground, and landing back on the ground. While designing the lifting-off portion, I knew that the sound had to rapidly decrease in volume and also stay quiet, since there isn’t much movement against the ground when jumping (it’s more about the legs coiling up, which doesn’t make noise). I also applied a low-pass filter to the noise with a decreasing cutoff, since less and less of the foot/shoe is in contact with the ground to make high-frequency noise.

The sound of landing on the ground was relatively straightforward, since it’s just a thump, which I accomplished with low-frequency sound shaped by a percussive envelope.
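Both halves, roughly sketched (the cutoffs, durations, and hang time are ballpark numbers, not my exact patch):

```supercollider
(
SynthDef(\jumpOff, { |amp = 0.2|
    var env = EnvGen.ar(Env.perc(0.01, 0.15), doneAction: 2);
    // cutoff falls as less of the shoe stays in contact with the ground
    var sig = LPF.ar(WhiteNoise.ar, Line.kr(3000, 300, 0.15)) * env * amp;
    Out.ar(0, Pan2.ar(sig));
}).add;

SynthDef(\land, { |amp = 0.5|
    var env = EnvGen.ar(Env.perc(0.005, 0.2), doneAction: 2);
    var sig = (SinOsc.ar(70) + LPF.ar(BrownNoise.ar, 200)) * env * amp;
    Out.ar(0, Pan2.ar(sig));
}).add;
)

// A full jump: lift off, hang time, land.
fork { Synth(\jumpOff); 0.6.wait; Synth(\land) };
```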

 

Shotgun:

The shotgun is a classic weapon in any game with guns. I decided to make a shotgun firing buckshot (since slugs aren’t very common in shooter games), so my goal was to somehow simulate all the pellets being shot out and flying through the air. To do that, I decided to play a lot of pistol shots slightly delayed from each other; my rationale was that staggering the sounds would provide the punchy feeling of shooting a shotgun. To play the sounds staggered, I took inspiration from the “river” sound effect that we went over in class. Instead of using .collect, though, I just used .do and played 6 pistol sound effects right after one another. I also moved the kick to the beginning, outside of the .do loop, since the shotgun should have one initial “thumpy” blast.
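Reusing the \kick and \pistolLayer defs from the pistol sketch above, the staggering looks roughly like this (the 15 ms offset and randomized band frequencies are illustrative):

```supercollider
// One kick, then six slightly delayed pistol shots via .do.
(
fork {
    Synth(\kick, [\amp, 1.0]);
    6.do {
        Synth(\pistolLayer, [\freq, rrand(400, 2500), \rel, 0.2]);
        0.015.wait;
    };
};
)
```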

I have to warn you, the shotgun sound is a bit shocking and frightening when you first hear it, so be prepared. When I played this sound for my sister over headphones, I could tell exactly when it played from the way she jolted; then again, if you are shot at by a shotgun in-game, it should be quite frightening.

 

Fire:

Fire is a common element of games since it directly represents damage. The first fire sound I made is the typical one that would be used in games: just the sound of moving air caused by the fire. I created it using brown noise as the base and amplitude-modulated it with LFNoise1.

This first sound could be used for a burning Molotov cocktail, a flamethrower, or maybe even a wizard’s fiery hands.
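A minimal version of that air-movement fire, with the LFNoise1 rate and depth chosen arbitrarily:

```supercollider
// Brown noise amplitude-modulated by LFNoise1.
{ Pan2.ar(BrownNoise.ar * LFNoise1.kr(8).range(0.1, 0.6)) }.play;
```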

The second fire sound I created is the sound of a wood fire, where wooden logs are burning. Even though this may be less important in a shooter game, it is the sound effect I am proudest of. For this sound, I took inspiration from the fire sound effect that we went over in class, but I wasn’t really satisfied with it, since the popping and crackling sounded really digital and dead. After listening to a lot of fireplace sounds on YouTube, I classified the sound into 4 parts: a drone, some sizzling, a crackle, and a pop. The drone is the wind sound, the sizzling is boiling moisture, the popping is like wood and twigs snapping, while the crackle comes from a larger air pocket that bursts and makes a loud crack. The most difficult part of this sound was figuring out how to make the popping and crackling random. Again taking inspiration from the class fire sound effect, I researched the Dust and EnvGen classes, which let random triggers be generated and used to activate a percussive envelope. I set the rate of crackles to about once every 5 seconds, and the rate of pops to about 5 times per second, to make a really active fire.

*looking back, the pops sound a bit too percussive…
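A sketch of the four layers: the Dust densities match what I describe above, while the filters, envelopes, and levels are rough guesses rather than my actual patch.

```supercollider
(
{
    var drone = LPF.ar(BrownNoise.ar, 400) * LFNoise1.kr(4).range(0.1, 0.3);
    var sizzle = HPF.ar(PinkNoise.ar, 3000) * LFNoise2.kr(2).range(0.0, 0.08);
    // pops: ~5 per second, short percussive bursts of bandpassed noise
    var pops = BPF.ar(WhiteNoise.ar, 1800, 0.5)
        * EnvGen.ar(Env.perc(0.001, 0.03), Dust.kr(5)) * 0.6;
    // crackles: ~1 every 5 seconds, louder and a bit longer
    var crackles = BPF.ar(WhiteNoise.ar, 900, 0.3)
        * EnvGen.ar(Env.perc(0.001, 0.1), Dust.kr(0.2)) * 0.9;
    Pan2.ar(drone + sizzle + pops + crackles);
}.play;
)
```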

 

Dash:

Dashing is one of the most common abilities in games, and my goal was to create a dashing sound highlighting the air moving, with a pitch shift to convey speed (because of the Doppler effect). To do this, I layered three sounds on top of each other: an exclusively high-pitched wind noise, an exclusively low-pitched wind noise, and a sine wave frequency-modulated by white noise multiplied by a decreasing line envelope, to get the “whoosh” effect.
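The three layers, roughly (the center frequency and the range of the line envelope are stand-ins):

```supercollider
(
{
    var env = EnvGen.kr(Env.perc(0.05, 0.5), doneAction: 2);
    var hi = HPF.ar(WhiteNoise.ar, 4000) * 0.2;      // high-pitched wind
    var lo = LPF.ar(BrownNoise.ar, 300) * 0.5;       // low-pitched wind
    var mod = WhiteNoise.ar * Line.kr(800, 0, 0.5);  // shrinking noise modulation
    var whoosh = SinOsc.ar(600 + mod) * 0.2;
    Pan2.ar((hi + lo + whoosh) * env);
}.play;
)
```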

 

Sniper:

What’s a shooter game without a sniper rifle? I decided to model my sniper rifle sound on this .50-caliber sniper rifle: https://www.youtube.com/watch?v=BB9Oqf2sBZ4. There are a few parts to this sound: the boom, the chamber moving, and the bullet casing hitting the ground (which I added for fun). For the boom, I took the explosion example we went over in class and adjusted it a bit. For the chamber moving (the clanging sound), I used a bandpass filter on white noise to hone in on a metallic resonant frequency. Finally, for the casing coming out, I used an echo effect on a sine oscillator to imitate the casing hitting the ground and bouncing. I also added a kick sound at the beginning to highlight the boom of the weapon.
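A sketch of the sequence, reusing the \kick def from the pistol sketch; the frequencies, delays, and timings are placeholders for my actual values.

```supercollider
(
fork {
    Synth(\kick, [\amp, 1.0]);
    // boom: a long, dark burst of low-passed noise
    { Pan2.ar(LPF.ar(WhiteNoise.ar, 500)
        * EnvGen.ar(Env.perc(0.005, 1.2), doneAction: 2)) }.play;
    0.4.wait;
    // bolt: bandpassed white noise at a metallic resonance
    { Pan2.ar(BPF.ar(WhiteNoise.ar, 3200, 0.05)
        * EnvGen.ar(Env.perc(0.001, 0.12), doneAction: 2) * 0.5) }.play;
    0.5.wait;
    // casing: a short sine ping fed through a comb delay so it "bounces"
    {
        var hold = EnvGen.kr(Env.linen(0, 1.0, 0.01), doneAction: 2);
        var ping = SinOsc.ar(2500) * EnvGen.ar(Env.perc(0.001, 0.05)) * 0.3;
        Pan2.ar(CombN.ar(ping, 0.2, 0.12, 0.8) * hold)
    }.play;
};
)
```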

 

Rifle:

For this sound, I wanted to imitate an AK-47 (https://www.youtube.com/watch?v=BU-r0ElUru8). The sound structure for this gun is similar to the sniper’s, so I just got rid of the shell-drop sound and made the explosion much shorter. I also adjusted the tuned frequencies of the sounds to get it to sound right, and reduced the kick since it isn’t as strong as a sniper’s. After making this sound, I also figured out how to use a loop to play sounds, and applied it to the previous sounds so that they play automatically.
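That loop idea, sketched with a much shorter shot and a fixed rate of fire (both values are approximate):

```supercollider
(
SynthDef(\rifleShot, { |amp = 0.6|
    var env = EnvGen.ar(Env.perc(0.002, 0.12), doneAction: 2);
    Out.ar(0, Pan2.ar(LPF.ar(WhiteNoise.ar, 1200) * env * amp));
}).add;
)

// Automatic fire: ten shots, one every 0.1 s.
fork { 10.do { Synth(\rifleShot); 0.1.wait } };
```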

 

Reload:

Finally, if there are guns, there is bound to be reloading. There are a few parts to a reload that I was able to figure out from this video (https://www.youtube.com/watch?v=oqFmQYNBwcw): clip out, mag in, then sliding the bolt to get another round in the chamber. Finishing this sound took a fair bit of tweaking of the different sound combinations to get something convincing.
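Very loosely, the sequencing could be sketched as a few mechanical clicks of bandpassed noise played with fork; the frequencies and gaps here are made up purely for illustration.

```supercollider
(
SynthDef(\click, { |freq = 2000, amp = 0.4, rel = 0.06|
    var env = EnvGen.ar(Env.perc(0.001, rel), doneAction: 2);
    Out.ar(0, Pan2.ar(BPF.ar(WhiteNoise.ar, freq, 0.2) * env * amp));
}).add;
)

(
fork {
    Synth(\click, [\freq, 1200]);              // mag out
    0.6.wait;
    Synth(\click, [\freq, 900, \rel, 0.09]);   // mag in
    0.7.wait;
    Synth(\click, [\freq, 2500]);              // bolt back
    0.15.wait;
    Synth(\click, [\freq, 2200]);              // bolt forward
};
)
```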

Reload:

 

Gunshots+reload:

 

Reflection:

After creating all these sounds for a shooter video game, my greatest takeaway is that synthesizing sound effects from scratch is quite tedious and frustrating at times. Some sounds, like a lasergun or wind, can definitely be synthesized with no problem since they are simple by nature. However, other sounds like footsteps or an explosion are so complicated and intricate that it may be better to just record them or create them artificially through Foley methods. Nonetheless, I am glad to have been able to synthesize the sounds that I did, and if I ever make a first-person shooter game, I’ll already have most of the sounds I need.

MIDI Composition!

Intro:

When starting this unit, I was so excited to have greater freedom in what I could create, as I noted heavily in my previous blog post. I was overjoyed to be able to bring my musical ideas to fruition with something other than GarageBand. At the same time, however, it was overwhelming to find a place to start. I would spend a lot of time just staring at my computer screen thinking about what should come next, compared to the previous project, which had long clips that I could just put together.

I first started my project by thinking about the bassline that I wanted, since that was something that really didn’t go well last time. The bassline from “7 Nation Army” was stuck in my head, and I wanted something that clean but didn’t want to copy it, so that was sort of unproductive. I also tried using a step clip with a drum sampler to make the beat, but ended up switching back to the MIDI input version because I couldn’t figure out how to control more instruments. By the end of my first session, though, I had already decided to create a piece with a lead synth that was lively and melodic.

Early on in my project, I created a melody that I would stick with until the end:

The Thematic Melody

In the end, I built off of my melody, then added the drums/percussion, then the bass, and then the ambiance/sound effects.

Components:

Melody:

My melody portion consists of a thematic repeated section spaced out with distinct other sections, as shown in the diagram below.

The labels represent: 1: Repeated “theme”, 2: transition, 3: melodic component, 4: intermission, 5: final transition

With a tempo of 120 bpm, I started off fast with the “thematic” melody, which I made manageable by using slower synth notes (otherwise, all fast synth notes would be absolutely painful). I finished the song with a variation of the last few notes of the main theme. The keys I tried to stick to were D major/minor.

When deciding on the instrument I would use, I settled on the 4OSC plugin. Initially, I was thinking of using the Piano or Flute instrument because I was more familiar and comfortable with orchestral instruments, but I ended up using the “Solid Lead” preset, which was cleaner and livelier.

Drums/Percussion:

I created this part using the built-in 808 drum kit in the drum sampler after making the melody. I ended up using quite a lot of kick drum, which I thought really propelled the song forward and created a steady rhythm. A particularly interesting part of creating this section was using the claves in the “Intermission” section of the melody mentioned above. In my head, I always imagine basic beatboxing when I think of percussion (kick, snare, hi-hat), so experimenting with claves was something exciting and new. I found that claves serve very well as a gentler alternative to the snare, which worked perfectly in the “Intermission”, where I still wanted a beat but didn’t want something too heavy and disruptive.

Usage of claves

I also found the clap to be a good alternative to the snare; it produces a more dramatic effect, which I used at the ends of measures and at the ends of sections.

Bassline:

The bassline was not very interesting, as I was already quite satisfied with my drumline and melody. Besides a groovy bassline I used for the thematic melody portion, the rest of the bassline essentially just supports the melody, playing the lower octave of a note with a bass I found in Subtractive. The bass notes coincided with the timing of the kick, but that didn’t really affect the overall feel too much, because each note sustains longer than the kick, so it can still be heard.

Grooovy Bassline

Ambiance/Sound FX:

Originally, I planned to have a synth pad sample serve as the ambiance, but its key was just way too off, so I had to delete it. Instead, I switched to flowing water as the background for a couple of the sections. The motivation for doing this was that 1) I remembered this idea from before and wanted to try it, and 2) I thought flowing water, with its random pitches, would make my piece more “liquid” and more casual. Of course, I had to use an equalizer to get rid of a lot of the excess noise and used a fade-out to make the transition smoother, but I think it turned out decent. I also decided to add a “sound effects” section to make my piece livelier and not purely instrumental. I chose the “YO” sound effect because I thought it would be a good transition between sections and would give a more human element to my piece. I admit, though, that I added both of these at the end; I didn’t see them as necessities, so I gave them a smaller role compared to the MIDI components.

Plugins/Panning

Panning:

In all cases, I used panning with automation, to have the music move between the left and right channels. I used this in two places: the main melody and the “YO” sound effect. My main melody repeats 3 times, the second time an octave lower than the first, and the third time an octave higher than the first. I panned the first repetition to the center, the second to the left, and the third to the right. I thought this would create a cool contrast, but I feel like, in the end, the reverb/delay may have made the effect less than optimal. For the “YO” sound effect, I automated the panning during the sound effect to create the impression of the “YO” passing by.

Melody Panning Automation
“YO” Panning Automation

Equalizers:

I used an equalizer on each track, mostly in a lowpass/highpass filter style. For the melody, I removed the higher frequencies produced by the synth to get a cleaner and less jarring sound. For the bassline, I also got rid of the higher frequencies, though this was not a very noticeable effect. For the drumline, I boosted the higher frequencies to emphasize the claves and give a bit more commanding power to the snare. As I said above, for the water sound, I got rid of a lot of the low-frequency noise to get a sound more similar to water dripping. Finally, for the “YO” sound effect, I essentially made a bandpass filter centered around the human voice range.

Drumline Equalizer

Compressor:

I used a compressor for each of the tracks to get rid of the jarring loud sounds that sometimes would end up clipping. I feel the most important part was applying the compressor to the drumline, which had the loudest output that was sometimes just way too dominant. The effect of the compressor on other tracks felt very minimal.

Reverb:

I added reverb to make sounds way larger since my composition was basically all MIDI and felt too “artificial”.

Challenges:

I was surprised by how difficult it actually was to compose my own music. I have never made a whole musical piece before, and basically just generate melodies in my head sometimes, so creating something cohesive that blends together was difficult. Also, with so much freedom in MIDI, I didn’t really find much use for recorded samples in my project besides serving as the background or as sound effects. The MIDI interface was also a bit frustrating, as I kept having to switch between the pen and the selection tool using the mouse. But otherwise, it was a fun and enjoyable experience.

Overall:

Final Project Window

This project really forced me to think more musically when creating a piece. I learned quite a lot about what things sound good together from the hands-on experience of literally placing the notes one by one, and about how difficult it can be. I now have a greater appreciation for those who are good at making music, since it seems like a really precise form of art that still has a good degree of freedom to it. In the future, I hope to try to recreate retro pop songs in MIDI in Waveform and make my own parodies/dubs. Anyways, below is my piece, and don’t be afraid to roast me!

My First Attempt at Making Music on Waveform

My desired piece

When first starting off, I wasn’t too sure what I would end up with. I knew that I wanted something relaxing and steady, but besides that, nothing was really set. I have never actually composed music before, and I always imagined composition as harmonizing different instruments and creating an instrumental score. So, when it turned out that we had to find tracks to mix, that in some ways made it harder, since I couldn’t use any of the instrumental/music theory skills that I had, but at the same time it made it easier by removing the hassle of getting an instrument and recording something crisp with it. As I looked through more and more clips, I decided that I would go ahead and create something with a steady beat propelling the song, plus a few sections of melody/chorus.

The creation process

The beginning:

I started my process of learning Waveform by binging all the Tracktion quickstart videos on YouTube and following along. I got a sense of how to record tracks, move them around, and apply basic plugins, and I gained a good understanding of the user interface, which helped when actually mixing the music. After watching the videos, I downloaded the Sanfilippo Imagina drum presets and played around with getting them into Waveform. It was interesting to see how there could be multiple layers of clips for each preset, and how each one could be adjusted to our needs separately. I didn’t need that feature, though, so I just inserted the presets as single clips.

The sounds I used:

I was able to incorporate looping percussion, bass, and ambient sounds, and also had water-bottle “bell” sounds that I recorded, the chorus, and the intro. I first started with the melody track, a catchy loop that I found online, and sped it up to 100 bpm. I next experimented with choosing the right drums to drive the piece. There were so many loops to choose from in the Imagina set, and I ended up picking a standard verse drum loop for the melody. I then added a bass line to see how it would sound, but it was terrible: it was too bouncy for my piano melody and was in a very dissonant key relative to it. After unsuccessfully trying to find a good bassline, I decided to leave that for later and began searching for an intro clip. I found a beautiful one with chimes, and I knew I had to use it. As for the ambiance, I picked a soothing and steady synth one that gives an airy feel and goes well with the piano melody, and I made it start after the first couple of measures of the melody to give a greater sense of buildup. Finally, for the bass, I chose a synth bassline. At first, I thought it was way too staticky, but after learning about filters I easily solved that problem. I also used a modified version of the bassline to serve as the chorus. For my recorded clip, I recorded mono audio of my water bottle being struck, which produces a nice ring. I chose to record this in the bathroom so the acoustics would amplify the resonance.

Audio Plugins/Effects I used:

For the intro clip, I wanted a plugin that ramped up the tension at the end of the clip. At first, I thought a phaser would serve the purpose, but after experimenting with it, I realized the point of a phaser is just to make things sound wobbly, which was the opposite of what I wanted. Looking around for more plugins I could use, I quickly turned to the pitch shifter, applying automation to increase the pitch near the end, which makes it seem like you are entering another dimension and sounded cool to me. For the percussion, I applied an equalizer to boost the low bass and to increase the effect of the hi-hat, which was a bit too quiet. Another major plugin I used was the low-pass filter on the bassline, to get rid of all the staticky higher-pitched noise of the synth until the chorus, which made it sound much better and placed greater focus on the piano melody rather than the synth. For a few other clips, like the water bottle “bell” and the ambiance, I added reverb to make them sound BIGGER. I also added volume automation so that the overall volume would peak near the middle and dissipate near the end.

Challenges:

The hardest part of creating this project was finding clips that meshed well together. As I said before, I was expecting to be able to compose music using virtual instruments, so finding good clips was rather difficult. I attempted to use the pitch shifter plugin to change the pitch of clips to better match my track, but that proved futile and led to quite a bit of distortion. I first searched for clips and sounds on freesound.org, but they didn’t have too great a selection, so I moved on and found looperman.com, which was a decent site for content.

Also, another difficulty was that I had no experience mixing or really thinking about how to mix music before, so I wasn’t really sure whether the clips I put together were natural or very artificial. For example, I’m not sure whether synths and pianos go well together, but I thought it sounded great and included them in my piece. As I get more experience with using the software, looking for clips, and listening to music analytically, I should be able to get a better sense of what goes well together.

Another difficulty I experienced near the beginning of the assignment was navigating the interface. I was frustrated that scrolling didn’t do what I expected it to, and it was a pain to try to resize all the windows only by clicking. As I worked more with Waveform, though, this problem became less and less relevant.

Conclusion:

I can’t wait to learn more about how to use MIDI so that I can create more custom sounds. In the next piece I create, I also hope to make the parts more closely related so they sound better together. The piece I created this time mixes minor and major intervals, and listening to it again now, that doesn’t really sound the best. Anyways, I hope to learn more about Waveform and create some more exciting music. I have attached my piece below; please let me know what you honestly think so I can make it sound better!

Microphone Frequency Analysis: Google Pixel 3

The Google Pixel 3

I’ve always been an Android person, and last year I got the Pixel 3, with its superior camera, wide-angle selfie lens, and the amazing Android system. The Pixel 3 is also interesting in that it has two front-facing speakers to provide good stereo audio for watching videos. However, I was surprised to find out that the Pixel 3 also has a total of three microphones: one on the top edge of the phone for noise canceling, and a microphone next to each of the speakers, both of which can be used for recording. Interestingly, the recorder app built into the phone only uses the bottom microphone (probably the one used for calling), but with another app, I could use both mics to record stereo (which I found out through this video).

After a long time researching, however, I still couldn’t find much information about the specifications of the microphones. The online analyses mainly focused on speaker quality instead (ex: here and here). At this point, I was extremely confused, until I read a Reddit post about converting some audio from 48 kHz to 44.1 kHz (for better CD compatibility or something of the sort). I then researched how Android processes audio and found that Android 5.0 and up records at 48 kHz and automatically resamples audio as necessary (cool vid about how the resampling works here). So, I was able to determine that my phone records audio at 48 kHz. As for the bit depth, I was not as lucky; all I know for sure is that my phone’s mics can record at somewhere between 16 and 32 bits. 16 bits is a minimum requirement these days for cell phones, and an app that I used automatically saved the recording as 32-bit (though I don’t know if that’s accurate). Regardless, 16-bit audio is good enough for most purposes.

 

Recording Apps

There are so many lists online when you search for the “best Android recording app”. The problem is that a lot of them are outdated, with some of the apps no longer on the Play Store or saddled with terrible user interfaces, and many of the lists also feature apps with hefty prices. Honestly, the search tempted me to try to code my own Android app with the features that I wanted.

In the end, I ended up keeping two apps that were decent:

Advanced Audio Recorder (Stereo Sound)

This app was released a very long time ago but still serves the purpose of providing stereo audio. It isn’t really ideal, though, because there is no way to check the bit depth, and the recordings were very quiet.

WaveEditor for Android™ Audio Recorder & Editor

This application ultimately ended up being the one I used, with a neat interface and plenty of features, even including an audio file type converter. I was able to record .wav files at 48 kHz with a supposed 32-bit depth. Below are some screenshots of the app interface.

The Recording Process

Generation of Audio:

I used Audacity to generate two tracks of audio to play (a rough SuperCollider equivalent is sketched after the list):

  1. A sine sweep (chirp) from 100 Hz to 18 kHz over a period of 15 seconds with constant amplitude (0.5)
  2. White noise with an amplitude of 0.8 for 15 seconds
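For reference, roughly equivalent test signals could be sketched in SuperCollider (the actual files were generated in Audacity as listed above; this assumes a linear sweep):

```supercollider
// 1. sine sweep from 100 Hz to 18 kHz over 15 seconds, amplitude 0.5
{ SinOsc.ar(Line.kr(100, 18000, 15, doneAction: 2)) * 0.5 ! 2 }.play;

// 2. white noise at amplitude 0.8 for 15 seconds
{ WhiteNoise.ar(0.8) * EnvGen.kr(Env.linen(0.01, 15, 0.01), doneAction: 2) ! 2 }.play;
```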

Playback Device: My Laptop Speaker (Dell XPS 13 9350)

When deciding on the playback device, I considered using my headphones, but those are meant for short range, so I settled on my laptop speakers. My laptop has two speakers, one on each side of the computer, and I used the left speaker as the main audio source.

Recording Location:

I recorded in my dorm room, which I thought would be very quiet, though there may have been a bit of ambient noise from the fan next door. I recorded on my bed, with the laptop on the bed (while not covering its speakers) and holding my phone there as well.

Recording Method:

For both the sine sweep and the white noise, I varied the distance between the phone and the speaker. I also tested what happens when I increased the speaker volume, and what happens when, instead of holding the phone and pointing the microphone at the speaker, I set it down so that the sound would pass over the microphones.

Recording Results and Analysis

The Data

For each of the recordings, I trimmed the irrelevant portions and used the “Plot Spectrum” function in Audacity. I created two graphics, as you can see below:

 

Observations

  • The general shapes of the frequency response curves for each type of tone were the same.
  • For all recordings, the max frequency recorded was around 17156 Hz.
  • All recordings seem to have a trough at around 16kHz before peaking one last time.
  • All recordings seem to have troughs at around 12.4 kHz, 6.5 kHz, and 3.1 kHz.
  • What’s interesting to note is that the cases where the microphone was right next to the speaker have the most even spectral distributions.
  • Also, it’s good to note that, in general, moving the mic closer to the sound source or increasing the volume of the sound source leads to greater recorded decibels and more frequencies being captured.

Conclusions

The microphones on the Pixel 3 are capable of recording stereo audio at frequencies from 100 Hz to 17.2 kHz. The microphones work better when the sound source is closer, and there are noticeable frequencies that the microphones consistently do not register properly, especially at longer distances. Nonetheless, I still think that the Pixel 3 is one of the best video and camera smartphones on the market 🙂

Other 

Google Drive link to original audio recordings: https://drive.google.com/drive/folders/10-3owRe-hhwJKmCfkiL0x4TglCY9FG76?usp=sharing

The Creation of the Illiac Suite

Named after the computer that created it, the Illiac Suite was revolutionary in the history of music as the first piece of music to be generated by a computer.

Lejaren Hiller – A creator of the Illiac Suite

Lejaren Hiller was born in New York City on February 23, 1924, and had an early interest in music, learning to play the piano, oboe, clarinet, and saxophone as a child. By high school, he already knew how to write orchestral music. Throughout his academic career, Hiller maintained his love for music and composition while simultaneously pursuing the sciences. Hiller entered Princeton University in 1941 to study chemistry and graduated with a Ph.D. in 1947. During his time at Princeton, he also studied musical composition with Roger Sessions and Milton Babbitt, both accomplished composers in a wide variety of musical genres, from orchestral music to more experimental styles.

Hiller, in 1959

Hiller’s professional career was filled with work in both music and chemistry. After obtaining his Ph.D., he landed a job at DuPont (the giant chemical company behind products like nylon and Kevlar), where he worked for 5 years as a research scientist. While there, he continued composing, even finding the time to run a small concert series. In 1952, Hiller began teaching chemistry at the University of Illinois, where he also worked towards an M.M. (Master of Music) in composition. This series of events set Hiller up to use the ILLIAC I, together with Leonard Isaacson, to compose the Illiac Suite in 1956, a pivotal point in his musical career.

Later in his life, Hiller moved away from chemistry to further his work in computer music, creating more experimental works such as Computer Cantata (1963) and Algorithms I-III (1968-72). He helped found the Experimental Music Studios at the University of Illinois and went on to teach composition at the State University of New York.

Hiller’s life was overall a very full one, as he was able to find connections between the scientific method of chemistry and apply it to methodically creating music. Hiller developed encephalitis in 1987 and passed away in 1994. He mentored many throughout his career, and his revolutionary work, like the MUSICOMP programming language, inspired generations after him to pursue musical composition.

 

The University of Illinois at Urbana-Champaign-Birthplace of the Illiac Suite

The University of Illinois at Urbana-Champaign was founded in 1867 and is the original campus of the University of Illinois system. In 1952, the ILLIAC I (Illinois Automatic Computer) was built by the University of Illinois, the first computer built and owned by a US educational institution. This is where Lejaren Hiller and Leonard Isaacson led the programming of the ILLIAC I to create String Quartet No. 4, a.k.a. the Illiac Suite. Now a little bit more about the creation of the ILLIAC I: the original architecture for the computer was taken from a report from the Institute for Advanced Study at Princeton. The University of Illinois actually built two computers of the same design at the same time, but the ILLIAC I’s cousin, the ORDVAC, was delivered to the US Army in exchange for funding for the ILLIAC. The ILLIAC I was extremely bulky, with 2800 vacuum tubes and a weight of around 5 tons. Besides being used to create computer music, the computer was also used to calculate orbital trajectories, and it was retired when the ILLIAC II was completed in 1963. The university would go on to create 5 more ILLIAC computers, with the Trusted ILLIAC completed in 2006.

ILLIAC I

ILLIAC I memory drum
The ILLIAC I
The ILLIAC II

Today, the university serves as a premier research university, with a library system that is the second largest in the US and the fastest supercomputer of any university campus. The university counts 30 Nobel laureates, 27 Pulitzer Prize winners, 2 Turing Award winners, and 1 Fields medalist among its affiliates. From the National Center for Supercomputing Applications to countless discoveries in the computer and applied sciences, the University of Illinois at Urbana-Champaign is one of the top locations for technological innovation.

The Musical Piece

In 1957, the Illiac Suite was generated by Lejaren Hiller and Leonard Isaacson for string quartet. When deciding what instruments the composed music would be played on, technical limits excluded electronic instruments and the piano; instead, a string quartet was chosen for its ability to produce 4 distinct voices that are similar in timbre.

First Movement

The first movement of the Illiac Suite was designed to be polyphonic, with strict counterpoint connecting the melodies. The rules set for this piece included melodic continuity, harmonic unison, and various rules governing the transitions between the voices. Listening to the piece, the tempo shifts from fast to slow, then speeds up again near the end, with all four voices present at the close.

Second Movement

The second movement of the Illiac Suite was created to show that musical notation could be encoded. This piece also features four-voice polyphony, but all four voices are present for the entire piece and form a beautiful, soothing harmonization based on the more complex programming available.

Third Movement

The third movement contrasts greatly with the previous two, not only being played Allegro con brio but also taking a more chromatic approach that mirrors contemporary music. Because the original generated material was far too dissonant, the researchers added more traditional rules to smooth out some of the jarring sections. This movement is also unique in its heavy use of pizzicato and a greater selection of string instrument sounds.

Fourth Movement

The fourth movement is fundamentally different from the previous movements in that the previous movements utilized traditional rulesets in the generation of the music, whereas the generation of this movement employed the use of Markov chains (where the choice of the next note is completely based on probability and the previous note). Despite not using any standard musical techniques, the music created closely resembles modern contemporary music.

Pros and Cons

The creation of the Illiac Suite garnered a lot of negative criticism, as composers claimed that generating music removes all the joy and pleasure of creating music by hand and devalues the time and effort put into composing. Despite the criticism, it was fascinating that a machine was able to create convincing music that sounded great. There were, however, shortcomings. Even though the chords and melodic connections of the piece were close to flawless, the piece lacked a sense of direction and purpose. There was no intuition guiding the piece toward a climax or giving it a flow of themes, as the rules did not encode that. Despite that, the ability to generate music based on simple rules is still astounding.

Sources:

The Editors of Encyclopaedia Britannica. “Lejaren Hiller.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 19 Feb. 2020, www.britannica.com/biography/Lejaren-Hiller.

Ewer, Gary. “What the ‘Illiac Suite’ Taught Us About Music.” The Essential Secrets of Songwriting, 27 May 2013, www.secretsofsongwriting.com/2013/05/27/what-the-illiac-suite-taught-us-about-music/.

“Illiac Suite.” Musica Informatica, www.musicainformatica.org/topics/illiac-suite.php.

“ILLIAC.” Wikipedia, Wikimedia Foundation, 5 Sept. 2020, en.wikipedia.org/wiki/ILLIAC.

Nunzio, Alex Di. “Lejaren Hiller.” Musica Informatica, 2017, www.musicainformatica.org/topics/lejaren-hiller.php.

“University of Illinois at Urbana–Champaign.” Wikipedia, Wikimedia Foundation, 8 Sept. 2020, en.wikipedia.org/wiki/University_of_Illinois_at_Urbana–Champaign.

“University of Illinois School of Music University of Illinois at Urbana-Champaign.” EMS – History – Lejaren Hiller | Music at Illinois, music.illinois.edu/ems-history-lejaren-hiller.