Platformer Sounds in SuperCollider

Platformers are a genre of games that involve heavy use of climbing and jumping in order to progress. Examples include Super Mario Bros., Hollow Knight, and VVVVVV.

Inspiration

Over Thanksgiving break, I played a lot of games… maybe a little too much! This project is heavily inspired by a few of them. For example, I picked up Hollow Knight, a fun yet frustrating single-player platformer. The sounds of jumping, using items, swinging a nail, and the ambient background followed me into my sleep. I thought I could try replicating some of them for my final project.

Ambient Sounds

Snowy Environment

The first pieces of the sound pack I started working on were the ambient sounds. These would be used to indicate the environment of the current level of the game. I began by creating a snowy feel using the LFNoise1 UGen in a SynthDef.


At first, I had trouble configuring the audio signal to work with the envelope. I had only defined the attack and release times for the percussive envelope, which caused the amplitude to decay linearly; that was not what I wanted. Instead, I wanted the sound to stay at the same level until the end, where it should taper off. To remedy this problem, I used the curve argument of Env and set it to 100 instead of the default -4.
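
Here’s a minimal sketch of what that SynthDef might look like; the noise rate, filter cutoff, and durations are placeholder guesses rather than the exact values I used:

(
SynthDef(\snow, { |amp = 0.3, dur = 8|
	// curve: 100 holds the level for most of the release, then drops off sharply at the end
	var env = EnvGen.kr(Env.perc(0.5, dur, curve: 100), doneAction: 2);
	// LFNoise1 as the wind-like source, darkened with a low pass filter
	var sig = LPF.ar(LFNoise1.ar(1000), 400);
	Out.ar([0, 1], sig * env * amp);
}).add;
)

Synth(\snow);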

Here’s the resulting sound clip:

Rainy Environment

Moving on, the next ambient sound I created was for a rainy environment. I used a similar approach to the snowy instrument, but instead of a low pass filter, I opted for a high pass filter. I also changed the frequency to more accurately capture the sound of hard rainfall.


After that, I applied the reverb effect to it using the Pfx class.
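
Here’s a sketch of that combination; the cutoff and reverb settings are rough placeholders:

(
SynthDef(\rain, { |amp = 0.3, dur = 8|
	var env = EnvGen.kr(Env.perc(0.5, dur, curve: 100), doneAction: 2);
	// high pass instead of low pass, with faster noise for hard rainfall
	var sig = HPF.ar(LFNoise1.ar(4000), 1800);
	Out.ar([0, 1], sig * env * amp);
}).add;

// a simple reverb synth for Pfx to wrap around the pattern
SynthDef(\reverb, { |out = 0, mix = 0.4, room = 0.8|
	var sig = In.ar(out, 2);
	ReplaceOut.ar(out, FreeVerb.ar(sig, mix, room));
}).add;
)

Pfx(
	Pbind(\instrument, \rain, \dur, Pseq([8], 1)),
	\reverb, \mix, 0.4, \room, 0.8
).play;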

Background Music

For the background music, I wanted to experiment with some orchestral sounds. It was pretty difficult to get a convincing SynthDef for strings, so I looked at sccode for ideas. Someone had written code that maps samples to MIDI notes (linked here), so I used that as the basis for the granular SynthDef, which uses samples provided by peastman and loads the instruments into buffers. Here’s the code that maps the instrument samples to MIDI notes. It’s a long block of code, so you’ll have to open the image in another tab to view it.
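
Since that block only survives as an image, here’s a rough sketch of the general idea; the file names, note range, and \sampler SynthDef are made up for illustration:

(
// one buffer per MIDI note, looked up by note number
~samples = IdentityDictionary.new;
(48..72).do { |note|
	~samples[note] = Buffer.read(s, "samples/strings-" ++ note ++ ".wav");
};

// play a (stereo) sample back, freeing the synth when it ends
SynthDef(\sampler, { |bufnum, amp = 0.5|
	var sig = PlayBuf.ar(2, bufnum, BufRateScale.kr(bufnum), doneAction: 2);
	Out.ar(0, sig * amp);
}).add;
)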

Now that the instrument was defined, I decided that I wanted the music to have some chromatic notes to produce an unnerving sound, something that would be played during a final boss fight. I configured it to use the default TempoClock at 60 bpm and kept it simple with a 4/4 time signature. It is mainly in E major, but as I mentioned, there are some chromatic notes like natural Cs and Ds. Here’s the resulting clip.
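
A stripped-down sketch of this kind of pattern (not the actual piece), reusing the hypothetical \sampler and ~samples from the sketch above:

(
TempoClock.default.tempo = 1; // 60 bpm

Pbind(
	\instrument, \sampler,
	// mostly E major, with chromatic C and D naturals for the unnerving feel
	\midinote, Pseq([64, 66, 68, 60, 71, 62, 68, 64], inf),
	\bufnum, Pfunc { |ev| ~samples[ev[\midinote]] },
	\dur, 1 // quarter notes in 4/4 at 60 bpm
).play;
)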

Walking

Moving on to the sounds that would be triggered by events, I started by creating the walking sound. I looked back at the sound_fx.scd file to follow this piece of advice:

Wise words

I recorded some sounds from a game and put them into ocenaudio and Audacity to use their FFT features. Here are the results from ocenaudio, highlighting only the portion containing a single footstep.

ocenaudio footstep FFT.

I noticed that frequencies around 1000 Hz and below were the most prominent, so the footstep’s frequency content should probably be emphasized around there. The recording also includes fairly loud ambient sound, which probably explains the higher frequencies.

I attempted to replicate this sound using a SinOsc UGen and a Line to generate the signal. I used the Line for the frequency argument of the SinOsc because it gives me more flexibility, since a footstep does not have a constant frequency. I configured the Line to start at 200 Hz and end at 1 Hz, resulting in this footstep sound, which I ran through an infinite Ptpar.

SynthDef(\walking, {
	var sig;
	// quick downward sweep: 200 Hz to 1 Hz over 20 ms, freeing the synth when done
	var line = Line.kr(200, 1, 0.02, doneAction: 2);
	sig = SinOsc.ar(line);

	Out.ar([0, 1], sig * 0.6);
}).add;

~soundFootstep = Pbind(
	\instrument, \walking,
	\dur, Pseq([0.3], 1)
);

(
// loop the one-shot footstep pattern forever
Ptpar([
	0, Ppar([~soundFootstep], 1)
], inf).play;
)

Jumping

For the jump sound effect, I wanted a bubbly type of sound. I used a similar approach to the footstep SynthDef, but I made the Line rise in frequency instead of fall. I also made the envelope and Line sustain significantly longer.

ocenaudio jump FFT

I took note of the FFT of the jump sound from the game, but it didn’t produce the type of sound I wanted. It was really harsh, so I modified the frequencies a bit, resulting in this sound clip:

SynthDef(\jumping, { |time = 0.25|
	var sig;
	var env = EnvGen.kr(Env.perc(0.01, time), doneAction: 2);
	// rising sweep for the bubbly feel; the envelope frees the synth,
	// so the Line no longer needs its own doneAction (which cut the release short)
	var line = Line.kr(100, 200, time);
	sig = SinOsc.ar(line);

	Out.ar([0, 1], sig * 0.8 * env);
}).add;

~soundJump = Pbind(
	\instrument, \jumping,
	\dur, Pseq([2], 1)
);

(
Ptpar([
	0, Ppar([~soundJump], 1)
], inf).play;
)

Landing

I had a GENIUS idea of continuing to use the Line class to create my signals. You’ll never guess how I achieved the landing sound effect. Well, you might. I just swapped the start and end frequencies of the line. It ended up making a decent landing sound, which sounds like this:

I tweaked it a little more and lowered the end frequency to 10 Hz, which I think sounds a bit better.

SynthDef(\landing, { |time = 0.1|
	var sig;
	var env = EnvGen.kr(Env.perc(0.01, time), doneAction: 2);
	// falling sweep; the envelope handles freeing the synth
	var line = Line.kr(200, 10, time);
	sig = SinOsc.ar(line);

	Out.ar([0, 1], sig * env);
}).add;

~soundLand = Pbind(
	\instrument, \landing,
	\dur, Pseq([2], 1)
);

(
Ptpar([
	0, Ppar([~soundLand], 1)
], inf).play;
)

Picking up an item

When I brainstormed what kind of sound to give the item pickup, I thought of Stardew Valley and its sounds, specifically the harvesting. I used this as an excuse to take a break from homework and play the game for research purposes. The sound of picking up items seemed pretty simple. Here’s what I came up with, using this code:

SynthDef(\pickup, {
	var osc, sig, line;
	// short upward chirp; the envelope frees the synth, so the Line
	// doesn't need doneAction (which was cutting the 0.75 s release short)
	line = Line.kr(100, 400, 0.05);
	osc = SinOsc.ar(line);
	sig = osc * EnvGen.ar(Env.perc(0.03, 0.75, curve: \cubed), doneAction: 2);
	// sig already contains the oscillator, so it isn't multiplied by osc again
	Out.ar([0, 1], sig * 0.8);
}).add;

~soundPickup = Pbind(
	\instrument, \pickup,
	\dur, Pseq([1], 1)
);

(
Ptpar([
	0, Ppar([~soundPickup], 1)
], inf).play;
)

Throwing away an item

For the final sound of this project, I made something to pair with picking up an item. I wanted it to have a somber tone that elicits an image of a frown, the disappointment of the discarded item emanating from itself. As sad as the fact that this class is ending. Anyways, here’s the sound!

SynthDef(\throw, {
	var osc, sig, line;
	// downward sweep mirrors the pickup chirp; again the envelope,
	// not the Line, frees the synth so the release can finish
	line = Line.kr(400, 50, 0.2);
	osc = SinOsc.ar(line);
	sig = osc * EnvGen.ar(Env.perc(0.03, 0.75, curve: \cubed), doneAction: 2);
	Out.ar([0, 1], sig * 0.8);
}).add;

~soundThrow = Pbind(
	\instrument, \throw,
	\dur, Pseq([1], 1)
);

(
Ptpar([
	0, Ppar([~soundThrow], 1)
], inf).play;
)

Reflection

Doing this project has made me more appreciative of the sound engineering in every game I play. I now find myself analyzing the different sounds that developers choose to incorporate and sitting in thought about how they made them. There are surely some games, though not many, that use programming languages like SuperCollider to synthesize such sounds. Exploring new concepts like buffers was pretty challenging, but it showed me the wide range of choices available in SuperCollider.

Thanks for reading my writeup. Hope you enjoyed!

Using MIDIs in Waveform

The Inspiration

Lighter keys are part of the scale

I used Daft Punk’s Short Circuit as the inspiration for this song. I liked the chords in the second half of the song, so I did my best to recreate them in Waveform. This gave me a chance to experiment with scales in Waveform and learn to sequence notes into chords. I used the G# minor scale, and Waveform is pretty helpful for working with scales: when you set the key, the piano roll highlights which notes are on and off the scale, as seen on the right.

I wanted to have a similar structure to the song, so I decided to create two sections. The first is crunchier, while the second is more atmospheric. The song is in common time.

The Skeleton

First Section – Crunchy


MIDI Chords

I used the Crystal 2 preset from the 4OSC plugin. I split the chords into two by duplicating them and applying slightly different effects to each track. For example, the first track uses a phaser.

Chords 4OSC Settings

I automated both of the tracks containing the main chords to change the pulse width of the 4OSC plugin, giving it a more distant sound after the 8th bar.

Drums

The drums follow a simple pattern; I’m not really sure how to describe it, or whether it counts as syncopated, but here’s how the sequencer looks for the rhythm. I used the Drum Sampler with the 909 preset and adjusted the pads’ sensitivity to get the sound I wanted.

Drum Beat for 1st section

I set the velocity pretty low because I did not want the drums to overpower the other instruments.

Grungy Lows

I used the Subtractive plugin and found a preset called “Kill the Woofer JH”, and it sounds exactly like what you think it would. I inserted a step clip to see what it would be like to use one for something other than drums. It still worked, and I found a pattern that I liked.

Second Section – Atmospheric

Airy Chords

I used the same chords as the first section, but with only one track this time. I used the same 4OSC plugin, and I found and used NC-17 to increase the sound level.

Bass

I downloaded one of the additional plugins from the Tracktion Download Manager and used the RetroMod 106 plugin for this track. I used it as a lead-out to the end of the song, and it blended pretty well with the atmospheric section.

Strings and String Break

Originally, I wanted to add a string solo to the second section. When I rendered a quick MP3 export of the piece, it cut out a measure of the strings and produced what sounded like a string breaking. You can hear it at around 2:20 in this first render. (Turn down your volume; it’s normalized.)

Instead of a string solo, I decided to recreate that effect to create a feeling of unrest. I couldn’t find an exact way to reproduce the sound, so I added a note from a chaotic-sounding MIDI instrument. I found a preset called “Waveform Percussions” in the Subtractive plugin. It’s an interesting one because it seems to be randomized on every playback, so it does the job in a different way each time.

Using Plugins with MIDIs

A lot of plugins do not work with MIDIs, but there are a few that do. Here are some of the ones I experimented with:

  • Reverb – does not work with MIDIs
  • Low Pass Filter/High Pass Filters – do not work with MIDIs
  • Phaser – works with MIDI. I used it with the 4OSC plugin.

Mixing

After finishing up the main components of my piece, I decided to listen for any changes that should be made to make the sounds complement each other better.

Drums

4-Band Equaliser on claps

At first I kept the drums constant throughout the whole piece, but I changed that when I felt they didn’t really fit the second section. Instead, I replaced the second section’s percussion with claps, which I had to turn from a sample into a MIDI clip in order to apply Reverb. The reverb introduced a lot of highs that were just really unpleasant to hear, so I used the 4-Band Equaliser plugin to get rid of those annoying frequencies.

Bass

During the mixing stage, I felt the bass was too intrusive in the outro, so I decided to change the instrument used by the plugin. The bendy bass was called Alpha Juno Bass; I used Juno-60 Bass 02 instead. It mixed better with the atmospheric outro and did not stand out too much from the other instruments.

Main Chords

I panned the two tracks separately; the first is slightly more to the left channel and the second slightly more to the right.

Audio Bus

I noticed that two of my tracks both used a phaser, so I rerouted them to a final track, which I called the Phaser FX track. It applies the effect to both the grungy lows and the primary chords from the first section.

Here’s the result. I had some trouble getting the volumes right: they sounded like they were at the right levels during playback in Waveform, but exporting completely changed the levels. I could definitely improve here, because the second section turned out louder than it should have.

My Waveform Experimentation

My aims for this project

To explore the possibilities of Waveform, I wanted to create an arrangement of sounds that produces unexpected results. For example, I turned a drum loop into an ambient track. The goal is not for the piece to be harmonious. I simply wanted to explore the different sounds and strategies available before focusing on tonality.

Features

  • Spooky ambient noise
  • Foley sound effects recorded with Samsung Galaxy S10
  • Vocals recorded with iPad Pro 2020

Recording Environments

I recorded clips in two different environments with different acoustic settings.

1) An isolated room in the Bass Library with three brick walls and one glass wall

Bass Library Room

This is a rectangular room with many surfaces for sound to bounce off of, so naturally there is a lot of echo. I recorded one clip before the melody comes in. The audio is noticeably reverberant.

2) My (quiet at the time of recording) bedroom

My bedroom

This room had much less echo than the first room I recorded in. Perhaps Stiles’s lack of right angles helped. I recorded the outro vocals here.

Samples

I used samples from the SSLIB library and some loops from the addons included in the Tracktion Download Manager. Some of the drum loops turned out to be MIDI clips, which required me to assign a plugin in order to produce sound; I chose the Hypnotize plugin. This is how I created the ambient noise at the beginning of the track: it was originally a drum loop from the Riccardo Lombardo collection, but it needed a plugin to output audio. For some of the samples I used the looping feature, and for others I copy-pasted portions of the clip to stay in rhythm.

Plugins

My most used plugins were:

  • Delay
    • I feel delay creates a nice effect on songs. I used to play with delay pedals on guitar, so I wanted to experiment with this plugin, and it did what I wanted. You can set the length and delay to your liking.
Delay settings
  • Pitch Shift
    • Used this to modify the rising pitches of the vocals. Even with pitch shift, my voice still doesn’t sound good!
  • Reverb
    • I used this on the metal sound effect that I recorded to give it a more directional and atmospheric feel. It felt flat with the original recording and adding reverb made it feel more present.
  • Chorus
    • This plugin gives a fuller sound to the recordings. I had to make sure I didn’t overdo it, because it sounded very artificial when the chorus was set too high.
  • Low Pass Filter
    • I used this plugin to cut out the airy highs of my mic recordings. I did not have a dedicated microphone at my disposal, so my recording quality was quite subpar. By the way, the Bass Media Equipment Checkout has reopened, and you can borrow microphones for a weekend! I’m going to be trying out an XLR mic and an audio interface for the first time on Friday. Here’s a link to the reservation page

Automation

To be honest, seeing this word intimidated me, although it wasn’t at all complicated in reality. I thought it would involve scripting, but it’s just adjusting a few settings to your liking; it’s much like drawing when and where you want the sound to change. After several minutes of tinkering with this function, I found there are different ways you can apply automation.

Automation button
  1. Pressing the A button and applying from there
  2. Dragging the A button to a plugin or the volume
  3. Creating a new subtrack by pressing the + button below the A, then applying the automation there using options 1/2

I found #3 very useful. By creating a subtrack, you can more easily view which automations you have applied without cluttering the track space. It creates a smaller track under the original and lets you apply each automation in a separate row, which greatly helps manage tracks with many automations and plugins.

Voice Panning

For my automations, I chose to use volume changing and panning.

When the melody changes its pattern, I applied an automation to the bass to lower its volume until the melody reverted to its first pattern.

In the outro vocals, I used audio panning to oscillate between the left and right channel.

The Product

Samsung Galaxy S10 Microphone Report

Recording Capabilities

The Samsung Galaxy S10 boasts two microphones, giving users the option of stereo recordings. There is one microphone on the top of the phone and another at the bottom.

It is able to record at 48 kHz at 320 kbps and supports 32-bit depth audio.

S10 Microphone Locations

Report

Model: Samsung Galaxy S10 Snapdragon Edition

App used to record: ASR Voice Recorder (Used to record frequency sweep from ATH-M40x)

Recording Setup
  • Has gain switch
  • Allows user to choose microphone
  • Lossless recordings with FLAC
  • Sample rate up to 48 kHz

ASR Settings:

  • FLAC
  • 48 kHz
  • Stereo
  • Gain set to 0
  • Noise Cancellation Off

Settings for generating the sine sweep in Audacity (an equivalent SuperCollider sketch follows the list):

  • Linear
  • 100 Hz to 18,000 Hz
  • 0.8 amplitude
  • 15 seconds
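
For comparison, here’s one way the same sweep could be generated in SuperCollider (a sketch, not what I actually used):

(
{
	// 100 Hz to 18 kHz, linear, over 15 seconds, at 0.8 amplitude
	var freq = Line.kr(100, 18000, 15, doneAction: 2);
	SinOsc.ar(freq) * 0.8
}.play;
)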

Results


Frequency Analysis of Test Environment #1

According to Audacity’s spectral analysis, the S10’s frequency response seems to be moderately flat, apart from some dips near 8,000 Hz and 12,000 Hz. It was able to record above 20 kHz, even though the generated tone only reached 18 kHz; there may have been faint overtones generated within the recording environment. With these results, I wanted to do another test, this time with the headphones placed further away from the phone.

Recording Setup #2

This time, the directionality is obviously more spread out. Here are the results of this setup:

Results of Test #2


Moving the headphones further away made for a more realistic test, but it produced a much bumpier frequency response. There is a trough from 100 to 300 Hz, and few frequencies above 18 kHz are present.


Conclusion

The Samsung Galaxy S10 is capable of capturing sound well, albeit with some inaccuracy. It can pick up frequencies beyond the range humans can hear, though those frequencies are not always present. Overall, the microphone does its job well in general cases. It may not be the best for recording your fire mixtapes, but it is useful for general-purpose recording like casual singing or interviews.

Max Mathews, Mortuos Plango, and IRCAM

Max Mathews

Trailblazer in electronic music

The background of the pioneer of sound synthesis

Mathews contributed greatly to the advancement of electronic music with groundbreaking creations such as the MUSIC-N programs (I through V) and GROOVE. How in the world was he able to take such large strides in electronic music production?

One would likely assume that lifelong familiarity with musical instruments lies behind Mathews’s works, but that is not the case. Mathews learned to play the violin in high school, but he called it an “inefficient music instrument” because it required far more practice than other instruments (40 hours a day?), and he stated that he would rather learn the computer or the instruments he creates.

Mathews with the Radio Baton and the computer. Image credit: Computer History Museum

His educational background specialized in electrical engineering: he earned his bachelor’s degree at Caltech and received his PhD from MIT, both in the field. Following his studies, he went on to serve as director of the Acoustical and Behavioral Research Center at Bell Laboratories starting in 1965. In 1987, he became a professor at the Center for Computer Research in Music and Acoustics at Stanford University.

Mathews’s research contributed to the development of several programming languages and programs with musical applications beyond MUSIC-N, such as GROOVE (a system that stored the musical inputs from an external instrument), Graphic 1 (used for graphical sound synthesis with a light pen), and more.

Mortuos Plango, Vivos Voco

Directly translated as “I mourn the dead, I call the living”, this work by Jonathan Harvey used Max Mathews’s MUSIC V program to manipulate recorded sounds, which allowed Harvey to bridge the two main sound sources of the piece: his son’s voice and the tolling of a bell. This manipulation produces a surreal and haunting effect, especially as the bells and the vocals transform back and forth into each other. Once the source sounds were recorded, they could be transformed in remarkable ways; a BBC feature on electronic music describes the piece’s ability to “move seamlessly from a vowel sung by the boy to the complex bell spectrum consisting of 33 partials”.

An octophonic piece

Example of octophonic speaker arrangement

Harvey’s work achieves its eerie atmosphere through effective use of octophonic sound, where audio is reproduced using eight channels, one for each speaker. You may be familiar with 8D audio edits on YouTube; those use the same concept to produce a sense of where a sound is originating from. Harvey uses it heavily, and it is largely responsible for the eerie atmosphere of the work, with the bells tolling across the listener’s ears and Harvey’s son’s voice rapidly moving around the perceived soundstage.
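
As a toy illustration of the concept (my own sketch, not from Harvey’s realization), SuperCollider’s PanAz can rotate a sound around a ring of eight speakers:

(
// requires a device with 8 output channels, e.g.
// s.options.numOutputBusChannels = 8;
{
	var sig = PinkNoise.ar(0.2);
	// pos sweeps through -1..1, circling the sound around the 8 channels
	PanAz.ar(8, sig, LFSaw.kr(0.25))
}.play;
)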


Institut de Recherche et Coordination Acoustique/Musique

IRCAM played a big role in advancing electronic music; in fact, it was the institution that commissioned Jonathan Harvey to create Mortuos Plango. IRCAM encouraged collaboration among engineers and composers alike.

A Musical Powerhouse

“Composers seeking new sound structures with new instruments know that they can find the most powerful equipment, and freshest ideas, at the institute.”

IRCAM Musical Workstation

Prior to the invention of the IRCAM Musical Workstation (IMW), the synthesis of music was often split between two machines. One machine was designated as the orchestra, responsible for generating the sounds themselves, and the other as the score, responsible for controlling the sound generators. The IMW aimed to bridge this dichotomy and enable real-time production of music.

4X System using several subsystems

Audio Signal Analysis

In addition to IRCAM’s hardware advancements, the institute also developed methods for audio signal analysis, whose purpose is to extract musical parameters, such as pitch, from an instrument’s signal. According to a 1991 paper by Cort Lippe, Zack Settel, Miller Puckette, and Eric Lindemann, they worked on tracking a variety of elements, including portamento, glissando, trill, tremolo, flutter-tongue, staccato, legato, sforzando, and crescendo.

IRCAM is widely known as a place where engineers and musicians exchange and produce ideas, and it has undeniably been critical to the advancement of electronic music into the 21st century.


Sources

Roads, C., and Max Mathews. “Interview with Max Mathews.” Computer Music Journal, vol. 4, no. 4, 1980, pp. 15–22. JSTOR. Accessed 9 Sept. 2020.

Schreiber, Barbara. “Max Vernon Mathews.” Encyclopædia Britannica, https://www.britannica.com/biography/Max-Mathews.

Harvey, Jonathan. “‘Mortuos Plango, Vivos Voco’: A Realization at IRCAM.” Computer Music Journal, vol. 5, no. 4, 1981, pp. 22–24. JSTOR, www.jstor.org/stable/3679502. Accessed 10 Sept. 2020.

BBC Radio. Cut and Splice, 2005. http://www.bbc.co.uk/radio3/cutandsplice/mortuos.shtml

Machover, Tod. “A View of Music at IRCAM.” Contemporary Music Review, vol. 1, no. 1, 1984, pp. 1–10. DOI: 10.1080/07494468400640021.

Lippe, Cort, Zack Settel, Miller Puckette, and Eric Lindemann. “The IRCAM Musical Workstation: A Prototyping and Production Tool for Real-Time Computer Music.” 1991.