Randomly Generated Lofi in SuperCollider

Intro/Ideas:

My original goal for this project was to create a (semi-)randomly generated classical piece in SuperCollider. However, as I began coding and figuring out how to use randomness, I decided to create a lofi-type piece instead because it’s more specific, (in my opinion) compositionally simpler, and offers more opportunity for different SynthDefs.

“Rewinding Memories”, the piece I was most influenced by

I listened to a few random lofi songs from the Spotify playlist “lofi hip hop music” to get a better grasp of the chords, the structure, etc. Songs I listened to (with notes I wrote down):

  • “Rewinding Memories” by Refeeld and Project AER: low/pulsing drum, higher clap-type around 0:36, bass, guitar around 0:36, shimmering sound around 0:57
  • “Until The Morning Comes” by softy, no one’s perfect: nice acoustic guitar
  • “Put” by kudo
  • “Under Your Skin” by BluntOne and Baen Mow: interesting sound around 1:50
  • “Morning Dreams” by Mondo Loops

After listening to the songs and playing around on the piano, I decided to only use 7th and 9th chords and to never have the bass of a chord be the leading tone (the 7th note of the scale). I was also hesitant about the V chord because it’s so dominant (and the lofi I listened to was very calm and didn’t really follow the typical tonic-subdominant-dominant chord progression), but I ultimately decided to include it with the caveat that it could not occur more than once within the progression. I also came up with a few other rules…

At this point I realized something — it’s hard to balance randomness and having a piece that sounds good! A fully random piece would sound terrible, but I also didn’t want it to sound too contrived (i.e. have the rules be so rigid that the piece barely changed with each iteration/generation).  The chord rules (above) were definitely one of the more contrived sections.

Code!

Next, I started actually coding. I randomly generated a key — using rrand(1, 12) — and a tempo — using rrand(80, 135)/100.
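In code, that setup was roughly the following (a minimal sketch; the variable names and the TempoClock assignment are my own here, while the rrand calls are the ones described above):

    (
    ~key = rrand(1, 12);            // key, as semitones above C
    ~tempo = rrand(80, 135) / 100;  // tempo scale factor, 0.8–1.35
    TempoClock.default.tempo = ~tempo;
    )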

Making Pbinds (generating notes & rhythms)

I decided to start by coding the notes/rhythms for all the different parts (e.g. bass, chords, melody, harmony, percussion). For the chord progression, I just used .rand, if statements, while loops, and lists. This part wasn’t too bad — I had a few lines to randomly generate which chords would go in the ~chords variable, and then a few more lines for edge cases (e.g. if one chord was repeated three times).

For the bass (variable called ~bass), I created a ~n_bass (bass notes) variable and a ~r_bass (bass rhythm) variable to later use to create a Pbind. I randomly generated a slow rhythm for the bass (i.e. only quarter notes and slower) using a few lines heavily involving .rand, and then added the chord notes to ~n_bass. I coded the piano chords (~pc) similarly, with a ~n_pc for notes and a ~r_pc for rhythm.

One interesting thing I found out was that 3.rand randomly gives you 0, 1, or 2, but 3.0.rand gives you a random real number from 0 to 3 — this was messing me up because I was multiplying a variable by decimals at one point, but wanted to do (that variable).rand. To fix this, I used .round.
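To give a flavor of the chord logic, here is a rough sketch of that kind of constrained generation (illustrative only; the rule checks are reconstructed from the rules above, not copied from the project):

    (
    ~chords = List[];
    while { ~chords.size < 4 } {
        var c = 7.rand;        // scale degree 0–6, where 0 = I
        var ok = (c != 6);     // rule: never build a chord on the leading tone
        ok = ok and: { (c != 4) or: { ~chords.includes(4).not } };  // V at most once
        ok = ok and: {         // edge case: no chord three times in a row
            (~chords.size < 2) or: {
                ((~chords[~chords.size - 2] == c) and: { ~chords.last == c }).not
            }
        };
        if (ok) { ~chords.add(c) };
    };
    ~chords.postln;            // e.g. List[3, 3, 0, 5], i.e. IV-IV-I-vi
    )

And the Integer/Float distinction mentioned above:

    3.rand;          // -> 0, 1, or 2 (an Integer)
    3.0.rand;        // -> a random Float between 0 and 3
    2.4.rand.round;  // .round brings a Float result back to a whole number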

Afterwards, I created the melody (~mel1) Pbind (with the associated variables ~n_mel1 and ~r_mel1) — for this, my rules were that it would have 2-5 notes per measure and be all 8th notes or slower. I also wanted to create a melody′ (~mel1b) that was basically the same as the original melody but more active — i.e. with some added notes (2-4 per measure) and a more active rhythm. I also ran into a problem here because I wanted to use .insert but it only works for Lists, so I had to go back to all my initializations and change the Arrays (the automatic class when you write something like ~variable = [1, 4, 0];) to Lists.
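Concretely, the change looked something like this (a minimal sketch of the issue):

    ~notes = [1, 4, 0];        // a literal like this creates an Array
    ~notes = List[1, 4, 0];    // ...so initializations became Lists instead
    ~notes.insert(1, 7);       // List:insert modifies in place -> List[1, 7, 4, 0]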

The following is an example recording of what I had at this point — bass, chords, and melody (so far, with a very basic Synth that I pulled from a previous project; no effects and no percussion).

 

Post window

The above picture is an example of what I had in my post window (not corresponding with the above recording): the key (+ 12 would be 12 semitones up from C, so still C); the chords (the progression is 3-3-0-5, so IV-IV-I-vi, and the fifth number represents the inversion — here, since it’s 0, the chords are all in root position); the melody notes; and the harmony notes.

Making Synths

For the bass, I used PMOsc.ar and applied a low-pass filter. I wanted to have three other distinct melodic sounds: one for the chords, one for the melody, and one for the harmony. For the chords, I wanted the sound to be relatively muted/not too twangy (like the beginning of “Morning Dreams” or around 0:40 in “Under Your Skin”), so I mixed Formant.ar and SinOsc.ar, and then applied a low-pass filter at 400 and a high-pass filter at 200. For the melody, I used the same SynthDef as for the chords, but I changed the envelope, filter, and reverb settings. Lastly, I’d really liked the acoustic guitar in “Rewinding Memories”, so I used a mixture of SinOsc.ar and Pluck.ar to create a guitar-like SynthDef.
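As an illustration, the chord sound described above might be sketched like this (the envelope, bandwidth, and level values are assumptions; the project’s actual SynthDef differs in its details):

    (
    SynthDef(\chordPad, { |out = 0, freq = 220, amp = 0.2, gate = 1|
        var env = EnvGen.kr(Env.asr(0.4, 1, 1.5), gate, doneAction: 2);
        var sig = (Formant.ar(freq, freq * 2, 800) * 0.5) + (SinOsc.ar(freq) * 0.5);
        sig = LPF.ar(sig, 400);   // low-pass filter at 400
        sig = HPF.ar(sig, 200);   // high-pass filter at 200
        Out.ar(out, Pan2.ar(sig * env * amp));
    }).add;
    )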

For the percussion, I also wanted to emulate the sounds in the songs I had listened to. I created a kick based on the one in “Rewinding Memories” (see picture below).
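(The actual code is in the screenshot below; for reference, a generic SuperCollider kick along the same lines might look like this — the numbers are assumptions, not the project’s values.)

    (
    SynthDef(\kick, { |out = 0, amp = 0.5|
        var env = EnvGen.kr(Env.perc(0.005, 0.3), doneAction: 2);
        var freq = EnvGen.kr(Env([120, 45], [0.08], \exp));  // quick pitch drop
        Out.ar(out, (SinOsc.ar(freq) * env * amp) ! 2);
    }).add;
    )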

Code for "kick" SynthDef
Code for “kick” SynthDef

I struggled for a long time on the higher, snare-type sound. I really wanted it to sound like the higher percussion in “Rewinding Memories” around 0:45 (which I realize probably isn’t a snare, but I called it a snare for lack of a better word), but mine ended up sounding pretty different. Overall, I was happiest with my kick and my guitar sounds.

Other Synths

I created a “compression” SynthDef using Compander.ar, and a “dynamic” SynthDef that (per its name) takes a Pbind and changes its dynamic level.
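A rough sketch of what a Compander-based compression SynthDef can look like (the bus index, threshold, and slope values here are assumptions):

    (
    SynthDef(\compress, { |out = 0, in = 16, thresh = 0.4|
        var sig = In.ar(in, 2);
        sig = Compander.ar(sig, sig, thresh,
            slopeBelow: 1, slopeAbove: 0.5,
            clampTime: 0.01, relaxTime: 0.1);
        ReplaceOut.ar(out, sig);
    }).add;
    )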

Percussion Pbinds

After creating the percussion synths, I coded the rhythms similarly to how I did the bass — I didn’t want the drums to be too active (because the lofi beats were all pretty mellow), so I restricted the rhythms to 8th notes or slower. However, I did include the possibility for one 16th or 32nd in the kick (see picture).

In the picture to the left, there is a 3/5 probability of adding a 16th or 32nd note — i.e. we randomly choose one of the durations in ~r_kick and split it into a 16th or 32nd note plus a remainder (so that the two pieces add up to the original duration).
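In sketch form (not the exact code from the picture; durations are in beats, with 1 = a quarter note):

    (
    ~r_kick = List[1, 0.5, 1, 1.5];        // example kick rhythm
    if (5.rand < 3) {                      // 3/5 probability
        var i = ~r_kick.size.rand;         // pick a duration to split
        var short = [0.25, 0.125].choose;  // 16th or 32nd note
        var rest = ~r_kick[i] - short;     // remainder preserves the total
        ~r_kick[i] = short;
        ~r_kick.insert(i + 1, rest);
    };
    ~r_kick.postln;
    )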

The following is an example of what I had at this point: chords, kick, snare, bass, and melody.

Ordering/Structure:

I didn’t really know how to use randomness in the structure while still maintaining a semblance of a regular song progression, so this part was more rigid: I completely wrote out two different possibilities (one starting with the chords, and one starting with the melody/bass), and then used .rand to randomly determine which one to use each time. The songs I listened to tended to start quietly with just one or a few sounds, build, suddenly drop out, build again, and then slowly fade out.

Option 1: starting with chords

Option 2: starting with melody/bass
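The random choice between the two is then just a coin flip of sorts (sketch):

    ~structure = 2.rand;  // 0 -> Option 1 (chords first), 1 -> Option 2 (melody/bass first)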

Editing/Progression of Recordings

Note: if you’re going to listen to one, listen to recording 13 (my favorite of the ones below)! I also like the second halves of both 11 and 14.

Recording 7:

 

Recording 8: added a pan to the guitar/harmony (which comes in around 0:50)
note — needs more reverb/notes need to last longer

 

Recording 10: added quieter dynamics to the beginning/end, added compression around this point (might have been after this recording, not sure)

 

Recording 11 part 1: edited bass length
note — chords sound too gritty/full, especially during middle section — they’re detracting from the other instruments

Recording 11 part 2: I liked the harmony (technically melody 2) in this one, which comes in around 50 seconds into the full piece (about 30 seconds into this recording) and continues to the end.

 

Recording 12: edited dynamics, added code to randomly remove 0-2 notes from each chord (a sketch of this appears after the recordings); I didn’t like the melody of this one as much, but I liked the chord progression

 

Recording 13: added more editing on the chords; one of my favorites!

 

Recording 14: just another example (split into two parts because it was too long)

 
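One more technical note before closing: the “randomly remove 0-2 notes from each chord” step from Recording 12 might look roughly like this (an illustrative sketch, not the project’s exact code):

    (
    ~chordNotes = List[List[60, 64, 67, 71], List[65, 69, 72, 76]];  // example chords (MIDI notes)
    ~chordNotes.do { |notes|
        3.rand.do { notes.removeAt(notes.size.rand) };  // drop 0, 1, or 2 notes
    };
    ~chordNotes.postln;
    )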

Closing thoughts:

As I brought up earlier, this project was definitely about striking a balance between true randomness and creating a piece that sounded good. I definitely think that choosing a lofi piece (rather than sticking with a more traditional/classical genre) made it easier to incorporate randomness, since there was more freedom in the melodies and the chord progressions. Overall, the piece is far from random — it has traces of my style all over it in the choices that I made (for example, the rules for the chords, or the rhythm durations for each instrument).

Extensions

There were a few things I wanted to do that I either couldn’t figure out or ran out of time for/decided not to prioritize:

  • using Markov chains: I was debating whether or not to use Markov chains (specifically, ShannonFinger, which is part of MathLib) from the beginning. My initial reluctance was because ShannonFinger is based on data that you input, so whatever the product turned out to be would be very similar to the data I chose (and I didn’t want that — I wanted something entirely new/more unique). However, if I had more time, I think it could be beneficial (melodically) to use ShannonFinger for the melody and the harmony to make them sound more like real music — the only problem is that I would have to figure out how to use it while still keeping the notes primarily in each measure’s chords.
  • fade effect: I wanted to figure out how to make the volume of a Synth fade without having to create a near-identical Synth with an edited \amp. I tried creating a SynthDef (in fact, this is where my “dynamic” SynthDef originated), but couldn’t figure out how to change the argument in the midst of applying the effect to a sound (i.e. Pseq didn’t work like it does in Pbind). I also tried to figure out if I could apply an FX synth to the middle of another synth (nope), and even tried using CmdPeriod.run to completely break off the sound (but that threw errors). One possible direction is sketched after this list.
  • figuring out more randomness in terms of structure of the piece
  • continuing to clean up the sound of the instruments
  • opening & closing ambience: a lot of the lofi pieces I listened to had atmospheric noises (air, creaking doors, etc.) in the beginnings and ends that I really liked. A SynthDef to make a creaking door sound or a rain sound would be an interesting project
    • in a similar vein, some of the pieces I heard had sort of “shimmering” sounds that would be interesting to try to replicate in SuperCollider
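For the fade effect, one possible direction (not something the project ended up using; the bus index and fade time here are assumptions) would be an effect synth that reads the instrument’s bus and multiplies it by a Line:

    (
    SynthDef(\fadeOut, { |out = 0, in = 16, time = 4|
        var sig = In.ar(in, 2);
        var env = Line.kr(1, 0, time, doneAction: 2);  // linear fade over 'time' seconds
        ReplaceOut.ar(out, sig * env);
    }).add;
    )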

MIDI & Waveform

Concept
I started by using the keyboard and trying to come up with chords that I liked. I came up with a chord progression and a melody, and I put those into MIDI (which took a long time!). I experimented with a few different sounds in Subtractive — I wanted something relatively clean/natural-sounding (because I liked how the piece sounded on piano), so I clicked on “Clean” under the character category and found Classic PWM JH (which I used for the chords — though I did later add some filters, after which it no longer sounded as natural). For the melody MIDI, I used “Turn Me Loose Gliss Brass” — it had a more pointed feel, which I liked for the melody.

2 MIDIs – Melody & Chords

  • I wanted the piece to start out relatively quieter, and I thought that the chord MIDI was too brassy, so I applied a low-pass filter (which was very effective!). I additionally applied the filter to the melody MIDI, which was also a little too brassy.
    • I wanted the second 4-measure phrase of the melody to be brassier, so I increased the cutoff frequency
  • In the chords MIDI, I wanted the 3rd and 4th 4-measure phrases (counting from when the melody comes in) to be lighter, so I used a high-pass filter to reduce the lower notes; however, I wanted the last D major chord before the next section to stand out, so there I increased the low-shelf gain
  • I also played around with panning in the beginning, having it go from subtly left to subtly right a few times (using “Automation Write Mode” in the bottom right corner)

    Panning in the beginning
  • I randomized the velocities for the MIDI clips (but couldn’t completely hear the difference)
  • There were specific notes in the chords (around middle C) that I wanted to emphasize, so I used an EQ filter and changed some velocities
  • I experimented a little with “Nonlinear Space” (because I wanted to experiment with the space of the sound), but wasn’t a huge fan of how it made the MIDIs sound, so I turned it off

Percussion/Sounds

  • I used “CM 7 Kicks” from the SSLIB because I liked the sound — I used the Time Warp filter to change the rhythm to fit my project
    • When I added reverb, there was a high-pitched snare-type noise, so I used EQ to emphasize the lowest frequencies
  • I also used “Ambition Beat” from Waveform itself – I wanted to emphasize the higher-frequency noises, so I used EQ (see picture below)
  • I added a “whoosh” noise (SWEEP003 from SSLIB) to end the first section, and then added an “A” played on the harp (also from SSLIB) to add some texture/variation to the second section

Challenges/Issues/Errors

  • With the melody MIDI (Turn Me Loose Gliss Brass), at first when I had two notes at the same time it slid from one to the other (ex. I had a B and then added an A halfway through, and it slid from B to A) — to fix this, I changed “Glide” in Subtractive to 0
  • Errors
    • I had a weird error: with the notes [c bflat c], the b flat wasn’t playing; to fix it, I quit and re-opened Waveform
    • I had a strange error when I was quitting Waveform — no idea why it happened (see picture below)
  • For a while I was writing automation but wasn’t able to hear it — I realized that I had turned “Automation Read Mode” off
    • in the picture to the right, the top left is “Read Mode”, and the second button in the first row is “Write Mode”

Overall

Overall, I enjoyed the freedom of using MIDI to be able to write my own melodies/chord progressions. However, it was definitely a lot of work to input each note individually! Hopefully, in the future, I’ll be able to use a keyboard or other instrument to input notes. With more time, I would have added strings and other instruments to the project and experimented more with the settings within Subtractive itself.

The Final Project

HW 3: Waveform 11 Project

Start/Inspiration/Loops

I watched some of the Waveform videos on YouTube and listened to many of the samples included with Waveform (and began to make a list of ones that I liked). I also downloaded the Imagina Drum Loops but didn’t end up using any of them.

I later listened to some samples in the SSLIB, and I was intrigued by the many variations of chirping birds. I’d already decided to have a relatively mellow song, and the birds inspired me to lead the listener through someone’s morning — a story. The birds start chirping, and a gentle jazzy piano comes in; later, drums (and then bass) come in, and there are sounds of running water and a razor (both of which I recorded in the bathroom using Voice Record) in the background (as the person is getting ready for school/work/their day). The music builds, but just as the listener anticipates the climax, a doorbell rings (a sudden interruption to the person getting ready). Everything goes quiet, except for the bell echoing. The birds slowly begin chirping again, and the piano from the beginning softly returns. The project ends with a car revving (a sound I also found in SSLIB).

Birds, at the start

The birds are most clearly chirping in the beginning and the end, but I used (and altered) the clips elsewhere in the piece. For example, when the drums first come in, there’s a sort of ‘whoosh’ on each beat — I took this from a bird chirp. Near the end of this part (right before the piano changes and the bass comes in), I used a quieter part of one of the chirping clips and applied “Step Volume”.

I got all my atmospheric sounds (birds, various bells, etc.) either from SSLIB or from my recordings. I also got two percussive-type sounds from SSLIB — a sort of rattling sound (which I also used reversed) and a kick drum. Other than those, everything else (piano, horns, drums, bass, the ‘whoosh’ right before the “doorbell”) was from the Waveform library.

Effects/Plugins

  • Reverb: I definitely used this plugin the most. I added heavy reverb to the kick drum and the bass to give them a fuller sound, and I also added reverb to the drums once the bass comes in to make their sound change a little (to distinguish a new section). In addition, I added at least a little reverb to most of the other clips (other than the atmospheric ones).
  • Delay: I liked using this one, too — I used it with the samples that I added heavy reverb to, for the same reasons
  • Pitch Shift: I used this sparingly because I often found that shifting the pitch resulted in the sample sounding very unnatural. I did, however, like using pitch shift at the end of a sample and shifting it down (see picture, left) — it added a sense of momentum/sudden change.

    Pitch Shift – the black part on the right

  • Warp Time: I used this once on the second recurrence of the chirping in an attempt to make the chirping on the beat — it worked, but there were a few places where the transition between one chirp to the next seemed sudden or cut off (though I think this is more of an issue with the way I “warped” the time than the way the effect itself works). See screenshot of orange clip, below.

    Warp Time – the white markers at the top
  • LPF (and 4-Band EQ, which I basically used as a HPF): I added an LPF to the track with bird chirps in an effort to reduce low-frequency sounds — honestly, I’m not sure how effective this was or whether there was a noticeable difference; same with the 4-Band EQ. I had one bell in the “birds” track for which I definitely heard a change (a loss of the low frequencies) — I moved this bell to a different track because I actually wanted to preserve the low frequencies there.
  • Step Volume: This one does volume in “steps” — i.e. volume is on for a section, and then off, then back on, then off, etc… I only used this once, but I enjoyed the effect it gave! It entirely changed the clip and gave it a more beat-like vibe. See picture of blue clip, right.

    Step Volume
  • Compression: I tried applying compression but honestly didn’t really understand the effects/hear a difference

Automation

I was definitely scared/confused about automation at first, but I ended up using it frequently because it gave me more control over how each plugin was applied. For example, in the picture above (orange clip “GO_BirdsVarious-12”), I used automation on the volume because I wanted it to fade in, get softer for a bit, and then rise in volume again. Another clip I used automation on was “GO_BirdsVarious-4”, where I just used the surrounding noise instead of the clear bird sounds themselves. This was the clip I used “Step Volume” on, and I used automation on “Volume” to make it gradually fade in and then fade out.

Volume Automation on “GO_BirdsVarious-4”

Notes/Errors

  • I had one error, which appeared after I unplugged my headphones and then plugged them back in. My solution was to close & re-open Waveform.

    Error message
  • A note: some of my effects disappeared when I merged looped clips together, so I could no longer see what plugins I had added — from now on, I’m definitely going to avoid merging looped clips together until the end

The Project

HW2: Recording with the iPhone SE

The iPhone SE (2020)

My iPhone SE has two built-in microphones (one at the top, and one at the bottom) and stereo playback. The bit depth is 24 bits and it has 48 kHz playback. Interestingly, “left and right channels are inverted when playing back music in landscape mode” (dxomark.com).

from Apple, https://www.apple.com/iphone-se/specs/

Finding an App

I found “Voice Record Pro” through https://www.popsci.com/record-better-smartphone-audio/ — an app that allows both manual gain control and an option to enable iOS’s automatic gain control. Before starting a recording, it lets you choose various settings such as sample rate, bit depth (max. 32 bits), channels (e.g. stereo), encode quality, and record format (options include .mp3, .mp4, and .wav, among others). In addition to the settings shown in the first picture (below, left), there is an additional screen of settings so that the user can customize their experience (below, right).

      

I was also considering Spire, which had good reviews on the App Store. In addition, an article described the gain input as adjustable, but I wasn’t able to find the setting on the app. Overall, Spire has a sleeker interface but fewer options for audio settings, so I used Voice Record Pro.

Recording + Analysis

I recorded in two locations: my suite’s common room (with the windows and doors closed to minimize background noise; has some furniture but overall pretty bare) and my closet (very small; has a good amount of clothes hanging). In the common room, I had three distances: near (phone right next to the computer, with the bottom of the phone closest to the computer speaker); medium (phone and computer about 2/3 of the width of the room away); and far (maximum distance away in the common room – computer in one corner on the floor, and phone in the opposite corner on a high shelf). In the closet, I had two distances: near (defined same as the common room) and medium (defined as the common room’s “far” – in two opposite corners; I called it medium because the distance between the phone and computer was significantly shorter than the “far” I used in the common room).

The record settings I used are the ones shown in the picture (above, left) with the exception of bit depth, which I changed to 24 bits. There was definitely some background noise, especially for the common room recordings — at least twice a car drove by, making a noticeable sound.

Recordings & Graphs

Recordings can be found here; the Google Doc with the tables + graphs is here.

Sine Sweep

Graph of original/generated sine sweep, made using Audacity
My sine sweep graphs

Analysis

  • The “common room, near” looked the most similar to the original graph, though the “common room, near” graph tapered off at the end (whereas the original graph stayed around the same level)
  • I was surprised by how different the “common room, near” looked in comparison to the “common room, medium” and “common room, far”; in addition, I would have expected the “common room, near” and the “closet, near” to look relatively similar (as they do for white noise – graphs below).
  • All 5 graphs seemed to reach a relative minimum (in dB) around 700-800 Hz. Before this point, the graphs were relatively smooth; after this point, the graphs became “bumpier” (more variation in the same time span). The original graph seemed to reach a relative minimum slightly later, around 800-900 Hz.
  • The peaks for the “common room” recordings varied between all five recordings. The two closet recordings’ dB levels both had relative maxima around 400 Hz, but the “closet, near” recording actually became louder at the end — I would guess that this was from background noise.
  • Overall, there is relative spectral flatness (especially compared to the white noise graphs)

White Noise

Graph of original/generated white noise
My white noise graphs

Analysis

  • For the white noise, the overall shape of the graphs of the three recordings from the common room seemed more similar – though the beginnings were still slightly different
  • The differences between the original and these graphs seem greater than the differences between the original and my graphs for the sine sweep (especially comparing the original graph and the “common room, near” graph)
  • The two “near” recordings (one in the common room and one in the closet) had the most spectral flatness; the other three varied significantly more
  • These white noise graphs seem to have more fluctuation than the sine sweep graphs

Overall

  • The “near” recordings tended to have the strongest recording quality and be the most similar to the original graph; in addition, they had the most spectral flatness
  • However, overall my phone’s recordings varied significantly from the generated recording, suggesting that factors such as background noise influenced the recording quality

Sources

De Hillerin, Marie Georgescu. “Apple iPhone SE (2020) Audio Review.” DXOMARK, 29 June 2020, www.dxomark.com/apple-iphone-se-2020-audio-review/.

Nield, David. “How to Record Better Audio on Your Phone.” Popular Science, 30 Sept. 2018, www.popsci.com/record-better-smartphone-audio/.

“Apple iPhone SE – Specifications.” DeviceSpecifications, 2020, www.devicespecifications.com/en/model/7a423ad7.

HW 1: Computer Music History Research (Ge Wang)

You may not have heard of Ge Wang before, but you’ve probably heard of some of the projects he’s started or been involved with. From the mobile apps Smule, Ocarina, and Magic Piano to laptop orchestras and the programming language ChucK, Wang’s work has had wide-ranging impacts on the relationship between technology and music.

Ge Wang (photographer unknown)

Ge was born in China in 1977 and moved to the U.S. as a child for his father’s work. He was always interested in music, but his parents encouraged him to study something more practical. He majored in computer science at Duke and then entered Princeton as a graduate student. There, he developed the programming language ChucK, which was specifically tailored for computer music and designed to be easy to read and write. In addition, ChucK introduced a “strongly timed” model that allowed programmers to both “flexibly and precisely control the flow of time in code” (The ChucK Audio Programming Language | PhD Thesis) and to develop programs spontaneously (i.e. as they ran). These features, along with the “Audicle” environment, made “live coding” possible – a phenomenon that, while relatively new, is nevertheless interesting and significant for a study of the history of computer music.

Example of live coding (from here)

Live coding emerged in the 21st century with the development of languages such as ChucK in 2002 (and, a decade later, Sonic Pi, which was mentioned briefly at the end of class). It is a performance practice where a programmer codes to make sound, and edits the code during the performance so that the sound being produced also changes; for examples of this, see here (Ryan Kirkbride using Python/SuperCollider) and here (Sam Aaron using Sonic Pi). Its existence speaks to how much computer music has progressed simply in the last half-century.

Building on his work with ChucK and the live performance of technology and music, Ge Wang has also figured prominently in another recent computer-music development – laptop orchestras. In fact, he actually used ChucK as a teaching and development mechanism for the Princeton Laptop Orchestra (PLOrk), a laptop orchestra that he co-founded as a graduate student in 2005. Later, after he became a professor at Stanford, Wang started the Stanford Laptop Orchestra (SLOrk). But what is a laptop orchestra?

Well, a laptop orchestra is an orchestra of laptops — a group of people who use laptops (and other various technologies) to create music. In the SLOrk, the fifteen students each receive a MacBook connected to a speaker, a pillow, and a game controller called a GameTrak that turns physical movement into numbers. There are also physical instruments, wooden blocks, and Ikea breakfast trays, according to Arielle Pardes’s inside look for Wired. How the instruments work varies widely, as each student is meant to create their own during the term. One past project involved hanging GameTraks on a beam and pushing them like swings to create a song. Another used FaceOSC to convert facial movements into sound. Due to the randomization that is often involved, the music produced varies widely as well, even potentially for the same song. Below are two examples of performances by the Stanford Laptop Orchestra; the first is from a 2013 performance where, in fact, they actually played a piece by Wang; the second features a piece composed by students (under the teaching of Wang).

Laptop orchestras, ChucK, and the invention/popularization of live coding performances mark a distinct change from how music was produced using computers during the mid- and late 20th century. Further, they speak to how much technology has become “smaller, faster, and cheaper” (as mentioned in the slides last week) — instead of huge, clunky machines, we now have laptops that are relatively affordable and provide immense opportunities for music composition and production.

Ge using his Ocarina app; taken by Timothy Archibald, featured on IEEE Spectrum

Outside of teaching and conducting laptop orchestras, Ge is also active in coding mobile apps. In 2008, Ge co-founded Smule with Jeff Smith — a popular app that serves as a simplified recording studio, a place to film videos of yourself singing, and an easy way for people with limited equipment to make music at home. There are options to sing a duet with someone else who has already recorded their part, add filters, and see the lyrics and a pitch tracker. Today, Smule has over 200 million users, and though the music produced/available may be relatively simple and pop-oriented, the existence and popularity of the app are an important representation of how technology and music interact today. Under the umbrella of Smule, Wang also invented an app called Ocarina, which was inducted into the Apple Hall of Fame in 2010 and allows the user to play the phone like an ocarina (an ancient flute) by blowing into the mic and tapping buttons on the screen.

His most recent endeavor has been writing a book about his experiences in music and technology, and how the two topics both shape each other, design, and our society. More information can be found at https://artful.design/.

Sources:

Pardes, Arielle. “The Aural Magic of Stanford’s Laptop Orchestra.” Wired, Conde Nast, 8 June 2018, www.wired.com/story/stanford-laptop-orchestra-tenth-anniversary-concert/.

Perry, Tekla S. “Ge Wang: The iPhone’s Music Man.” IEEE Spectrum, 2009, spectrum.ieee.org/geek-life/profiles/ge-wang-the-iphones-music-man.

Solsman, Joan E. “Smule May Be the Biggest Music App You Haven’t Heard Of.” CNET, CBS Interactive Inc., 14 July 2018, www.cnet.com/news/smule-is-the-biggest-music-app-you-never-heard-of/.

Wang, Ge. The ChucK Audio Programming Language “A Strongly-timed and On-the-fly Environ/mentality”. 2008. Princeton University, PhD dissertation. Princeton Department of Computer Science, https://www.cs.princeton.edu/~gewang/thesis.pdf.

Wang, Ge. Curriculum vitae. https://ccrma.stanford.edu/~ge/cv.pdf.