Creating the “Beatboard”

The Beatboard™ (Prototype v0.1)

Initial Inspiration

For my final project, I was inspired to construct an E-Z music making machine. For drummers like me, music with notes can be hard. I can wrap my head around simple roman numeral chord progressions, but when I want to make music live, I find my process of mapping harmonies to actual notes or chords on the piano to be prohibitively slow. By the time I can actually figure out where to put my fingers, I’ve already lost my creative train of thought. So, instead of going through any semblance of real musical training, why not engineer a solution to make being a musician a little easier? With the seedlings of an idea, I started to sketch out some concept designs. Here are a couple (out of many):

Initial design ideas

The main interface I converged on centered around an array of buttons that would allow me to select different diatonic chords of a certain key based on their root. I’d organize the buttons by ascending scale degree—instead of note name—so I’d be able to think in roman numeral harmonies as I played. Together with some knobs and sliders to tune different parameters, as well as two screens to display different instrument choices, the current key, and the current tempo, most of the interfaces I designed were organized something like the following:

Main interface
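The scale-degree mapping behind those buttons is simple enough to sketch in a few lines of Python (the language I’d end up using for hardware control). This is just an illustration—the function name and conventions here are my own, not code from the project:

```python
# Illustrative sketch: map a scale degree (0 = I, 1 = ii, ...) in a major
# key to the MIDI notes of its diatonic triad.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offset of each scale degree

def diatonic_triad(key_root, degree):
    """Return MIDI notes of the triad built on `degree` of the major key
    whose tonic is the MIDI note `key_root` (60 = middle C)."""
    notes = []
    for step in (0, 2, 4):  # root, third, fifth, stacked in scale steps
        octave, deg = divmod(degree + step, 7)
        notes.append(key_root + MAJOR_SCALE[deg] + 12 * octave)
    return notes

print(diatonic_triad(60, 0))  # I  in C major: [60, 64, 67] (C E G)
print(diatonic_triad(60, 1))  # ii in C major: [62, 65, 69] (D F A)
print(diatonic_triad(60, 4))  # V  in C major: [67, 71, 74] (G B D)
```

Each button just needs to know its scale degree; the key knob would shift `key_root`, and everything else follows.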

I figured that this sort of instrument would best lend itself towards being more of a groove-maker or a drums&bass backtrack, so I started designing with that goal in mind. I also had several ideas for adding controls for some sort of lead voice (flex resistors, gyro sensors, a 12-channel capacitive sensor wired to an assortment of musical fruits), but I figured I’d save them for a later revision. I also realized I’d need a project name. “Beatboard” sounded nice to me, and since then I’ve stuck with it.

Choosing the Technology

Now that I had a somewhat fleshed-out idea of what I wanted to make, I started thinking about how I’d want to implement it. One important detail is that I wanted the box to be completely independent: plug in a speaker or a pair of headphones and play, no computers attached. This pretty much ruled out Arduino as the main brains of the machine, as implementing sound synthesis in bare-bones C on an 8-bit microcontroller with only a few KB of program memory sounded like a nightmare. Instead, I turned to Raspberry Pi. Pis are miniature single-board Linux computers that can fit in the palm of your hand. They’re cheap ($10 for the model I’m using), accessible, and fit my project specifications very well: with an assortment of GPIO (General-Purpose Input/Output) pins, I’d be able to control the hardware I needed, and as a computer running a full operating system, I’d be able to run our beloved SuperCollider as a synthesis engine. I’d also been wanting to become more comfortable with Raspberry Pis for a while, and this seemed like the perfect opportunity to do so.

With the project platform decided, I started organizing a general block diagram of the different components I’d need. Here’s an early draft I drew:

Block diagram

These component groups deserve some individualized attention, so I’ll go into a bit more detail about each one.

Hardware Components

The Raspberry Pi

Raspberry Pi Zero, with Quarter for size

I’d decided that I wanted to use Raspberry Pi for this project, but that still left me with the decision of which Raspberry Pi to choose. Just like Arduinos, Pis come in a variety of sizes and capabilities. Since I wanted this project to eventually fit in a small handheld box, I went with the Raspberry Pi Zero. It’s quite tiny, and although it lacks a bit in processing power compared to its older brothers (more on that later), it’s still equipped with the same 40-pin GPIO header as the larger models. You can get a Pi Zero for as little as $5, but I shelled out an extra $5 to get the version with WiFi built in—which came in handy as I was rapidly beaming it code updates from my laptop.

While Raspberry Pis are surrounded by a wonderful community of makers supplying countless documentation pages, tutorials, and help forums, I still had some trouble with some of the initial setup. After flashing a desktop-free, lite OS to the SD card (as I’d just be running the thing headless), I spent an entire day figuring out how to configure the Pi to automatically connect to my WiFi without hooking it up to a screen and keyboard. Several hours of pain and a few hundred Chrome tabs later, I found myself on the other end with a sufficiently self-initializing computer and a far more intimate understanding of Linux network device setup than I had ever really wanted. After some extra hassle enabling kernel modules for the various hardware serial buses I was going to use, I had the brains of the project prepared for the time being.
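In case it saves someone else that day of pain: on the Raspberry Pi OS (Raspbian) images of that era, the documented trick is to drop a wpa_supplicant.conf file onto the SD card’s boot partition (alongside an empty file named ssh to enable the SSH server), and the Pi copies it into place on first boot. The SSID, passphrase, and country code below are placeholders:

```
# /boot/wpa_supplicant.conf — picked up and installed on first boot
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourNetworkPassword"
}
```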

The Pi connected to all its friends. How cute!

Audio Out – The Digital to Analog Converter

Choosing the Pi Zero came with one particular caveat: it has no audio jack! This left me with several options: I could 1) Extract the audio from its miniHDMI port, 2) Use the Pi’s PWM pins and a basic circuit to hack together an audio signal, or 3) Hook up a DAC over some serial bus (DACs, or Digital-to-Analog Converters, convert a digital signal where audio is encoded as a stream of bits to an analog signal of varying voltage that can drive a speaker). Option 3 seemed like the most reliable and easiest to implement, so that’s what I went with. While I could probably have hooked up an off-the-shelf USB audio adapter to the Pi’s single microUSB port, that option would have involved several adapters and felt a bit clunky. Instead, I found this awesome board built specifically for Pi Zeros. It includes a quality DAC and amplifier packaged into a compact PCB that plugs straight into the Pi’s GPIO header—no wiring necessary. It communicates with the Pi over I²S, a 3-wire serial protocol that carries pulse-code-modulated audio. This was perfect for my purposes, as I’d soon learn that GPIO pins were a limited resource. I’d need to keep access to the Pi’s pins for the rest of the hardware, so I wouldn’t be taking advantage of its convenient plug-and-play functionality, but by looking at the board’s circuit diagram I was able to figure out which pins the board needed to function, and wired them up manually. Here it is, integrated into the current prototype of the machine:

I2S Audio Board

Analog In – Reading Potentiometers for Knobs and Sliders

Another important feature the Raspberry Pi lacks is an integrated Analog-to-Digital Converter (or ADC). ADCs encode voltages as digital values, and are essential for reading the types of knobs and sliders I would use for the project. These knobs and sliders are all potentiometers, variable resistors which can be hooked up as a simple voltage divider, which yields a varying voltage as the potentiometer’s resistance changes. Rotational potentiometers can be used as knobs, and linear potentiometers can be used as sliders. In order to read these voltages, I used a small IC called the MCP3008, which can read 8 separate channels at 10-bit resolution (encoding a voltage from 0-3.3V as a number from 0-1023). This little chip communicates with the Pi over a serial protocol called SPI, a somewhat annoying and antiquated spec that eats up another 4 of my precious GPIO pins. The Pi includes integrated hardware for SPI, however, so communicating with the IC wouldn’t be too much of a pain (no bit banging needed!). In the end, I hooked up my 8 analog channels to five knobs, one slider, and the two axes of a joystick that I’d use to navigate Beatboard’s menu. Here’s the chip, wired up to its children:

The MCP3008 ADC
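For reference, the SPI exchange the MCP3008 expects is short enough to sketch by hand. This follows the framing in the chip’s datasheet (a library handles it in the actual project, and the helper names here are my own):

```python
def mcp3008_request(channel):
    """Build the 3-byte SPI transaction for a single-ended read:
    a start bit, then single-ended mode + channel number in the next
    nibble, then a padding byte clocked out while the result comes back."""
    assert 0 <= channel <= 7
    return bytes([0x01, (0x08 | channel) << 4, 0x00])

def mcp3008_decode(response):
    """Extract the 10-bit result from the 3 bytes clocked back in."""
    return ((response[1] & 0x03) << 8) | response[2]

def to_volts(raw, vref=3.3):
    """Scale a 0-1023 reading to the voltage on the potentiometer wiper."""
    return raw * vref / 1023

# A full-scale response decodes to 1023, i.e. ~3.3V on the wiper
print(mcp3008_decode(bytes([0x00, 0x03, 0xFF])))  # 1023
```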

The Screens

Here’s a surprisingly easy one! The two screens I chose to use communicate over the lovely I²C protocol (not to be confused with I²S, the sound protocol), which uses just two wires for data. The best part of this protocol is that practically limitless devices can be connected in parallel to the same two data lines—each device is given a unique 7-bit address that the host device can target messages to. This makes wiring a breeze, and saves on the GPIO pin usage! As an added bonus, both screens are OLEDs, giving them low power consumption and high contrast, making them easily readable even at low resolutions. Here are the two screens with some text on them. The left screen has the main instrument selection menu, while the right screen has some general system info. I didn’t have time to implement the right screen in the program, but I intend for it to display the current key and tempo.

Monochrome OLED screens, with 128x64px (left) and 128x32px (right) resolutions

The Button Matrix

I’d designed a board that required an array of 21 (3×7) buttons, but I certainly didn’t have 21 GPIO pins left to read them. A common way to circumvent this issue is to wire the buttons in a matrix, connecting all of one side of the switches together in columns, and all of the other sides together in rows. That way, the Raspberry Pi can read all the buttons with just 3 rows + 7 columns = 10 pins! This setup requires some intricate shenanigans to get a proper readout, however. On a given read cycle, the column pins are all set to inputs while each row is sequentially driven high, and any column that reads high reveals a pressed button in that row—bah! Too complicated to implement by hand; here’s a Python library that does it for us.

An early test prototype of the button matrix. Some button matrix designs require diodes for a proper readout, but the Pi can trigger its IO pins fast enough that they aren’t required.
A closer look at the button matrix in the current prototype
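For the curious, the scan that library performs boils down to something like the following sketch. The pin-access functions are injected so the logic can run (and be tested) without hardware—this illustrates the technique, not the library’s actual code:

```python
def scan_matrix(n_rows, n_cols, set_row, read_col):
    """Scan a button matrix: drive one row at a time, read every column.

    set_row(r, state) drives row line r high or low; read_col(c) reads
    column line c. Returns the set of (row, col) buttons currently held.
    """
    pressed = set()
    for r in range(n_rows):
        set_row(r, True)        # energize just this row
        for c in range(n_cols):
            if read_col(c):     # a high column means button (r, c) is down
                pressed.add((r, c))
        set_row(r, False)       # release the row before scanning the next
    return pressed

# Simulate a single press at row 1, column 4 with fake pin functions
active_row = [None]
def set_row(r, state): active_row[0] = r if state else None
def read_col(c): return active_row[0] == 1 and c == 4

print(scan_matrix(3, 7, set_row, read_col))  # {(1, 4)}
```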

Putting it All Together

For my first prototype, I wired everything on a large breadboard just to make sure all the components would work together nicely. I’m currently powering it all from the 5V rail of my benchtop DC power supply, which is a convenient—albeit temporary—solution. While I definitely plan to solder everything together and put it in a fancy box with its own battery later down the road, this revision of the hardware was all I had time for in the scope of a couple weeks’ work. With a few GPIO pins left over, I hooked up the switch that closes when you click the joystick down as a “menu select” button, and called it a day. I even still have enough pins for an extra row in the button matrix, or an entire second MCP3008 ADC! (Multiple SPI devices can share 3 of the data pins, but each requires its own “chip select” pin to specify who the host is talking to.)

The result is the first prototype of Beatboard’s hardware, pictured at the top of this blog post.  Here’s my current working wiring diagram:

My working wiring diagram, drawn over a printout of the Pi’s GPIO header

The Software

With the hardware complete, I was left with an entire second world of trouble: actually programming the system to do something. I’d never done any hardware interfacing with Raspberry Pi, so I started with the basics:

Pi Zero LED Blink Test (Click for video!)

Workflow

I was running the Pi entirely headless, so after configuring it to automatically connect to my home network, I’d ssh into it from my laptop for control. I found this to be much smoother than dealing with the USB cables and serial debug window required to work with Arduino. As for the code itself, this project was written in Python (for hardware control) and SuperCollider 3 (for audio synthesis), along with several bash scripts for startup/automation. I kept everything in a GitHub repo, and would pull code to the Pi to execute after pushing changes from my laptop. This was a pretty convenient and easy-to-implement system, the only downside being the abundant single-line commits every time I needed to fix a syntax error. (Most of the libraries I was using would only run on the Pi, so I couldn’t do any local tests beforehand.)

A screenshot of my typical code upload workflow: SSH terminal on the left, GitKraken on the right

Environment Setup on the Pi

Setting up the dev environment on the Pi was mostly straightforward. Aside from some smaller details, the main challenge was setting up the system audio to work with my I²S DAC. Fortunately, Adafruit, the company I bought the board from, included a helpful install script that only required some minor tweaking to get everything working smoothly. Oh yeah—aside from installing SuperCollider. That deserves its own section.

Installing SuperCollider

I’d gotten this far into the project without bothering to check if SuperCollider would actually run on my Pi Zero. Genius. I was overjoyed to realize that SuperCollider installation on Linux is notoriously difficult (according to this blog post by someone very special). In most cases, you have to build from source, which according to documentation on the SC3 GitHub takes multiple hours on the Pi Zero. Lovely. Luckily, the Raspberry Pi community comes to the rescue! User redFrick has published compiled standalone downloads for SC3, along with step-by-step installation instructions, for every single Pi model. redFrick: I don’t know who you are, but I love you. The installation process came with only a few typical snags—my jack2 install conflicting with an apt package that came preinstalled on the system, and some audio bus conflicts with the superfluous programs installed by Adafruit’s I²S DAC helper script, to name a few. In the end, I had sclang and scsynth running on my Pi Zero, with audio properly routing through I²S! Never in my life have I been happier to hear a 440Hz sine wave.

Hardware Control with Python

All the button IO, SPI/I²C serial communication, screen drawing, and menu logic were implemented in Python. Most of the code was centered around Adafruit’s CircuitPython environment, which provides a set of hardware interfacing libraries that are a lot more convenient than the default RPi.GPIO library. All the code can be viewed in the project GitHub repository. (/python/beatboard.py would be a good place to start, if you’d like to take a look).
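Of those pieces, the menu logic is the easiest to show without any hardware attached. Here’s a stripped-down sketch of joystick-driven menu navigation—the item names are placeholders, not Beatboard’s actual instrument list:

```python
class Menu:
    """Minimal cursor over a list of items, driven by joystick up/down,
    with the joystick's click-down switch acting as 'select'."""
    def __init__(self, items):
        self.items = items
        self.cursor = 0

    def move(self, delta):
        # wrap around so the cursor never falls off either end
        self.cursor = (self.cursor + delta) % len(self.items)

    def select(self):
        return self.items[self.cursor]

menu = Menu(["kick", "snare", "hat", "bass"])
menu.move(-1)         # nudging up from the top wraps to the bottom
print(menu.select())  # bass
menu.move(1)
menu.move(1)
print(menu.select())  # snare
```

In the real program the `move` calls are triggered by thresholding the joystick’s two ADC channels, and each selection redraws the OLED.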

Python development environment. I used VSCode for everything except SuperCollider.

Audio Synthesis with SuperCollider

This project led me to a lot of the parts of SC3 intended for live coding, which were super cool to dive into. I have an incredible appreciation for this language—I thought it was rather tedious at first, but I’m starting to really understand how powerful it is as a music creation tool. After a lot of internet searching, I became very close friends with Pdef, Pdefn, and PatternProxy—all classes I hadn’t touched for any of the psets. On the macro scale, my SuperCollider script defines a lot of SynthDefs and a handful of different patterns, then plays them according to several control parameters that the script exposes to the network (more on that last part in just a moment). The SuperCollider portion of this project is actually still a bit rough around the edges, and I’m looking forward to optimizing and expanding a lot of the code. Several things I’d like to improve: 1) saving large libraries of SynthDefs to files on the Pi, 2) unifying my pattern implementation (PatternProxy vs Pdefn for live control? Pdef vs global variables? I’m still figuring out both what’s available and what I like best), and 3) organizing a more thoughtful control API for other programs to interact with.

A screenshot of some SuperCollider code

Bridging the gap between Python and SC3: Open Sound Control

One important task remained: I had to figure out how to get my Python and SuperCollider scripts to actually talk to each other. Luckily (and honestly, to my absolute surprise) there’s a standard communication protocol that SuperCollider uses called Open Sound Control (OSC). Open Sound Control messages can be sent over a simple UDP connection, and consist of a “target path”—specifying what should be manipulated—followed by a list of arguments—specifying how the target should be manipulated. SuperCollider actually uses OSC to talk between sclang and scsynth. For my purposes however, I just needed to communicate with sclang, which can be achieved with an OSCdef, which allows you to bind a handler function to a specific OSC target. In the screenshot above, you can see my OSCdef for responding to messages that want to alter the pattern of Beatboard’s bass instrument. One annoying limitation I encountered is that SuperCollider doesn’t support sending arrays via OSC. As a hotfix, I decided just to send the root of each chord instead of a list of notes, but I’d like to come up with a better solution in the future. Meanwhile, on the Python side of things, sending OSC messages was super easy with the pyliblo library.
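To make the “target path + arguments” framing concrete, here’s a minimal hand-rolled encoder following the OSC 1.0 spec: a null-terminated path padded to 4 bytes, a padded type tag string, then big-endian arguments. In practice pyliblo does all of this for you, and the /beatboard/bass path is just an illustration:

```python
import struct

def _pad(b):
    """Pad bytes out to the 4-byte boundary OSC requires."""
    return b + b"\x00" * (-len(b) % 4)

def osc_message(path, *args):
    """Hand-roll an OSC message supporting only the int ('i') and
    float ('f') argument types this project needs."""
    msg = _pad(path.encode() + b"\x00")
    tags = "," + "".join("i" if isinstance(a, int) else "f" for a in args)
    msg += _pad(tags.encode() + b"\x00")
    for a in args:
        msg += struct.pack(">i" if isinstance(a, int) else ">f", a)
    return msg

# A packet like this could be fired at sclang (default port 57120) with
# a plain UDP socket:
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(osc_message("/beatboard/bass", 43), ("127.0.0.1", 57120))
packet = osc_message("/beatboard/bass", 43)
print(len(packet))  # 24 bytes: 16 (padded path) + 4 (",i" tags) + 4 (int32)
```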

As an extra note, I looked through several libraries that essentially try to replace sclang entirely, though I found that for my purposes they were either too tedious, outdated, or simply unnecessary for this project. This project deserves special mention, though. It seems really cool.

Trouble In Paradise: scsynth Processing Issues!

As I was nearing my goals for this revision of the project, I hit a serious design flaw that I hadn’t anticipated. Long story short, the Pi Zero isn’t powerful enough to run a heavy load of synths and patterns on the SuperCollider server. It can run a simple drum pattern fine, but as soon as I’d throw in a bassline with several PatternProxies, the server would fall behind and the entire system would crash—even causing lag in my ssh terminal. Here’s a video of me demoing my SC3 script by manually sending it OSC messages from the Python console. The drums all work fine, but as soon as I layer on the bass, the whole thing lags and crashes. In the end, I came up with a simple, yet somewhat disappointing solution to this problem. In the current prototype, I run sclang and scsynth on my laptop, and let the Pi Zero communicate with them over OSC. In the code, this was a super easy fix, as I just had to target my OSC messages to my laptop’s IP instead of localhost. However, this definitely broke my goal of making a standalone device. In the future, I’m interested to see whether the more powerful Pis could do a better job running things, and whether optimizing my SC3 code could get things to run on the Pi Zero (PatternProxy seemed like the main culprit, while Pdefn was fine).

(alternative Google Drive Link, if embed doesn’t work)

Results!

Without further ado, here’s what Beatboard v0.1 is capable of!

(Alt Google Drive Link)

Reflection

With easily over 60 hours sunk into this project, I can definitely say that it taught me a ton. I’m a lot more comfortable with Raspberry Pis, I brushed up on my Python, and I got to dive even deeper into SuperCollider. Overall, making the first prototype of Beatboard has been loads of fun, and I can’t wait to keep going with it. In particular, I’m looking forward to adding more instruments and patterns, adding more complex parameters for the knobs (did I hear user-mappable?), and mapping seventh chords and secondary dominants to the extra button rows. For my next prototype, I’d also like to put everything in a nice box, and add some sort of lead instrument control. (Those musical fruits do sound like a lot of fun…)

One thought on “Creating the ‘Beatboard’”

  1. This is awesome! Very impressive use of libraries and integration of Python and SuperCollider. You were able to implement so many things into this breadboard. This looks like something that could be a product, and I wonder how it could be made into a cleaner version. Lots of cool functionality, thanks for demonstrating this to us!
