Your brain as an instrument

A brain and a vocal microphone

In 1949, electronic musician and technologist Raymond Scott had a vision:

“In the music of the future, the composer will sit alone on the concert stage and merely think his idealized conception of his music. His brainwaves will be picked up by mechanical equipment and channeled directly into the minds of his hearers, thus allowing no room for distortion of the original idea. Instead of recordings of actual music sounds, recordings will carry the brainwaves of the composer directly to the mind of the listener.”

Scott, who is now recognized as the first person to build a sequencer (a device for controlling synthesized sounds) and the forefather of electronic ambient music, was an innovator far ahead of his time. He understood the physics of sound better than most of his peers and saw similarities between sound waves and brainwaves. Since then, many musicians from around the world have begun exploring the possibilities of music creation using brainwaves — typically measured by electroencephalography (EEG) — to control a wide range of musical elements. From Alvin Lucier’s “Music For Solo Performer”, created by detecting the brain’s alpha waves during meditation, to the MiND Ensemble’s exploration of “How can we optimize interaction within a completely intangible instrument?”, there are many exciting possibilities for how brain-computer interfaces can help translate ideas, brain states and emotions into beautiful songs.
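The alpha-wave detection behind pieces like Lucier’s comes down to measuring how much of an EEG signal’s energy falls in the 8–12 Hz band. As a rough illustration (a toy sketch using a simulated signal, not a description of Lucier’s actual hardware), here is how band power can be estimated with a Fourier transform:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` within [low, high] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return power[mask].mean()

# Simulated 4-second EEG trace at 256 Hz: a 10 Hz alpha rhythm plus noise
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 12)   # alpha band (8-12 Hz)
beta = band_power(eeg, fs, 13, 30)   # beta band (13-30 Hz)
print(alpha > beta)  # the simulated alpha rhythm dominates here
```

In a performance setting, a threshold on this alpha power would decide when to trigger sound — in Lucier’s case, amplified brainwaves driving percussion instruments.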

73 years later, we have finally arrived on the cusp of the future that Scott envisioned. With brain-computer interface (BCI) funding reaching over $300 million in 2021 (three times the amount raised in 2019), this rapidly growing technology is poised to be revolutionary for many industries, including the creative industry. BCIs have already been instrumental in helping paralyzed individuals communicate with their thoughts, and research into the therapeutic benefits of music is still in its infancy. At the same time, these new technological solutions will help artists ideate, collaborate, create, promote and release their music much more seamlessly, giving power back to the artists and supporting the ever-expanding freelancer economy.

By now, you’ve probably heard the term ‘metaverse’ and how it will be the future of the internet. The metaverse allows for a more immersive experience of online virtual environments using augmented and virtual reality, as well as enhanced human-computer interaction (such as haptics, motion detection, and BCIs). A crucial part of building the metaverse is how we interact with it. For example, imagine you find yourself in a VR music studio and you have a melody in your head. Typically, if you didn’t have any musical training or background knowledge, it would be difficult to document the melody — and without documentation, it couldn’t be shared. But with BCIs, you can imagine the pitches in your head and have a program translate the imagined pitches into MIDI (Musical Instrument Digital Interface) notes. In fact, pitches aren’t the only way you can translate imagined music into tangible songs; over the past decade, researchers have investigated controlling and decoding other aspects of music using BCIs, including sound design and composing melodies.
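Once a decoder has produced pitch estimates, turning them into MIDI notes is the easy part. As a hedged sketch — the decoder itself is hypothetical here, and only the standard pitch-to-MIDI arithmetic is shown — the conversion is a simple lookup:

```python
# MIDI note numbers: middle C (C4) = 60, A4 = 69
NOTE_OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def pitch_to_midi(name: str, octave: int) -> int:
    """Convert a pitch name and octave (e.g. A, 4) to a MIDI note number."""
    return 12 * (octave + 1) + NOTE_OFFSETS[name]

# A decoded melody (standing in for the output of an imagined-pitch decoder)
decoded = [("C", 4), ("E", 4), ("G", 4), ("C", 5)]
midi_notes = [pitch_to_midi(n, o) for n, o in decoded]
print(midi_notes)  # [60, 64, 67, 72]
```

Because MIDI is the lingua franca of music software, notes in this form can be played back, edited, or shared by any digital audio workstation.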

If this sounds futuristic, then I’m here to tell you that it’s not actually that far away. When I began researching how music creators could use BCIs, I realized that the pieces of the puzzle were all there; putting them together effectively will be the key to proliferating BCI-facilitated music composition. That’s why I founded MiSynth, a music technology company that designs music creation interfaces and tools incorporating BCIs. Our goal is to build and design music creation software for the future, and when that future is realized, anyone who has ever wanted to create music will be able to — simply with their brain. We are working with the next generation of music creators to help them ideate, collaborate and engage better with their fans through the metaverse. Music is a universal language, and if used to their full potential, these technologies will not only help us in terms of creativity and entertainment, but also allow us to be more connected to one another and to express ourselves with our very own unique, sonic identity.

I believe that brainwave-generated music will become a new form of self-expression and eventually, maybe even its own genre. I call this genre ‘conscious music’ because it combines both the intentional and the unintentional aspects of creativity into a piece of music that is unique to the individual who created it. Just like your fingerprints, your brainwaves are unique and constantly evolving, so the music created from them will be different for each individual. Another interesting aspect is how different brain states could produce different forms of conscious music. For example, a meditative state might produce a certain sonic pattern, while a brain on psychedelics might produce a vastly different one. It will be up to the artist to explore what their own brain states sound like and how to create music from them.
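One way to make the idea of brain states shaping sound concrete is to route relative EEG band powers to musical parameters. The mapping below is purely illustrative — a toy rule I am inventing for the example, not an established method — but it shows how a calmer, alpha-dominant state could yield a slower, darker sound than an alert, beta-dominant one:

```python
def state_to_params(alpha_power: float, beta_power: float) -> dict:
    """Map relative alpha/beta EEG power to tempo and filter brightness."""
    total = alpha_power + beta_power
    calm = alpha_power / total  # 1.0 = fully "meditative"
    return {
        "tempo_bpm": round(60 + 80 * (1 - calm)),            # calmer -> slower
        "filter_cutoff_hz": round(400 + 4000 * (1 - calm)),  # calmer -> darker
    }

print(state_to_params(alpha_power=8.0, beta_power=2.0))  # relaxed state
print(state_to_params(alpha_power=2.0, beta_power=8.0))  # alert state
```

An artist exploring this space would tune such mappings by ear, deciding for themselves which sonic palette each brain state should evoke.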

Like our voices, our brains are natural instruments. The waves produced by the brain resemble sound waves, and the two have close physical connections. With the rapid development of BCIs, we will soon see this new form of musical expression emerge amongst artists and music creators, allowing more people to participate in the musical conversation and giving access to those who aren’t able to use traditional music creation tools, such as digital audio workstations. We are entering a new era, and the future of music creation is very exciting. This is why I believe it’s time for us to reimagine the way in which music can be created and open our minds to new possibilities.

Written by Senaida Ng and edited by Hazal Celik.

Senaida Ng is a Brooklyn-based sound architect, creative entrepreneur and futurist. Through her work as a conscious artist and thinker, she explores the intersection between art and emerging technologies.

Hazal Celik is a researcher in cognitive science, neurotech enthusiast and consultant.

Originally published by NeuroTechX Content Lab.

NeuroTechX is a non-profit whose mission is to build a strong global neurotechnology community by providing key resources and learning opportunities.
