June 18, 2020

Samuel Lopate ’20 created a virtual reality music program as his senior capstone project. Below is an interview with him about his creation. For more information, visit the following links on his channel:
Comprehensive demonstration of the platform:
https://youtu.be/hu_25tphxXE
Longer form song/improvisation:
https://youtu.be/9oR24XXfPQA
(More to come on this channel!)

Q: Thanks for agreeing to do this interview! What was your major and what is this project about?

A: I’m a computing in the arts major on the music track, and this was my senior capstone project. I worked with Scott Petersen and Konrad Kaczmarek, my two closest advisors, on this. I wanted to do this VR project because I hadn’t worked with VR before, but I knew that CCAM had all these resources, and I’d heard good things from some of my friends who worked there. It also just so happened that Scott had gotten into VR for his studio, so I thought it was a good time to try it. As for the conception of the project, I had this idea for a music creation environment inspired by some of the apps you can get on your phone, where you touch different pads and buttons to make simple loop-based music. I liked that these apps were really intuitive and easy to use: you didn’t need musical experience, just an intuition for how to use them, in order to make good music. You could mess around for about 15 minutes and end up with something pretty cool. I wanted to bring that concept into VR, where people who don’t have music experience or haven’t used VR before can be placed into this environment, start messing around with different objects in the 3D space, and end up making some cool music.

Q: What were some of the other motivating factors behind this project?

A: So, people in the department do all sorts of projects. A couple of my friends did live performances with live processing, using different audio technology tools to do live signal processing and turn that into an improvised performance. That’s one of the things I’ve done in the past, and that people have done for senior projects, but I knew I wanted to build some software, because I’ve been wanting to get my feet wet more with software over the past couple of years. I’d been impressed with VR and some of the things I’d seen. One of my friends had shown me Soundstage, which is another platform for music creation in VR, and I thought that was pretty cool. I liked it, and I wanted to do something like that with my own ideas about how to incorporate musical ideas into virtual reality.

Q: Can you explain in more detail what your project is? What can it do or what can be done with it?

A: Yeah, essentially the idea is to have a simple and easy interface for starting to make music. Basically, how it works is you’re placed into this virtual room where you can look around and walk around, and there are a few different instrument stations that you can walk to and interact with. They all take on different roles, similar to how you might label different synthesizers. You have what’s called a lead instrument that generally plays the melody; you have a bass; you have a sort of chordal instrument, which can be called a pad, that handles the chords; and then you have some sort of drum or percussion sequencer. Those are what I consider the four major categories of synthesis you could have. So, I built four different kinds of instruments, each covering one of those categories, and you interact with them in different ways. For example, with the melody instrument, or lead instrument, you’re presented with something like an inverted steel drum. It’s a semi-circular dome, and you can pick up these mallets, move them around, and hit the pads like you would a steel drum. Each pad plays a certain note, so you can essentially play melodies that way. You can hold the mallet on a pad and get a sustained note; you can hit it once and get a quick note; and you can play the pads in sequence.

To talk briefly about how it works: I’m using Unity for all the visual elements. All the models and all the interactions happen in Unity, using the standard Unity engine as well as SteamVR (which is what brings Unity into VR). But all the audio is happening on a different platform called Max MSP, which is pretty well known among people in audio and music technology and a common tool for all sorts of audio processing. The reason I’m using that over Unity’s built-in audio engine is that Unity’s audio engine isn’t sufficient for the types of things I wanted to be doing. Namely, it can’t do real-time sound processing, so it can’t respond dynamically to your interactions, but Max can. So, what I’m doing is sending messages from Unity to Max that tell Max when to start a note, when to stop a note, which note to play, and how loud to play it. All that information is sent via OSC messages, and then a separate program on the Max side interprets all of that and outputs the actual audio.
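
For a rough sense of what those messages might look like, here is a minimal Python sketch using the python-osc library. The real project sends its messages from Unity in C#, so the addresses, port number, and argument layout below are illustrative assumptions rather than the project’s actual message scheme.

    # Sketch of the kind of note messages Unity might send to Max over OSC.
    # Requires the python-osc package (pip install python-osc). Addresses,
    # port, and argument layout are assumptions for illustration only.
    from pythonosc.udp_client import SimpleUDPClient

    MAX_HOST = "127.0.0.1"   # Max MSP running on the same machine
    MAX_PORT = 7400          # hypothetical port a udpreceive object listens on

    client = SimpleUDPClient(MAX_HOST, MAX_PORT)

    def note_on(instrument: str, midi_note: int, velocity: float) -> None:
        """Tell Max to start a note: which instrument, which pitch, how loud."""
        client.send_message(f"/{instrument}/note_on", [midi_note, velocity])

    def note_off(instrument: str, midi_note: int) -> None:
        """Tell Max to stop the note so it no longer sustains."""
        client.send_message(f"/{instrument}/note_off", [midi_note])

    # A mallet touching a pad would send note_on; lifting it off sends note_off,
    # which is how a held pad becomes a sustained note and a quick hit a short one.
    note_on("lead", 60, 0.8)   # middle C on the lead instrument at 80% volume
    note_off("lead", 60)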

Going back to the instruments: you can play your melody and then hit a record button. There’s a big red record button next to the instrument, and when you hit it, a metronome gives you a two-bar (two-measure) count-in. After that, the button glows and you can start recording. Anything you play will be recorded, and everything is based on a four-bar loop. You can do longer loops, eight bars or sixteen bars, but I think four bars keeps it simple and manageable enough for people to get into it without being overwhelmed by overly long segments. It also makes it so that everything fits together nicely. So, you can play for four bars, and at the end of that, whatever you played is automatically saved and played back. You can put down the mallets, walk away, and that audio will continue to loop. Then you can walk over to another instrument, and that’s when you can start layering parts over one another, which is where it gets interesting.
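
As a rough worked example of the timing involved (the 4/4 meter and 120 BPM tempo here are assumptions, not fixed values in the program): one bar lasts 4 × 60/120 = 2 seconds, so the two-bar count-in takes 4 seconds and each four-bar loop wraps around every 8 seconds.

    # Back-of-the-envelope loop timing, assuming 4/4 time; the tempo is an
    # arbitrary example, since the real tempo is set from the global menu.
    def bar_seconds(bpm: float, beats_per_bar: int = 4) -> float:
        """Length of one bar in seconds."""
        return beats_per_bar * 60.0 / bpm

    bpm = 120
    count_in_s = 2 * bar_seconds(bpm)   # two-bar count-in -> 4.0 s at 120 BPM
    loop_len_s = 4 * bar_seconds(bpm)   # four-bar loop    -> 8.0 s at 120 BPM
    print(count_in_s, loop_len_s)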

Like I said, there are four instruments, and they each have different interactions and mechanisms. For the bass, it’s more like a fretboard: imagine a guitar, where you can slide up and down the frets. The chord instrument is more about hovering over the pads and clicking the trigger to activate a chord. And then the drum module is modeled after a drum rack. People who use Ableton would know what that is: you load samples into a sequencer, activate them, and turn them on and off. This works in a similar way. You can grab these floating cubes, which activates them, and then put them onto pedestals that loop them continuously. You can mix and match different loops, so you’ll have bass drum loops, snare drum loops, hi-hats, and all the different elements of a drum beat, and you can place them in different ways. When you do all of that, you have this symphony of different instruments and sounds. That’s basically how the program fundamentally works. There are more advanced features I can talk about as well.
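
One way to picture the drum module’s state is as a set of pedestal slots, each holding at most one loop cube, with every seated cube looping in sync with the four-bar cycle. The Python sketch below illustrates that idea; the names, 16-step patterns, and structure are assumptions for illustration, not the project’s actual code.

    # Illustrative sketch of the drum-module idea: pedestals hold loop cubes,
    # and every seated cube keeps looping. Names and structure are assumptions.
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class LoopCube:
        name: str             # e.g. "kick_basic", "hat_offbeat" (hypothetical)
        pattern: List[int]    # 16-step pattern: 1 = hit, 0 = rest

    @dataclass
    class DrumModule:
        pedestals: Dict[int, Optional[LoopCube]] = field(
            default_factory=lambda: {i: None for i in range(4)})

        def place(self, slot: int, cube: LoopCube) -> None:
            """Seat a cube on a pedestal so it starts looping."""
            self.pedestals[slot] = cube

        def remove(self, slot: int) -> None:
            """Take a cube off its pedestal so that loop stops."""
            self.pedestals[slot] = None

        def hits_at(self, step: int) -> List[str]:
            """Which seated loops fire on this step of the cycle."""
            return [c.name for c in self.pedestals.values()
                    if c and c.pattern[step % len(c.pattern)]]

    drums = DrumModule()
    drums.place(0, LoopCube("kick_basic", [1, 0, 0, 0] * 4))
    drums.place(1, LoopCube("hat_offbeat", [0, 0, 1, 0] * 4))
    print(drums.hits_at(0))   # ['kick_basic']
    print(drums.hits_at(2))   # ['hat_offbeat']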

Q: Let’s hear them.

A: Well, you have a global control menu where you can change the tempo, and you can also change the key. By default, it starts in C major, but you can change that to any other key. Then there’s another feature that I think is pretty cool: real-time control over your audio. Let’s say I go over to the chord instrument and trigger a chord; it’s going to sustain as long as I hold down the trigger. But a lot of the time in music you don’t want the same sound for the entire sustain: you may want to change it dynamically over time. I’m using the HTC Vive controller, which has a trigger and an XY touchpad. By moving your thumb around the touchpad, you get more data and more values, and with those you can change different parameters of the sound. For example, it might change the filtering: if I hold down the chord and move my thumb across the touchpad to the right, that might open up a filter so that more high frequencies come through. If you have a lower-frequency sound, moving your thumb opens it up to higher frequencies. If you go the other way, to the left, you start to bring down those higher frequencies, getting what’s called a low-pass filter as opposed to a high-pass filter.

So, you can change different parameters in real time, and that’s reflected in the sound and the recordings. Then there are different modes of interaction. You have the touchpad, where one axis can control the filtering and the other axis (Y) could control something else, maybe the pitch. You can also move the controller itself; all of this data is being sent to Max. If you rotate the controller, say you start with it roughly flat, then rotating it to the left may add some distortion, and rotating it to the right adds a different effect. All of these things can be changed dynamically in real time, which I feel is an advanced feature even though it may not seem like one. It can be a bit tricky to get the hang of: it takes some practice to play the notes you want while also shaping the real-time effect you’re going for. So, I think it’s a pretty cool feature to have, and once you get the hang of it, you can start experimenting with some pretty cool stuff.
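
To make that kind of parameter mapping concrete, here is a small Python sketch of the idea described above: the touchpad’s X position opening a filter and controller rotation adding an effect. The ranges, curves, and the choice of distortion for rotation are assumptions for illustration; the actual mappings live in Unity (input) and Max MSP (audio).

    # Illustrative mappings from controller data to sound parameters.
    # Ranges and curves are assumptions, not the project's actual values.
    def touchpad_x_to_cutoff(x: float, lo_hz: float = 200.0, hi_hz: float = 8000.0) -> float:
        """Map touchpad X in [-1, 1] to a filter cutoff in Hz, exponentially,
        so sliding the thumb right opens the filter toward higher frequencies."""
        t = (x + 1.0) / 2.0                   # normalize to [0, 1]
        return lo_hz * (hi_hz / lo_hz) ** t   # exponential sweep sounds even to the ear

    def roll_to_effect_amount(roll_degrees: float) -> float:
        """Map controller roll away from flat (0 degrees) to a 0..1 effect amount,
        e.g. distortion; simplified here to a single direction of rotation."""
        return min(abs(roll_degrees) / 90.0, 1.0)

    print(round(touchpad_x_to_cutoff(-1.0)))   # 200  -> filter mostly closed
    print(round(touchpad_x_to_cutoff(1.0)))    # 8000 -> filter wide open
    print(roll_to_effect_amount(45.0))         # 0.5  -> halfway to full effect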

Q: Do you have any future plans for this program? Do you think that this will go any further?

A: I don’t think it’s done; I’m continuing to develop it, because there are a number of features I didn’t get to that I want to implement while I still have access to the equipment. Beyond that, I do hope to keep working on it. I don’t have any VR equipment myself, so I’d definitely need to get some in the future. It’s something I want to keep developing, maybe into an app or a game, but it’s not at that point yet, and it would probably require bringing on other team members and developers to get it to that stage. But if not this project itself, I definitely still want to do something with VR and Max MSP and do more complicated things with audio and VR, which I think is a very interesting paradigm and something I’ll continue to work on.