
On the Edge, In His Element

Richard Devine on The Art of Manipulating Machines in a Modular World

Richard Devine in Studio A of his modular synth-based production facility. PHOTO: Austin Donohue

Richard Devine is in his element when he’s on the edge. For more than 20 years, the musician, producer and sound designer has stretched the ever-elastic boundaries of electronic music, releasing eight albums on Warp, Planet Mu, and Schematic Records, remixing the likes of Aphex Twin and Mike Patton, and coding FFTs in Max/MSP and SuperCollider in his downtime.

He contributed to the Doom 4 soundtrack, programmed sounds for Trent Reznor, and developed patches for Native Instruments, Moog and Korg. But Devine’s imprint goes far beyond music: He’s created soundscapes for apps, gaming platforms, and user interfaces for electronic devices, and he’s done sound design work for Nike, Coca-Cola, Lexus, HBO, Nestlé and McDonald’s. When he’s not in his Atlanta studio, Devine is often on the West Coast developing immersive sound environments for tech giants like Google, Apple, Sony, and Microsoft.

That’s by day. At night, he reconnects with his own music, which lately focuses on the unpredictability and immediacy of modular synthesis, machine learning and generative composition. It’s a literal hands-on approach that defined his most recent album, Sort\Lave (Timesig), a big, warm, mutating collage of glitchy rhythms and richly textured soundscapes, crafted on a mountain of carefully curated Eurorack modules.

The last time we talked, you were finishing up Sort\Lave. You said you wanted to get back to analog and you were sculpting things by hand, never touching a mouse.
I did. I mixed Sort\Lave entirely using outboard analog compressors, EQs, pretty much everything that I would normally use as plug-ins; I just decided, “I’m going to go ahead and do everything with hardware just to see what happens.” It would have been easier in the computer because you have the ability to use instant recall, obviously. But for this record I wanted to do everything with my hands and my ears and not really be thinking about music in terms of interacting with a timeline and a screen with the mouse and a computer keyboard. In my humble opinion, this is my best recording to date because it’s completely mixed with analog hardware.

This record didn’t use any samplers at all. Every hi-hat, percussion sound or synth sound was generated with multiple modular systems chained together, sharing the same clock and tempo. I have gone full circle, as I originally started making music with analog synthesizers in the early ’90s. 

When did you decide you wanted to take a new approach?
About two years ago I decided to take a few hours every single night after putting the kids to bed to focus on creating patches and music. I’d catalog and record, set up my systems here at the studio, and write a piece of music. I was trying to develop a system that would allow me to write very quickly, or have things set up with some spontaneity, where something could accidentally happen that sparks an entire composition; that way I could drive the rhythmic sequences, the melodies, some of the sound effects, some of the textures and timbres very quickly.

For about eight months I was figuring out which modules I was going to use for all of the rhythmic sequencing. I wanted to be able to program the data of the rhythmic sequences, and then take that data and repurpose it and play around with it live. Then I could run it through probability-based filters where I could either remove bits of that data or add other bits of data in real time. With all of these stored sequences, I could then alter them and mutate them, or do other strange things to generate new musical ideas.
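To make the idea concrete, here is a minimal Python sketch of that kind of probability-based filtering, where a stored trigger sequence is thinned out or filled in on every pass. The function name, the probabilities, and the 0/1 step representation are illustrative assumptions, not Devine’s actual patch.

```python
import random

def mutate_sequence(steps, p_drop=0.2, p_add=0.1, rng=random):
    """Probabilistically remove or add bits of a stored trigger sequence.

    steps:  list of 0/1 gate values, one per 16th note.
    p_drop: chance an existing trigger is removed on this pass.
    p_add:  chance a new trigger appears on an empty step.
    """
    out = []
    for gate in steps:
        if gate and rng.random() < p_drop:
            out.append(0)   # remove a bit of the original data
        elif not gate and rng.random() < p_add:
            out.append(1)   # add a new bit of data in real time
        else:
            out.append(gate)
    return out

# A stored kick pattern, mutated slightly on every pass of the loop:
pattern = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
for bar in range(4):
    pattern = mutate_sequence(pattern, p_drop=0.15, p_add=0.1)
    print(pattern)
```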

Are you applying these processes to other projects?
For my new album that I’m working on now, I’m going back into the computer again. I want to use the computer to explore machine-learning-based applications. I’m really interested in this idea of developing a machine-learning algorithm that learns the things you like to do, like sequencing in certain time signatures or at certain tempos, and then gives you different outcomes.

I was inspired by the work of David Cope, an American author, composer, scientist and former professor of music at the University of California, Santa Cruz. His primary area of research involves artificial intelligence and music; he writes programs and algorithms that can analyze existing music and create new compositions in the style of the original input music. I was able to get hold of his books The Algorithmic Composer and Experiments in Musical Intelligence; in them, he discusses basic principles of analysis, pattern matching, object orientation and natural language processing. Experiments in Musical Intelligence includes a CD-ROM that contains the documentation for the program SARA (Simple Analytic Recombinant Algorithm), which produces new compositions in the style of the music in its database.

David’s work has been around since the early ’90s, so this idea is nothing new, but it’s an interesting area to explore; not many people have applied this approach to modern music. I think it will be interesting to create an algorithm that you can reward when it outputs something you like, and that stores all of these data streams of music in a database. Think of these music data streams as sounds, gestures, and sequences or arrangements and patterns. Then you can pull bits and pieces from that database to create entirely new compositions.
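As a toy illustration of that reward loop (a hedged sketch, not Cope’s SARA, which performs far deeper analysis), here is a Python fragment in which musical fragments live in a database with reward scores: liked outputs reinforce the fragments they drew on, and new material is recombined by reward-weighted sampling. All names and note data are invented for the example.

```python
import random

# Toy database of musical fragments (MIDI note lists) with reward scores.
database = [
    {"notes": [60, 62, 64, 65], "reward": 1.0},
    {"notes": [67, 65, 64, 62], "reward": 1.0},
    {"notes": [60, 64, 67, 72], "reward": 1.0},
]

def compose(n_fragments=4):
    """Pull bits and pieces from the database, favoring rewarded ones."""
    weights = [f["reward"] for f in database]
    picks = random.choices(database, weights=weights, k=n_fragments)
    melody = [note for f in picks for note in f["notes"]]
    return melody, picks

def reward(picks, amount=0.5):
    """Reinforce the fragments used in an output the listener liked."""
    for f in picks:
        f["reward"] += amount

melody, used = compose()
print(melody)
reward(used)  # this material will now surface more often
```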

Wow, that’s pretty wild. We haven’t even dug into your rig.
Well, for this last album I was mainly focusing on using analog outboard gear for the mixing and mastering. A funny thing happened with a few pieces of hardware, like the Manley Vari-Mu and Massive Passive: I actually opened up the Universal Audio plug-in versions, re-created the preset settings, and then printed out recall sheets. Those would be my presets for the real hardware. I did that with the API 2500 Bus Compressor and the Bax EQ, and with the Pultec EQs and my API 550b EQ, so I could recall all of my settings for each song. It made things really fun. Now there is a much better solution called the PatchRat app, a studio management app for iPad that can map out your entire signal chain, plus it has a huge database of recall interfaces for almost every manufacturer.

Devine at work, from floor to ceiling. ‘I have always been a fan of using randomness, chaos, and probability in my work.’ PHOTO: Merlin Ettore

Luckily, all the gear that I have here in hardware, I also have as plug-ins. I could reference back and forth, but also recall things very quickly. I wanted a record that had infinite detail but was also engulfed in this beautiful, warm analog, giant, thick field of sound.

I realized that you just get so much more depth and weight using the analog stuff than you do with plug-ins. I use this box called the HG-2 by Black Box that uses custom input transformers to feed two paths: The main signal path travels through a 6U8A pentode tube stage that drives into the triode stage that follows, resulting in everything from subtle harmonics to full-on saturation. I ended up using it on every single track, and it just really brings things to life.

How did taking this approach inform your mixing process?
I think that by going simpler, I made the mixing process a lot easier. Usually when I would do stuff in the computer I’d have like 64 tracks and just crazy amounts of stuff happening. For this record, I really stripped things down. On average there are 16 to 24 tracks, and I was using Dangerous Music 2-Bus+ and 2-Bus LT analog summing mixers; I wasn’t even using a console. The only role the computer played was as the final capture device. To capture material on the way in, I used two nw2s::o16 Eurorack modules; the ::o16 is a 16-channel, balanced line-driver interface with 6dB of gain reduction in just a 10hp module. I would then go from the ::o16s into two Universal Audio Apollo 16s. From there I’d take all of the stems and mix them back through the Dangerous Music system.

Would you say you’re inviting chaos into your work?
Yeah, I have always been a fan of using randomness, chaos, and probability in my work. I would love to explore using the computer for analyzing musical data with machine learning and AI-driven algorithms, then somehow integrating that with my modular systems, bringing the two worlds together to see what happens; to see if I can get that instant spontaneity, the physical interaction you get with the modular. That’s the one thing that the computer has difficulty doing; you just don’t have that immediate feedback that you get when you’re working with a piece of hardware like a real physical instrument.

With a computer you can get similar outcomes by playing samples and things, but the modular gives you so much more. You’re making music with just electricity and control voltages that fluctuate and move around between these cables. It’s such a fascinating way to make music with one of the rawest elements. With a modular system it will never, ever play the same way twice, no matter how many times you perform the patch. There are so many variables that can shift the patch in any direction, like the temperature of the room, drifting of analog oscillators, unstable circuits. It’s like working with a living organism that is constantly moving and mutating. There are thousands of interactions in the little environment of nested cables that you’ve created. One knob twist can shift the whole thing in a completely different direction, and I love that. You just ride this super-fine line of losing it all.

Richard Devine’s synth corner. PHOTO: Richard Devine

 

You’ve been exploring modulars since you were a kid; how did those early experiments inform the way you work now?
I’ve been using these systems since high school. The first modular system I had was the ARP 2600. My 2600 was a Version 1.0 from 1971 with the 3604P keyboard; I bought it from a local pawn shop. I was buying a lot of the early portable modular systems that were made in the ’60s and ’70s, like the EML 101 and the EMS Synthi from England. On my early records I used a lot of the esoteric smaller, portable, semimodular systems available at the time. These are the machines I learned to patch and make music on in the early ’90s. 

I knew that format going into the modern-day Eurorack modular synthesizers, which have become hugely popular over the past five to seven years. I was right at the beginning with that. Eight or nine years ago I started buying and building up two starter systems by Dieter Doepfer, a German Eurorack manufacturer. 

When you’re working with modular synths, unpredictability is the name of the game. Does AI bring a different sort of unpredictability?
What’s interesting about modular is, you’ll come up with really cool stuff, but then you lose it. Even if you take patch notes and document everything perfectly, you’d still never get it back exactly right. There are so many variables in putting together large, complex patches, like when I was writing the album.

I wanted a machine-learning algorithm that would be able to analyze and record some of these things as they happened, so I could recall and reuse that data and put it back into the modular systems. Otherwise, it would be gone forever as soon as I pulled the patch cables. I now have modules that can play things back exactly, millisecond by millisecond, if I want to. You could do it with harmonies, too.
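A rough Python sketch of that record-and-replay idea: events are timestamped as they are captured, then played back with their original timing (time.sleep resolution is roughly at the millisecond level he mentions). The send_cv hook is hypothetical; actually driving a modular would need a DC-coupled audio interface or dedicated CV hardware.

```python
import time

events = []    # (seconds_since_first_event, channel, value)
_start = None

def record(channel, value):
    """Timestamp a CV/gate event relative to the first one captured."""
    global _start
    now = time.monotonic()
    if _start is None:
        _start = now
    events.append((now - _start, channel, value))

def replay(send_cv):
    """Play the captured events back with their original timing.

    send_cv(channel, value) is a stand-in for whatever actually
    drives the modular system.
    """
    t0 = time.monotonic()
    for t, channel, value in events:
        wait = t - (time.monotonic() - t0)
        if wait > 0:
            time.sleep(wait)
        send_cv(channel, value)

# e.g. call record(...) while performing, then later: replay(print)
```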

What do you want to manipulate through machine learning?
I want to develop an engine that can analyze every component of what I create spontaneously here with my system, and then take that data, repurpose it to create more music, and then improve that data, and even mutate it to generate other springboards of ideas. Then it’d just keep building from there. 
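One way to picture mutating that data into springboards, in a small hedged Python sketch: random mutation operators (transpose, reverse a segment, swap two notes) are applied to a seed sequence, and each generation feeds the next. The operators here are illustrative guesses, not the engine Devine describes.

```python
import random

def mutate(notes, rng=random):
    """Apply one random mutation to a note list as a new springboard."""
    notes = list(notes)
    op = rng.choice(["transpose", "reverse", "swap"])
    if op == "transpose":
        shift = rng.choice([-12, -7, -5, 5, 7, 12])
        notes = [n + shift for n in notes]
    elif op == "reverse":
        i = rng.randrange(len(notes))
        j = rng.randrange(i, len(notes))
        notes[i:j + 1] = reversed(notes[i:j + 1])
    else:  # swap two notes
        i, j = rng.randrange(len(notes)), rng.randrange(len(notes))
        notes[i], notes[j] = notes[j], notes[i]
    return notes

# Keep building from a seed phrase, generation after generation:
seed = [60, 62, 64, 67, 69, 72]
generations = [seed]
for _ in range(5):
    generations.append(mutate(generations[-1]))
for g in generations:
    print(g)
```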

It took me almost a year to develop the system to where I can almost compose in real time, set up a couple of variables, set up a couple of modules, start patching, and things start happening. Then I’m composing, and I’m writing in real time and performing; I’m creating a track on the fly. 

I’ve been experimenting with Max/MSP Version 8. During my visits to Google headquarters, I was introduced to Doug Eck, a principal scientist at Google working on Magenta, which is a research project exploring the role of machine learning in the process of creating art and music. Doug’s focus is developing new deep-learning and reinforcement-learning algorithms for generating songs, images, drawings and other materials. 

Doug has inspired me to think about how I could use AI to help me to take ideas, reincarnate them, and then feed them back into my system. Then tripping the algorithms, in a way, just seeing what happens if I skew things or feed it all this nonsense, and then mix it up with all these things that I like. What kind of music would it create; what would I create, with all my favorite stuff jumbled up in this soup of craziness?

Do you find that you’re shifting between macro and micro perspectives a lot? Do you have to let go of certain preconceived notions?
Yeah, exactly. That’s the whole reason I got into creating music on the modular again—the idea that you might be able to get what you have in your head 100 percent, and you might not get it at all. Like you said, you can work it in micro levels, these infinite, crazy little microscopic worlds of sound. Usually with the computer I can get 90 percent of what I have in my head, but that’s kind of boring. There are no variables that will shift you off course. There’s no random, spontaneous thing that’ll explode right in front of you and make you go, “Oh, I didn’t even think to do that.”

For all of this complexity, that concept of letting go is a lot like live recording, the days before multitrack, that feeling of, “This is what I have, and I’m going to get some happy accidents and go with it.”
That’s totally it. We got so far away from that with digital recording. Sure, with bands, you’re trying to get the perfect take, but you don’t get that random spontaneity that you do with the modular system. I just don’t know any other instrument in the world—and I have a lot of strange ones that I’ve collected all over the world—that has that feedback. You’re basically creating this environment. You decide the rules of how that environment is going to react, and then you’re steering this ship of chaos and it becomes alive. It’s like an electrical organism that’s moving in the wires for this one moment in time. You have to record this crazy, alien creature that’s living in these cables before it’s gone. 

How do you apply these processes to sound design?
I use the modulars for creating layers very quickly. They generate lots of great textures that help me with sound design, especially with gestures, like low-impact sounds. My sound effects get used in games and trailers, so a lot of risers, tension-builders, things that create unsettling feelings; the modular is just perfect for that because it can create such alien, strange sounds and timbres. You can work with custom tunings and scales. Everything is just completely organic, raw, and hands-on.

I use a lot of plug-ins and other digital effects units. I’m not a purist in any way when it comes to my sound design. I use any sort of microphone: Ambisonic, binaural, stereo, mono, it doesn’t matter. Or any kind of mic preamp, recorder, or instrument. There is no right or wrong way to go about it. I will use a tanker truck for my percussion sounds or the lid of a garbage can for a snare drum.

Devine regularly performs live, putting into practice the spontaneous workflow he develops in the studio. PHOTO: Merlin Ettore

How much performing are you doing right now? Are you incorporating machine learning into shows?
Not yet. I did a modular performance at Moogfest and had a string of shows in Europe. Then I’m going to do a short run in China. But the newer machine learning-based stuff I won’t be getting into until later in the year. Right now I’m recording and developing the system that’ll analyze what I do. Then it’ll be figuring out how I’ll synchronize everything to take all that data and feed it back into my system here.

I’m also re-tweaking my modular system. I’m changing out a lot of things for this new record. It’s all an experiment, really. I don’t know if it’ll be successful or if it’ll be a complete failure, but I have to try it. Kind of like I did with my last record, I had to break away from doing what I had been doing for so long. I’m going to try something different this time and see where it takes me.


Sarah Jones is a writer based in the San Francisco Bay Area. She’s a regular contributor to Mix, Live Design, and grammy.com. 

 
