

You Say You Want a Convolution

Reverb used to be simple: a room with a mic and a speaker. You had one “preset,” with two “parameters”: mic and speaker placement. Over time, variations appeared like EQing the signal going in or coming out, or delaying the signal to the reverb.

The limits of acoustic reverb led to plate and spring reverbs, but digital reverb provided the big breakthrough. Computing power allowed implementing sophisticated algorithms that defined what happened to a signal as it bounced around rooms, decayed over time, and lost high frequencies through damping. Admittedly, early digital reverbs weren’t particularly sweet-sounding. But for studios without reverb rooms, digital reverb was an improvement—especially since by its very nature, reverb was often low in the mix anyway.

And now, convolution-based reverb is commonplace. Although the technology has been around for a while, implementations continue to improve (including uses beyond reverb). Most DAWs come bundled with convolution reverb.

Convolution reverb is to synthesized reverb as digital sampling is to digital synthesis. Convolution requires two elements: a sample (the impulse response) of the acoustic space you want to model, along with the audio you want to convolve. Convolving multiplies and sums the two signals’ samples: each output sample is the sum of the input’s samples, each weighted by a shifted copy of the impulse response. Because this is computationally intensive, convolution-based processing used to be far from a real-time process. For example, E-Mu’s early samplers could do convolution, but it was a “push enter and have lunch” process. However, even those time constraints are falling due to clever programming techniques and ever-faster processing. So what’s the final frontier for convolution reverb?
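The multiply-and-sum idea can be sketched in a few lines of NumPy. This is a minimal, illustrative direct-form convolution (the function name and toy signals are my own, not from any particular product); its O(N×M) cost is exactly why early hardware made you wait:

```python
import numpy as np

def convolve(signal, impulse):
    """Direct-form convolution: every input sample adds a scaled,
    delayed copy of the impulse response to the output."""
    out = np.zeros(len(signal) + len(impulse) - 1)
    for n, s in enumerate(signal):
        out[n:n + len(impulse)] += s * impulse
    return out

dry = np.array([1.0, 0.0, 0.0, 0.0])  # a single click
ir = np.array([1.0, 0.5, 0.25])       # a toy "room" with a simple decay
wet = convolve(dry, ir)               # the click takes on the room's decay
```

Modern real-time convolution reverbs don’t loop like this, of course; they typically use FFT-based fast convolution with overlap-add partitioning, which is one of the “clever programming techniques” that made the process real-time.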

As with all things digital, “garbage in equals garbage out,” and the quality of convolution reverb depends on the quality of the impulse. There are several ways to capture an impulse, but the most common is to “excite” a space with an impulsive sound such as a starter pistol shot, a balloon pop, or a loud click played through a speaker system.

Another option, the sine sweep method, excites an acoustic space by sweeping a sine wave across the frequency spectrum. While this gives a very accurate model of the space, the resulting recording requires deconvolution to convert the sweep into an impulse.
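Deconvolution sounds exotic, but in the frequency domain it reduces to a division: dividing the spectrum of the recorded room response by the spectrum of the test signal leaves the room’s impulse response. Here’s a hedged sketch of that idea (the function name and the tiny regularization term are my own assumptions, not a production algorithm, which would also compensate for the sweep’s spectral tilt):

```python
import numpy as np

def deconvolve(recorded, excitation, eps=1e-12):
    """Recover an impulse response by spectral division:
    recorded = excitation * ir  (convolution), so IR = R / E in frequency."""
    n = len(recorded) + len(excitation) - 1  # pad to avoid circular wrap
    R = np.fft.rfft(recorded, n)
    E = np.fft.rfft(excitation, n)
    return np.fft.irfft(R / (E + eps), n)    # eps guards near-zero bins
```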

Yet another method is to approximate a space’s characteristics with noise that’s “shaped” to create a decay. Pink noise is well-suited for this application, and you can use high-cut filtering to reduce highs over time to simulate damping. Using different noise samples in the left and right channels gives a stereo effect, and this method makes it very easy to create exceptionally long reverbs—just let the sample go for as long as you want. It’s also easy to generate “backwards” reverb impulses by using DSP to “reverse” your room. Although impulses created through noise may not have as much “character” as physical rooms, their consistency has its own merits.
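The shaped-noise recipe above can be sketched directly: generate noise, apply an exponential decay envelope, then run a lowpass whose strength increases over time so highs die away faster than lows. This is a minimal illustration with made-up parameter choices (white rather than true pink noise, and a crude one-pole filter), not any vendor’s algorithm:

```python
import numpy as np

def noise_impulse(seconds=2.0, sr=44100, rt60=1.5, seed=None):
    """Synthesize a reverb impulse response from decaying, darkening noise."""
    rng = np.random.default_rng(seed)
    n = int(seconds * sr)
    noise = rng.standard_normal(n)
    t = np.arange(n) / sr
    env = 10.0 ** (-3.0 * t / rt60)     # reaches -60 dB at t = rt60
    ir = noise * env
    # Crude "damping": one-pole lowpass whose coefficient rises over time,
    # so the tail loses high frequencies faster than the onset does.
    a = np.linspace(0.0, 0.6, n)
    out = np.empty(n)
    prev = 0.0
    for i in range(n):
        prev = (1.0 - a[i]) * ir[i] + a[i] * prev
        out[i] = prev
    return out

left = noise_impulse(seed=1)
right = noise_impulse(seed=2)  # different seeds give a decorrelated stereo pair
```

Reversing the array (`left[::-1]`) gives the “backwards” reverb impulse mentioned above, and making `seconds` larger is all it takes to get an exceptionally long tail.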

Another advantage of convolution reverbs is that you can re-create the response of a hardware reverb unit by sending an impulse through the reverb and recording its output (although the irony of using state-of-the-art convolution technology to capture the mojo of a funky 12-bit digital reverb isn’t lost on me!).

Looking into the future, convolution can do a whole lot more than reverb. Applying something like a guitar chord as an impulse to a tambourine loop can give an angelic, melodic quality to the percussion. It’s also possible to grab an impulse of, for example, an acoustic guitar body and add it to an electric guitar recording. But perhaps one of the most interesting aspects of convolution is its unpredictability. Of course, you know that convolving a room impulse with a signal will add reverb, but convolving synth sounds, loops, and electric instruments can produce anything from distorted train wrecks to interesting sounds you’ve never heard before.

We can also expect more use of algorithms that can tweak the sound, as happens with conventional synthetic reverb. Early convolution reverbs didn’t give you a whole lot of parameters to play with, but companies like Audio Ease and Waves kept adding more and more versatility. Just as samplers use synthesizer processing to alter a fundamentally “freeze-dried” sound, convolution reverbs can alter pre-delay, decay time, EQ, and more.

With the downsizing of today’s studios, it’s getting harder to find good acoustic spaces for recording. But with convolution, a quality room impulse can simulate recording in a great-sounding room very convincingly—and based on digital’s history, we can expect convolution-based processes to become better, faster, and less expensive in the years ahead.

Author/musician Craig Anderton has given lectures on technology and the arts in 38 states, 10 countries, and 3 languages. Check out some of his music at