
Dare to Compare

Back in January 2006, I routed one microphone to six preamps for an unconventional series of tests dubbed “Not a Mic Preamp Shootout.” I was willing to accept one compromise: The unique relationship between the mic and the preamp front end would be neutralized, but this setup allowed any one preamp to be subtracted from another to hear the remaining topological differences.

Although I spent the bulk of the time recording drums — which are good for evaluating overload characteristics — I planned future tests to focus on voice, which is better for discerning amplifier and capsule nuances.

Earlier this year, I was asked to compare analog hardware with software signal processing. This time, local engineers were invited to contribute raw and processed samples. These sessions were called “Dare 2 Compare.” I not only analyzed their files, but attempted to re-create their tests using generic and hardware-specific plug-ins. Digidesign Pro Tools, Soundscape and Adobe Audition were my “analysis tools.”

As the initial samples came in, I encountered an analysis problem: Everyone dismissed the software because they felt it wasn’t even close to delivering the sound provided by the hardware. This was not initially the fault of the software, but was rather due to the hardware’s inconsistencies.

A perfect example is the Neve 1064/1066/1073 preamp/EQ modules. There’s no way that a pair of vintage or retro-modern analog hardware equalizers can be made to agree simply by putting their knobs in the same place. So don’t expect “identical knob settings” on software to match the hardware.

For example, consider someone applying analog EQ to a stereo track. The first step is to approximate the EQ artistically. But feed pink noise to both EQ units, reverse the polarity of one and sum, and you'll hear that the two equalizers are nowhere near matched, which is an opportunity to tweak the EQ bands on each channel for the best null.
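The null procedure above is easy to express in a few lines of code. Here's a minimal Python sketch, purely illustrative and not anything from the actual sessions: invert one signal (subtract it from the other) and report the residual level in dB relative to the original. The more negative the number, the deeper the null and the better the match.

```python
import math

def rms(x):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def null_depth_db(a, b):
    """Subtract b from a (i.e., sum with polarity reversed) and report
    the residual level in dB relative to a. More negative = better null;
    a perfect cancel returns -inf."""
    residual_rms = rms([sa - sb for sa, sb in zip(a, b)])
    if residual_rms == 0.0:
        return float("-inf")
    return 20 * math.log10(residual_rms / rms(a))
```

With two identical signals the residual vanishes entirely; a 0.9x level mismatch between otherwise identical signals leaves a residual 20 dB below the original, which is why level trimming alone gets you most of the way to a convincing null.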

Few engineers take this extra step, and that quite literally adds dimension to the stereo track: ballparked (unmatched) dual-mono EQ settings introduce phase shift that makes the stereo image "wider." This is the inverse of a common complaint about digital, that it doesn't exhibit the dimensionality of analog, even when comparing converters.

I compared some well-known hardware (API, dbx, Neve, Universal Audio/UREI and Alan Smart) and software (Digidesign Smack!, Bomb Factory 1176, and URS’ Neve and API plugs), along with some generic Soundscape and Pro Tools signal processors.


Sonic comparisons always start out the same way: Establish a repeatable “procedure” that ensures a “level” playing field. Getting the procedure right will consume a considerable amount of time, sometimes more than the actual tests. The first step is to optimize the gain structure for headroom and noise, followed by matching the signal levels of all the gear being evaluated. The latter can be as simple as routing an oscillator to the devices under test, measuring the outputs with a precision meter and trimming the level to achieve a match that is hopefully within 0.1 dB and does not exceed 0.25 dB.
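The level-matching arithmetic from that procedure can be sketched as follows. This is a hypothetical illustration of the bookkeeping, assuming you've already measured the RMS output of the reference and of the device under test with a precision meter: compute the mismatch in dB, the linear trim gain that would correct it, and whether it falls inside the 0.1 dB target and the 0.25 dB ceiling mentioned above.

```python
import math

def level_difference_db(ref_rms, dut_rms):
    """Mismatch in dB between the device under test and the reference."""
    return 20 * math.log10(dut_rms / ref_rms)

def trim_gain(ref_rms, dut_rms):
    """Linear gain to apply to the DUT so its level matches the reference."""
    return ref_rms / dut_rms

def is_matched(ref_rms, dut_rms, target_db=0.1, limit_db=0.25):
    """Returns (within the 0.1 dB goal, within the 0.25 dB ceiling)."""
    diff = abs(level_difference_db(ref_rms, dut_rms))
    return diff <= target_db, diff <= limit_db
```

A 0.05 dB mismatch passes both tests; a 0.3 dB mismatch fails both and tells you how much trim to dial in.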

Another way to confirm the level match is to subtract one device from another by reversing the polarity in one of the signal paths and then “summing,” which is, in reality, subtraction. A full 180-degree shift constitutes polarity inversion, so identical signals cancel completely and whatever survives the null is the difference between the two paths; here, we’re using our ears as distortion analyzers. There will be subtle phase issues caused by the number of gain stages and the coupling capacitors between them. Each capacitor in the chain can cause small amounts of phase shift. Discrete and IC op amp circuits — from API to Avalon, Crane Song to Grace and even Mackie — have less phase shift than old-school discrete circuits like models from Great River, Neve and Telefunken.

When one group is subtracted from another, what often remains after the levels have been trimmed for the best null are spectral extremes — low bass and high treble that are not exactly in phase, along with harmonics that are in addition to the original signal. Now add EQ, dynamics processing and “digital” to the mix, and all of a sudden you’ve got to keep track of intended phase shift (EQ does this), attack and release parameters, and latency.
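One way to see where that residual energy sits, at the spectral extremes or up at the harmonics, is to probe the null residue at specific frequencies. Here's a small sketch using the Goertzel algorithm (a standard single-bin DFT technique; this example is my addition, not part of the original tests) to measure the magnitude at one frequency of interest:

```python
import math

def goertzel_mag(x, freq, sr):
    """Magnitude of a single frequency bin via the Goertzel algorithm --
    handy for asking where a null-test residual's energy lives (e.g.,
    probe 40 Hz and 15 kHz for the out-of-phase spectral extremes, or
    harmonic multiples of the source for added distortion products)."""
    w = 2 * math.pi * freq / sr
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s_prev, s_prev2 = sample + coeff * s_prev - s_prev2, s_prev
    power = s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2
    return math.sqrt(max(power, 0.0))
```

Feed it the residual instead of the raw program and a strong reading at a harmonic of the source is direct evidence of the kind of saturation artifacts described below.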


In addition to level matching, it’s also necessary to confirm time-alignment. Playing a reference file from the workstation through a D/A converter and then looping back through the A/D converter for recapture will have a few samples of delay as determined mostly by the sample rate. All digital processes — the obvious, such as EQ or dynamics, and less obvious, such as routing — create additional delays. While all workstations should include delay compensation, individual delays were inserted into each channel to allow null confirmation.

Latency is the digital equivalent of phase shift, and even if your software is supposed to keep track of such things, you still need confirmation. For the above example, an obvious delay must be applied to sync the reference file relative to the captured file. I also did this when comparing analog-processed tracks to their digital counterparts. The null point was very obvious in these tests (44.1 kHz/24-bit); higher sample rates would provide smaller sample-delay steps.
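Finding that delay by hand means nudging a track one sample at a time and listening for the null. The same search can be sketched as a brute-force cross-correlation, an illustrative example rather than how any particular workstation does it: slide the captured file against the reference and keep the lag that correlates best.

```python
def best_offset(ref, captured, max_lag=64):
    """Brute-force cross-correlation: slide `captured` against `ref` by
    0..max_lag samples and return the lag with the highest correlation --
    the delay to compensate before attempting a null."""
    best_lag, best_score = 0, float("-inf")
    n = len(ref)
    for lag in range(max_lag + 1):
        score = sum(ref[i] * captured[i + lag]
                    for i in range(n - max_lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

For a loopback capture delayed by, say, seven samples, the search lands on a lag of seven; delay the reference by that amount and the null test can proceed.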


One of the first audio samples was a stereo drum submix EQ’d through a pair of Neve 1064 modules. I imported the raw and EQ’d tracks into Soundscape and, using its generic stereo EQ plug-in, noticed right away that the left and right channel’s null points were not the same for what are now obvious reasons. A digital stereo EQ plug-in is “perfect” in terms of channel matching, but in this case, separate left and right equalizers were opened so that each channel null could be optimized. Interestingly, it took two Soundscape EQ instances per channel to match the Neve 1064. (See the graphic on page 98.)

At the moment, I can only speculate as to why it took two EQ instances to equal the 1064. Here are my two theories: First, digital equalizers are based on “ideal” components, while analog components operate most decidedly in the real world. Second, doubling the sample rate might have allowed more resolution. Either way, once a good null was achieved, the actual sonic comparison was pretty impressive. When listening to only the null, just a few of the snare hits popped through as they saturated the Neve’s amps.


With dynamics processors, the intended signal processing can be fairly easily approximated. However, the matching process requires a much deeper level of patience as there are so many different parameters — ratio, threshold, knee, attack and release — all of which are interactive. Here, again, unless an analog device has stepped switches (stuffed with precision 1-percent resistors) instead of pots, no two will match each other, let alone a piece of software.
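All that patient dialing amounts to a search over interacting parameters for the deepest null. As a toy illustration (my own simplification, with a static hard-knee gain computer and no attack/release smoothing, so it's nothing like matching a real 1176), here's what a brute-force grid search over threshold and ratio looks like:

```python
import math

def compress(x, threshold_db, ratio):
    """Static (instantaneous) compressor gain computer: hard knee,
    no attack/release -- a deliberately simplified model."""
    out = []
    for s in x:
        level_db = 20 * math.log10(abs(s)) if s != 0 else -120.0
        if level_db > threshold_db:
            gain_db = (threshold_db - level_db) * (1 - 1 / ratio)
            s = s * 10 ** (gain_db / 20)
        out.append(s)
    return out

def match_settings(x, target, thresholds, ratios):
    """Grid-search threshold/ratio for the deepest null against `target`."""
    best = None
    for t in thresholds:
        for r in ratios:
            y = compress(x, t, r)
            residual = sum((a - b) ** 2 for a, b in zip(y, target))
            if best is None or residual < best[0]:
                best = (residual, t, r)
    return best[1], best[2]
```

Even this two-knob toy shows why the real job is slow: add knee, attack and release and the search space explodes, and every axis interacts with the others.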

Two hardware versions of the 1176, a Silver UREI and a Universal Audio black-face reissue, were compared along with Bomb Factory and generic plugs. No one thought to attempt a hardware null; it was late and we were on borrowed time. Needless to say, they weren’t even close. However, once under null scrutiny, the completely different distortion characteristics of each box were obvious.

Comparing equalizers and compressor/limiters — hard and soft — will, at minimum, enhance your tweaking skills. You can expect a lot of dialing and tweaking, but when that null starts to happen, it’s like a videogame, opening you up to a hidden world of distortion artifacts.


I went into this experiment with one bias and one expectation. I find the emphasis on replicating the graphic “skin” of vintage gear either distracting or a bit over the top. What I came to learn is that the graphics are easily accomplished and that our “desire” for signal processing on every channel puts a severe emphasis on making plug-ins efficient.

At some point, however, I am hoping that software emulation will evolve to the point where individual components and stages will be isolated — give me “just” the input attenuator and “ouncer” transformer of an 1176, for example. This would be proof to me that the ghosts in the machine are written into the code.

Eddie would like to thank Tom Tucker (for inspiring this experiment), Tom Garneau, Adam Krinsky, Colt Leeb, Steve Hodge, Peter Bregman, Dusty Miller, Colin McArdell, David Hedding (for supplying samples) and Jason Orris (for supplying hardware). Want more? Visit www.tangible-technology.com and click on the D2C link for samples.