On page 18 in the section “Mix Looks Back” [“Current”], you have Debby Boone's “You Light Up My Life” as having no studio information available. I was the engineer, and the studio was A&R Recording, Studio A1, in New York. Refer to pages 221 and 232 of Mix, August 1998, for more details.
Incidentally, I think Geoff Emerick must have worked at a different Abbey Road from mine. One time, well before 1962, a setup engineer refused to install a limiter I requested, pointing out that there was a memo forbidding the use of limiters on a session! I took him up to the manager, who, in no uncertain terms, informed him that the engineer in charge of the session is to be given anything necessary to get the sound he wants. That memo was canceled forthwith!
That was the only time in my entire tenure at Abbey Road — 1958 to 1968 — that anybody told me what I could not do. And that included very close miking of drums and anything else I thought necessary to get the sound I wanted.
New York City
LATENCY IS AS LATENCY DOES
Ned Mann's article on DAWs (“The Great DAW Challenge,” October 2002) contains some significant errors when he touches on latency in “native” systems. He writes: “…fantasy than a reality when it comes to large files. The song that starts out with 16 tracks and a somewhat livable 3ms delay setting will die an ugly death when it reaches 64 tracks of audio, 128 plug-ins and 16 virtual instruments. In order to record a session with this file, all of the tracks would have to be bounced to disk.”
I can't argue with Ned's empirical observations here, but this is not about latency; it's about system load. Latency is a function of the audio-interface hardware, the audio-interface device driver, the operating system scheduler and the application software. Any properly written DAW will exhibit “all-or-nothing” behavior with respect to latency: It can either meet the current latency setting or it cannot. When it can't, nothing can be done except reduce the load on the system. If Ned has a system whose behavior with respect to latency degrades in a somewhat linear fashion as the load increases, he should consider using a different DAW.
Ned also writes: “This is not the DAW dream. Although many interfaces feature ‘no latency’ inputs, these are generally limited to a stereo pair, with the live inputs combined with the stereo mix from the sequencer and fed directly to the interface's output.”
This simply isn't true of any of the audio interfaces one would seriously consider for pro work. The RME Hammerfall series, for example, does not suffer from the limitations that Ned describes, and neither do any of the many interfaces based on the ICE1712 chipset. If you are interested in pro work, then the first thing to be sure of is that you are using an audio interface that is properly designed for such work, and most of the interfaces covered in Mix and other magazines are not: They are aimed at simultaneous recording of stereo pairs and not much else, even when they do provide multichannel I/O.
He goes on: “…necessary for the pro user. For the moment, at least, Digidesign dominates this approach. In addition to zero latency on input, Digi hardware also provides confidence monitoring, extensive DSP power, high track counts and extended sample rates. It also offers compatibility with most major studios, in addition to the highest reliability.”
True compatibility comes from protocols and connectors, which is why things like S/PDIF and ADAT are so important. I would also point out that Digidesign hardware and software are entirely proprietary. If Digidesign ever folds — it may seem unlikely at this point, but corporate America has witnessed bigger surprises — your investment in this technology will have reached a dead end. Many companies that make audio hardware have cooperated with free software developers so that the information needed to interact with and control the hardware is public and visible. Even if these companies go bankrupt or shift their focus away from current products, it will still be possible to use their hardware.
Linux Audio Systems
Paul Davis misunderstands some important issues raised in my article. The total system load (i.e., plug-ins, native synths) that a given DAW can handle is determined by a number of factors, but the most important factor in deciding what a given system is capable of is clearly the sample-buffer setting for the hardware driver.
This sample buffer governs the size of the data packet that the CPU will process. A smaller buffer fills up more quickly, resulting in less monitoring delay on incoming signals (those being passed through the A/D converters, processed by the CPU and routed back out the D/A converters). However, processing data in smaller packets places a higher load on the CPU, which results in reduced power for plug-ins. Therefore, in order to record live musicians, low sample-buffer settings are used, whereas high settings are used for mixing.
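The trade-off described above can be made concrete with a little arithmetic: one buffer's worth of delay is simply the buffer size in frames divided by the sample rate. The sketch below is illustrative only — the function name is hypothetical, and real drivers typically add further delay (double or triple buffering, converter latency) on top of this single-buffer figure.

```python
# Approximate monitoring delay implied by a driver's sample-buffer setting.
# NOTE: a rough sketch; actual round-trip latency is usually higher because
# drivers queue more than one buffer and converters add their own delay.

def buffer_latency_ms(frames: int, sample_rate: int = 44100) -> float:
    """Delay, in milliseconds, contributed by one buffer of `frames` samples."""
    return frames / sample_rate * 1000.0

if __name__ == "__main__":
    for frames in (64, 128, 256, 1024):
        print(f"{frames:>5} frames -> {buffer_latency_ms(frames):.1f} ms per buffer")
```

At 44.1 kHz, a 128-frame buffer contributes roughly 2.9 ms per buffer, while a 1024-frame buffer contributes about 23 ms — which is why small buffers are chosen for tracking and large ones for mixing, at the cost of more CPU interrupts per second.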
My example showed that while files start small (and allow for small sample-buffer settings), as the file grows, it is very possible to begin to receive error messages from all native-based DAWs, regardless of how fast the CPU is. Switching to a higher buffer setting is fine if you have “printed” all of your tracks to disk. However, if you are triggering live MIDI tracks in the mix, these will all have to be re-adjusted to compensate for the delay. A last-minute guitar OD can pose real problems for studios running maxed-out DAWs because they can't reduce the sample buffer to an acceptable level. In contrast, Digidesign's HD and TDM systems avoid this by using DSP power (not the system CPU) and can monitor 128 tracks with virtually no delay — all with EQ and compression. Enough said.
Paul correctly points out that there are interfaces that use DSP acceleration to provide "low-latency" settings (such as the Hammerfall and the new MOTU PCI-424 card), which they achieve by keeping the incoming audio off of the system bus completely. Importantly, MOTU's control panel for the just-released PCI-424 card lets DAWs combine two types of tracks at the same time: those with low latency (e.g., a singer) and those with high latency (e.g., a native reverb). This eliminates the need for an external mixer and reverb when tracking. Users will also be able to run large systems more efficiently at higher sample-buffer settings, providing more "power."
While this is an important step in the progression of native systems, one must bear in mind that these zero-latency tracks never hit the system bus and therefore cannot use native EQ, compression or other plug-ins. If you have a Neve handy, that's not a big deal. But for most engineers tracking a live date, the lack of EQ and compression is huge. Despite the gains made in monitoring, DSP cards (such as Digi's HD systems, the TC PowerCore or Universal Audio's powered plug-in card) still add power to native systems in very important and compelling ways.
— Ned Mann
Send feedback to Mix: mixeditorial@primediabusiness.com