

The Audio Pipeline


When it comes to audio technology, pros are often so consumed with learning what they need to know for practical purposes — keeping a session moving smoothly — that they never really dig deep and understand the less-tangible aspects of what makes their gear tick. This is even more true with computers, especially in that grayest area of computing: data transfer. We are aware of the many different standards, taking for granted reports that some are better than others in high-performance environments. But truly understanding the various options for moving audio — essentially data — in and out of a DAW is vital to get the most out of a system.

To learn more about what’s happening under the hood, we’re taking a theoretical look at the ins and outs of today’s DAW environment by examining the various data-transfer protocols. Taking a down-the-wire perspective, we’ll examine the entire data flow chain — from the back of the A/D converter to the computer to a storage drive and all the way back again to the final D/A stage. (This is not to be confused with front-side or patching protocols such as S/PDIF or AES/EBU digital audio signals, which are not relevant to this article.)

We’ll be hitting on terms and protocols in one of two different contexts: as a method of transfer to and from a data storage device and as an audio interface connection protocol. Although there may be many possible routes and handovers in an elaborate DAW setup that comprises many storage devices and audio interfaces, it’s important to realize that there are just two types of data transfers: file-based storage and retrieval between drives, and real-time transfer between interfaces. Differentiating between each protocol’s benefits, even though some may be used for both purposes, is crucial to designing a system for optimum performance and can lead to better track counts, lowered system overhead and more.

Dating back to 1979 and a company called Shugart Associates (presently Seagate in Scotts Valley, Calif.), the Small Computer System Interface, or SCSI, is the oldest and most evolved data drive-interfacing protocol still in use. Like the protocols that followed, SCSI went through numerous revisions and performance enhancements. From the original (and now retired) SCSI-1 standard, which pushed 5 MB/sec down a “narrow” 8-bit path, to the strengthened reliability and expanded bandwidth of 20MB/sec SCSI-2 in 1985, SCSI was the only game in town for nearly a decade. In the early 1990s, the announcement of SCSI-3 brought a collection of standards that would spawn three of the most common SCSI revisions seen today: Ultra2-Wide, Ultra160 and Ultra320. Ultra2-Wide, despite its inappropriate name, is a subset of SCSI-3 and operates on a 2-byte-wide data path at a speed of 40 MHz for a maximum bandwidth of 80 MB/sec. Ultra160 uses double-clocking to achieve 160 MB/sec, while Ultra320 doubles the clock rate to 80 MHz for a maximum speed of 320 MB/sec.
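The arithmetic behind those peak figures is straightforward: bus width in bytes, times clock rate, times transfers per clock. A quick illustrative sketch in Python (the function name is our own, not part of any spec):

```python
def parallel_scsi_bandwidth(bus_width_bytes, clock_mhz, transfers_per_clock=1):
    """Peak bandwidth of a parallel SCSI bus, in MB/sec."""
    return bus_width_bytes * clock_mhz * transfers_per_clock

# Ultra2-Wide: 2-byte path at 40 MHz, one transfer per clock
print(parallel_scsi_bandwidth(2, 40))        # 80 MB/sec
# Ultra160: same 40 MHz clock, but double-clocked transfers
print(parallel_scsi_bandwidth(2, 40, 2))     # 160 MB/sec
# Ultra320: clock doubled to 80 MHz, still double-clocked
print(parallel_scsi_bandwidth(2, 80, 2))     # 320 MB/sec
```

These are theoretical bus ceilings; as discussed later, real drives sustain far less.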

Other data drive protocols would come along to give SCSI a run for its money. Parallel ATA, or PATA (also known as IDE), gained popularity in the late 1980s and early 1990s as an economical drive format for the average PC. However, the original standard was not well-suited to support the growing size and performance needs of a newer breed of hard disks. Several companies jumped at the chance to reinvent ATA with faster transfer rates and enhanced features. Seagate broke out of the gates with Fast ATA and, soon after, Fast ATA-2. Meanwhile, Western Digital developed Enhanced IDE (EIDE), a somewhat different ATA feature-set expansion supporting higher-speed transfer modes, non-hard-disk ATAPI devices (CD-ROM drives, etc.) and dual IDE/ATA channels. Having undergone numerous revisions over the years (including UltraATA), the underlying parallel standard finally reached the limits of its capabilities.

To address these limitations, the new Serial ATA (SATA) interface was developed in 2001. Shedding the bulky ribbon cables of a parallel connection in favor of thin serial cabling, SATA greatly reduces electrical noise interference, allowing for much higher clock speeds. Whereas UltraATA/133 (the pinnacle of ATA’s development) allows for burst-rate data transfers of up to 133 MB/sec, SATA starts at 150 MB/sec. There are working plans to ramp that speed up to 300 MB/sec with a SATA II standard and eventually 600 MB/sec in a third generation.

Similarly, Serial Attached SCSI (SAS) moves data in a single stream and does so much faster than parallel technology because it is not tied to a particular clock speed. Serial technology wraps many bits of data into packets and then transfers the packets at a much higher speed than parallel (up to 30 times faster) down the wire to or from the host.

Up to this point, these protocols are only for data drive devices. As the world has moved further into the digital lifestyle, manufacturers looked for ways to connect other data-intense devices and peripherals to computers. Enter USB, which promised an inexpensive way to connect slow- and mid-speed peripherals such as keyboards, mice and digital cameras to the PC. In its initial form, USB 1.1, the main attractions were relatively low cost and “hot-pluggability,” but its top-end speed of 12 Mbps was far too slow for most multimedia streaming — let alone professional audio — and completely useless for interfacing hard drives. The next major release was revision 2, which included a new optional high-speed mode; USB 2 devices designed to take advantage of that mode could reach a defined maximum of 480 Mbps. For a variety of reasons, USB 2 never really lit up the pro audio environment.

We should humbly admit that our beloved audio industry is probably near the bottom of the pecking order when researchers dream up new networking and data-transfer technologies for the market. We’re often left tapping into consumer product technologies, a much more financially rewarding reservoir for frontier technology developers and manufacturers. One of the key technologies fueling this drive has been the serial I/O standard IEEE 1394 — also known as FireWire (Apple), iLink (Sony) and DV (digital video; used on video camcorders).

FireWire — where’d we be without it? Posing as the be-all, end-all of our data-transferring needs, it’s been hyped to death at the commercial level and heavily adopted by the pro audio community. Nearly a decade ago, FireWire-A (FW-400) hit the spot with a maximum raw data rate of 400 Mbps (50 MB/sec); it was hot-pluggable and dozens of devices could exist in a network. In 2003, FireWire-B (FW-800) hit the streets with blistering data rates set incrementally at 800 Mbps (100 MB/sec) and 1,600 Mbps/1.6 Gbps (200 MB/sec) — with architectural support for a staggering 3,200 Mbps/3.2 Gbps (400 MB/sec) in the future. Not bad for a “consumer” protocol. (For a quick pipeline spec comparison, please see the chart on page 72.)
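Serial line rates are quoted in megabits per second, so dividing by eight yields the megabyte figures given above. A throwaway Python conversion, purely for illustration:

```python
def mbps_to_mb_per_sec(mbps):
    """Convert a raw serial line rate in megabits/sec to megabytes/sec."""
    return mbps / 8

# The FireWire family, raw rates in Mbps
for name, rate in [("FW-400", 400), ("FW-800", 800),
                   ("FW-1600", 1600), ("FW-3200", 3200)]:
    print(f"{name}: {rate} Mbps = {mbps_to_mb_per_sec(rate):.0f} MB/sec")
```

Keep in mind these are raw signaling rates; protocol overhead takes its cut before any audio moves.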

Drive interfaces are plentiful and finding the best option often puzzles DAW owners. With most computer systems natively supporting ATA/IDE internally and FireWire-A externally, this combo seems to present a predetermined solution — though SATA and SCSI proponents would have you believe that switching out for one of these formats is the way to go.

Due to its low cost per gigabyte, SATA will continue as the prevalent disk interface technology in desktop PCs, sub-entry servers and networked storage systems where cost is a primary concern. In fact, the industry is replacing parallel ATA with SATA, and many years may pass before we’ll get to realize its speed potential. At present, we’re unable to fully tap into SATA’s top-end speed on a single-drive system, as current SATA drive technologies are limited to about 60 to 65 MB/sec maximum sustained throughput.

Meanwhile, FireWire is entering its second-generation speed bump, though the 400Mbps FireWire-A remains the de facto standard. The interface’s major selling points are convenience and flexibility in network design. With FireWire-A allowing cabling distances up to 4.5 meters in length and FireWire-B supporting distances upward of 100 meters, multiroom facilities can share drives in ways previously not possible in this price range. For the multi-operator facility, all FireWire versions offer support for up to 63 devices to be connected via a single bus, offering peer-to-peer connectivity and enabling multiple computers and FireWire devices to be connected simultaneously.

“One major advantage that FireWire has is an isochronous transfer mode,” says Dave Anderson, Seagate’s director of strategic planning. “This can deliver data in a more deterministic flow. A higher-data-rate device may sustain a greater average throughput, but with a media application like audio or video, it is equally important that the data arrive in a predictable manner and not in bursts.”

Of the storage interfaces used today, only 1394/FireWire has this property. If you use an interface without an isochronous mode, then your system must buffer greater amounts of data to smooth out the flow between possible bursts. Even so, the interface far outperforms the drive technology currently available for it. As in the SATA scenario, current drives pin 1394’s 100MB/sec maximum throughput down to about 60 MB/sec.
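To put that roughly 60MB/sec ceiling in DAW terms, here is a back-of-envelope sketch of our own. It assumes uncompressed mono tracks and ignores seek overhead, bursts and driver factors, so treat the result as a theoretical upper bound, not a real-world track count:

```python
def max_track_count(sustained_mb_per_sec, sample_rate_hz=48_000, bit_depth=24):
    """Rough ceiling on simultaneous mono tracks a drive can stream."""
    bytes_per_sec_per_track = sample_rate_hz * (bit_depth // 8)
    return int(sustained_mb_per_sec * 1_000_000 // bytes_per_sec_per_track)

# A drive sustaining ~60 MB/sec, streaming 24-bit/48kHz audio
print(max_track_count(60))           # on paper, hundreds of tracks
# Doubling the sample rate to 96 kHz halves the ceiling
print(max_track_count(60, 96_000))
```

Real sessions fall far short of these numbers once seeks, crossfades and bus contention enter the picture, which is exactly why sustained throughput, not interface speed, is the figure to watch.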

Despite its age, SCSI has held in there for the long haul. Still the priciest of the interfacing technologies, it continues to reign dominant in power-user systems where speed and reliability are foremost. “There’s no question that SCSI is the most reliable, though not because of the interface,” says Anderson. “Drives designed for server and enterprise applications meet reliability and performance criteria far above those for desktop applications. If reliability is key — and especially if the workload is going to be heavy — SCSI would be the best choice.”

Its prowess for reaching top-end speeds of 320 MB/sec comes from the fact that modern SCSI operates on what’s known as low-voltage differential (LVD) as opposed to so-called “single-ended” systems such as ATA/IDE. Essentially, single-ended systems are analogous to unbalanced audio cables and are susceptible to picking up noise in the same way. On the other hand, differential signal paths act much like balanced audio and are more immune to noise and high-frequency loss, allowing for higher data rates and longer cable lengths.

SCSI-based systems do have their drawbacks. Unlike FireWire devices, which are plug-and-play, SCSI requires that each device be assigned its own ID and can’t typically be hot-swapped without restarting your computer. (This is more an OS issue and not directly SCSI’s fault.) Also, SCSI drives have a maximum storage capacity of 146 GB, making their cost-per-GB soar compared to the 400 GB and more that you can get per ATA/SATA drive.

On the flipside, FireWire is a more complex protocol, so it places a greater load on the CPU to unpack the data as it arrives. “This means that performance is more likely to vary with changing versions of the FireWire driver, or even changing versions of the OS if the OS provides part of this support,” says David Gibbons, senior director of product marketing at Digidesign. “Notwithstanding these drawbacks, we’ve found the performances of FireWire drives to be excellent and qualify our highest track-count numbers with FireWire and SCSI.”

Gibbons adds that considering the most common interfaces used today, his people aren’t seeing differences in overall performance that are strongly related to the interface’s “nominal speed.” “We are seeing some differences, but they have to do with other factors, such as driver optimization, rotational speed, seek time, OS version, packet overhead, bus contention, et cetera,” he says.

For the data-transfer leg from the audio interface to the computer, pro DAW users have a choice only between FireWire and PCI adapter card-based solutions. Depending on the data storage interface of choice, system overhead and data bandwidth issues may dictate one over the other.

First, let’s take a look at the increasingly popular all-FireWire solution. Consider a host system with a single FW-400 bus (adapter card- or motherboard-based), a single FireWire audio hard drive and one, two or more FireWire audio interfaces connected: Each of these devices must share the highway. Adding more drives or increasing I/O will add potential traffic.

Computers generally have a single FireWire bus. Adding a FireWire adapter gives you an entirely new bus to work with, so your internal FW-400 bus and a FW-400 adapter card on the PCI bus will give you 2×400 Mbps in bandwidth. Likewise, higher-bandwidth buses and adapters work similarly (i.e., 2×800 Mbps or 800 Mbps+400 Mbps). The PCI bandwidth in today’s computers is more than enough to support multiple FW-800 buses. It is important to keep in mind that the bandwidth is only one part of the equation.

“Say you have a three-lane highway with a speed limit of 65 mph and the average car is 13 feet long,” offers Max Gutnik, director of sales at Apogee Digital, sitting alongside his engineering partner, Kevin Vanwulpen. “If we draw a line across the highway, about 1,320 cars per minute can cross that line. In practice, there needs to be some room between cars, and then there is the issue of cars getting on and off the highway and needing to change lanes.”

The data-handling protocols will determine how this happens. Obviously, having a highway that barely fits the amount of traffic just won’t do the job; you need higher bandwidth to handle the variances that will occur or you will experience gridlock.
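Gutnik’s highway numbers check out. A quick Python rendering of his analogy (our own illustration, not anything from Apogee):

```python
FEET_PER_MILE = 5_280

def cars_per_minute(lanes, speed_mph, car_length_ft):
    """Bumper-to-bumper throughput across a line drawn on the highway."""
    feet_per_minute = speed_mph * FEET_PER_MILE / 60
    return lanes * feet_per_minute / car_length_ft

print(cars_per_minute(3, 65, 13))   # 1320.0 cars per minute
```

The analogy’s point survives the arithmetic: the theoretical ceiling assumes zero gaps between cars, so a bus that is merely “big enough” on paper leaves no margin for merging traffic.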

“In the case of PCI in a PC or Mac,” Gutnik continues, “it’s important to know that almost everything in the system uses that bus — your audio cards, controller chips for your hard drives, maybe your graphics cards and so forth.” The more headroom you have on this bus, the better the performance. Even then, there are variances in bus speeds and chipsets that will determine the system’s capabilities.

Still, PCI is much “closer” to the CPU and benefits from its direct integration with the motherboard and high speed. Although it typically runs at 33 or 66 MHz, PCI is a 32- or 64-bit-wide bus! It’s also possible to move audio data through it with fewer stages of buffering than FireWire or USB.

“This difference makes low-latency interface design easy and ensures that PCI devices can transfer their data whenever it’s critical for continuity of the audio stream,” says Gibbons. “You won’t get clicks or pops or interruptions, even when transferring lots of data in short time periods. Doing this with live audio [from interfaces and converters] is a different challenge to working with hard drives where you can use lots of RAM to buffer up the playback or record data.”

Digidesign is credited with taking this logic to the extreme with its proprietary DigiLink interconnect between the company’s Pro Tools|HD core cards and external audio interfaces. Gibbons was quick to point out that DigiLink is not based on FireWire or any other common computer protocol and, combined with other favorable design factors, allows Pro Tools hardware to pass audio from input to output without intervention from the CPU within a window of just a few samples. This compares to a minimum of 32 samples, and often more, with host- or FireWire-based systems.
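Those sample counts translate into fractions of a millisecond of latency. A quick sketch of the conversion (our own; the 48kHz default is an assumed session rate):

```python
def samples_to_ms(samples, sample_rate_hz=48_000):
    """Latency contributed by a buffer of N samples, in milliseconds."""
    return samples / sample_rate_hz * 1_000

# A 32-sample minimum buffer at 48 kHz
print(round(samples_to_ms(32), 2))   # 0.67 ms
# "A few samples" lands well under a tenth of a millisecond
print(samples_to_ms(3))
```

Note that this is only the buffering stage; converter delay and driver overhead stack on top of it.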

However, there are tradeoffs. “PCI can’t be easily extended outside of the computer motherboard, forcing you to place your highly sensitive audio processing electronics in the electronically harsh environment inside the computer,” notes Gibbons. “And, of course, you lose the ability to hot-plug.”

Theory is fine, but what’s the magical combination of speed and efficiency to create an ideal data-transfer interface? There isn’t one. The diversity of users’ needs often dictates system interfacing choices more than the underlying technological efficiencies.

The reason many prefer SCSI for their disk interfaces and PCI audio adapter cards over FireWire likely stems from the fact that SCSI is closed: It’s used only for drives and disc burners. FireWire, on the other hand, is open to a wide variance of peripherals and devices and is therefore subject to dramatic fluctuations in bandwidth. If FireWire is used in a consistent and limited environment where the bandwidth is adequate, it will be just as reliable as SCSI, as long as peripherals are not added or removed from the chain. “In general, the FireWire bus cannot provide lower latency than the PCI or PCI-e bus,” says Gutnik. “Consequently, PCI will always be a faster protocol than FireWire.” Implementing RAID (Redundant Array of Independent Disks) configurations has become a favored solution for tapping into the unused bandwidth of SCSI and ATA using multiple drives.

I haven’t brought up Fibre Channel in these comparisons, and I’m sure I hear one or two of you screaming because I left it out. Though it isn’t too expensive from a drive perspective, Fibre Channel is not so much a protocol as an interconnect technology, one ideally suited to highly reliable SAN topologies. It offers a serial data-transfer architecture of up to 2 Gbps that can carry SCSI, IP and other protocols, but unless the application calls for multiple users sharing access to a common library of data (common in video post and broadcast), Fibre Channel can seem far more complex than is necessary.


Click here for table of specifications

These protocols are always evolving. FireWire has two major speed revisions ahead, including the prospects of new encoding/compression schemes and simultaneous transmit and receive. Meanwhile, SATA and SAS are still in their infancy, and the move to 64-bit computing shows promise for a whole world of “better, faster” protocols.

“Interface wise, the audio industry has produced a few new standards for interconnection including SuperMAC/HyperMAC from Sony, and in the world of live sound, Ethersound and CobraNet,” notes Gibbons. “Some of these may be applicable to studio production situations, too.”

On the storage front, there is a trend toward providing network-attached storage (NAS), which means that drives simply plug into your Ethernet hub without a server. Gibbons points out that although these drives are currently more expensive than their FireWire counterparts, the arrival of Gigabit Ethernet makes this an interesting direction. Similarly, Flash drives, which have previously been hamstrung by the length of rewrite cycles, are steadily increasing in performance. “I have high hopes for very compact, high-capacity, high-track-count [Flash] devices in the near future that are applicable for pro audio,” Gibbons forecasts.

Gutnik anticipates several technologies coexisting in parallel, citing the most near-term potential for FireWire. He notes that of the many different types of FireWire implementations currently on the market, proprietary systems with custom drivers dominate. Apogee, he says, is committed to working with the industry to build “class-compliant” FireWire solutions, meaning that they are designed to adhere to an industry-standard protocol for FireWire-based audio only.

“A class-compliant standardized protocol will have huge benefits for everyone in the industry,” claims Gutnik. “Just imagine if MIDI used a different protocol for each manufacturer. Only devices made by that particular manufacturer would work together. This is how FireWire is today. You cannot, for example, connect a Digidesign 002 to a MOTU 2408 and a Rosetta 800 via FireWire. But once the class-compliant route is adopted, everyone will adhere to a single specification and FireWire will be as universal as MIDI is today.”

Jason Scott Alexander is an Ottawa, Canada-based A&R executive, producer, remixer and freelance writer specializing in music technology, convergent media and entertainment technology law.

