
Toe to TOE

This month, I'm going to revisit a technology that I think will eventually replace Fibre Channel-based networks and save us all money in the deal! Now, disk storage is something most of us need in this digital world, and networked storage is the way to go if you have more than one computer in your place. Imagine working on a project in which you have to move files from one workstation to another. Rather than waiting for a file copy from one machine's drive to another to finish, or physically sneakernetting the drive, you can hang the drives themselves on your network. So, rather than working off direct-attached drives, you can make your hefty investment in disks available on the network to your whole place, all without a huge cash outlay. Less time twiddling your thumbs, more time getting stuff done.

Cast your mind back to September 2001, when I last talked about iSCSI, the scheme that allows SCSI commands to travel via IP protocols. Sixteen months have passed, and vendors are beginning to provide board-level products that fill some of the gaps in the needed equipment roster. One specific item that almost every installation requires is an HBA, or host bus adapter. HBAs are hardware devices — usually PCI boards — that provide an interface between the local host bus and some communication standard. A good example would be a $30 network interface card (NIC) that you plug into your computer to add Ethernet ports. The reason Ethernet HBAs are so cheap these days is that they provide the minimum amount of hardware to get the job done. What it doesn't say on the box is that there's absolutely no smarts to increase efficiency. Indeed, a server burdened with a heavy load of IP traffic will find most of its CPU cycles taken up processing those network packets — one of the fundamental problems of IP storage.

You see, if one of your computers is busy digesting a flurry of network traffic, it can hardly be called upon to pay sufficient attention to your host-based application that is trying to record an overdub in the foreground. Remember, I said you could configure your disks on different machines to appear on the network for everyone to use. Easy to do in either Win or Mac, but when you try to record to that network “volume” or disk, you may find the data throughput really sucks, with dropouts, or worse, as a result. This is especially true if you’re doing higher-sample rate or multichannel work. Here’s the thing: Most Ethernet hardware isn’t up to the task of doing more than out-of-real-time transfers, like file copying and Web surfing, and that’s where TOEs come in.

TOEs, or TCP offload engines, are chip-level hardware solutions that address the problem of interpreting the TCP/IP stack in software. Whoa, a what stack? TCP/IP, the Transmission Control Protocol/Internet Protocol, is the language that computers use to communicate over the Internet. TCP, which on a local network usually rides over Ethernet, is responsible for setting up and maintaining the end points of a network-data transaction and for making sure the data arrives intact and in order, while IP routes and delivers the individual packets to their destination. The Internet's architects brewed up the complete scheme and decided that, rather than using a monolithic, all-inclusive approach to the complex task of communicating over a network, portions of the job would be given out to separate processes in a modular fashion. These processes are conceptualized as "layers" in a hierarchical "stack" that cooperatively get the job done.
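If you're curious what all that layering buys the programmer, here's a minimal sketch in Python; the storage host name is hypothetical, though port 3260 really is the one registered for iSCSI. The application asks for a connection, and the stack does everything below it.

```python
# A minimal sketch of what the stack does for an application. The host
# name is hypothetical; port 3260 is the one registered for iSCSI.
import socket

# One call sets up a TCP connection. Beneath it, the OS's TCP layer
# handles the handshake, sequencing and retransmission, the IP layer
# handles addressing and routing, and the driver builds Ethernet frames.
with socket.create_connection(("storage.example.net", 3260), timeout=5) as conn:
    conn.sendall(b"hello, target")     # TCP segments and IP packets are built for us
    reply = conn.recv(4096)            # reassembly and in-order delivery, ditto
    print(f"got {len(reply)} bytes back")
```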

Unfortunately, that job usually requires a good bit of heavy lifting on the CPU’s part. At the very least, data-packet headers have to be read to glean the destination address. So, enterprising companies have baked the brains of a TCP software processor (that stack I mentioned earlier) into silicon, where it can sweat the gory details at “wire speed” (see sidebar), while the host’s CPU runs wild and free, so to speak.
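Just to give a flavor of the per-packet grunt work a TOE bakes into silicon, here's a rough software sketch that fishes the destination address out of an IPv4 header; the sample packet bytes are invented for illustration.

```python
# A rough software illustration of the per-packet bookkeeping a TOE moves
# into silicon: digging the destination address out of an IPv4 header.
# The sample bytes below are made up for illustration.
import socket
import struct

def ipv4_destination(packet: bytes) -> str:
    """Return the destination IP address of a raw IPv4 packet (no options)."""
    # The fixed IPv4 header is 20 bytes; the destination address is the
    # final 4-byte field.
    fields = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return socket.inet_ntoa(fields[9])

sample_header = bytes.fromhex(
    "4500003c1c4640004006a6ec"   # version/length, total length, ID, flags, TTL, protocol, checksum
    "c0a8010a"                   # source:      192.168.1.10
    "c0a80114"                   # destination: 192.168.1.20
)
print(ipv4_destination(sample_header))   # -> 192.168.1.20
```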

Earlier, I mentioned cost savings, and I've identified several areas where they show up. First, skilled TDs (technical dweebs) are in short supply, but many more TDs are fluent in TCP/IP than are knowledgeable about Fibre Channel, the de facto choice for networked storage. In addition, IP infrastructure, both hardware and services like metropolitan network connectivity, is inexpensive compared to FC, and IP networks can be scaled without interrupting the network. All of these factors combined translate into lower overall support costs.

Fibre Channel will never be cheap, but if you’ve got the need and the bucks to feed that need, then FC slakes the thirst for high-performance networked storage. On the other hand, Ethernet and IP are scalable, universal technologies. Ethernet is a commodity technology these days, even at Gigabit speeds. So, building a storage network with switched 1000Base-T and iSCSI is way cheaper than with Fibre Channel. (By the way, the no-nonsense performance of Gigabit Ethernet provides darn good throughput when viewed against the highly tailored architecture of Fibre Channel.) This doesn’t mean, however, that never the twain shall meet. In an early proof-of-performance demo, a server with an Alacritech Gigabit Ethernet HBA was connected to a Nishan IP storage switch via a single Gigabit Ethernet link. The Nishan switch was connected, in turn, to a Hitachi Freedom storage system, an enterprise-class FC product. The Alacritech accelerator sustained iSCSI data rates of more than 219 megabytes per second with less than 8% CPU utilization, while the Nishan switch provided wire-speed conversion from iSCSI to the Fibre Channel storage.
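For perspective, here's the back-of-the-envelope math on what "wire speed" means for a Gigabit Ethernet link, ignoring protocol overhead:

```python
# Back-of-the-envelope ceilings for Gigabit Ethernet, ignoring protocol
# overhead, just to put throughput claims in perspective.
GIGABIT_PER_SEC = 1_000_000_000          # bits per second on the wire

one_way_mb = GIGABIT_PER_SEC / 8 / 1_000_000   # megabytes per second, one direction
full_duplex_mb = 2 * one_way_mb                # both directions at once

print(f"Gigabit Ethernet, one direction: {one_way_mb:.0f} MB/s")      # 125 MB/s
print(f"Gigabit Ethernet, full duplex:   {full_duplex_mb:.0f} MB/s")  # 250 MB/s
```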

An important caveat: To many applications, different storage types are not equivalent. This has a great deal to do with the way that developers implement their applications. If an application makes “low-level calls,” whereby the software communicates directly with hardware (an internal ATA drive, for instance), then NAS and SAN become second-class citizens as far as that application is concerned. This programming method was sometimes required back in the Stone Age, when computers were slow. On the other hand, if an application communicates via appropriate abstractions provided by the operating system, then any storage supported by the OS should be equivalent. A modern, well-behaved DAW shouldn’t care what flavor of storage it’s using: DAS, NAS or SAN. This is especially true for host-based DAWs, because many hardware-based products haven’t quite caught up with state-of-the-art storage or networking technology. The upshot is that the more modern the application, the more likely it is to work seamlessly with iSCSI storage.
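By way of a sketch, here's what the "well-behaved" route looks like when writing a take through the operating system's file abstraction; the path and the (silent) audio data are hypothetical, and nothing in the code knows or cares what flavor of storage backs the volume.

```python
# A sketch of the "well-behaved" approach: write a take through the OS's
# file abstraction and let the OS worry about what's underneath. The path
# and the audio data are hypothetical.
import wave

def write_take(path: str, frames: bytes) -> None:
    """Write a 24-bit, 96 kHz stereo take to whatever storage backs `path`."""
    with wave.open(path, "wb") as w:
        w.setnchannels(2)          # stereo
        w.setsampwidth(3)          # 24-bit samples
        w.setframerate(96000)      # 96 kHz
        w.writeframes(frames)      # the OS maps this onto blocks or packets as needed

# The same call works whether the volume is a local ATA drive, an SMB or
# NFS share, or an iSCSI volume the OS has mounted.
one_second_of_silence = b"\x00" * (3 * 2 * 96000)
write_take("/Volumes/NetworkRAID/session/take01.wav", one_second_of_silence)
```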

A quick digression: DAS, or direct-attached storage, is the garden variety we all know and love, hardwired to a computer. The DAS label applies regardless of the attach method, whether it’s IDE/ATA, SCSI or FireWire. NAS, or network-attached storage, is storage hanging on a LAN, almost always using Ethernet and TCP, and can only provide file-level access. SANs (storage-area networks) almost always use Fibre Channel protocols and provide block-level access, letting a read or write request address the individual logical “blocks” on a disk that make up part of a file. For more gory details, check “Bitstream” from May 2000 at www.seneschal.net/papers/bitstream, when I first got into the subjects of SAN and NAS.
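To make the file-level versus block-level distinction concrete, here's a rough sketch contrasting the two; the share path and device node are made up, and poking at a raw device normally takes administrator privileges.

```python
# A rough contrast between the two access models. The share path and the
# device node are hypothetical, and raw device access normally requires
# administrator privileges.
import os

BLOCK = 4096

# File-level access (NAS): ask the server for a file by name and path;
# the server's file system figures out which blocks that means.
with open("/mnt/nas_share/mix/stems.wav", "rb") as f:
    first_chunk = f.read(BLOCK)

# Block-level access (SAN, iSCSI): address the disk itself by block
# number; laying a file system over those blocks is the host's problem.
fd = os.open("/dev/example_disk", os.O_RDONLY)
try:
    os.lseek(fd, 42 * BLOCK, os.SEEK_SET)   # jump straight to logical block 42
    raw_block = os.read(fd, BLOCK)          # read one 4 KB block
finally:
    os.close(fd)
```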

Late last year, SNIA, the Storage Networking Industry Association, submitted the iSCSI spec to the IETF, the Internet Engineering Task Force, which should freeze-dry it into an RFC, its version of a standard. Once the standard comes down, the vendors that are shipping product may have to adjust their firmware to accommodate any changes.

The first company to wade into iSCSI waters, Alacritech, has been shipping a variety of TOE-equipped, 100- and 1000Base-T HBAs and is still the leader. Alacritech was started in 1997 by industry visionary and groovy guy Larry Boucher, who serves as president and CEO. In a prior life, he was founder and CEO of Adaptec. Before that, he was director of design services at Shugart Associates, where he conceived the idea of the SCSI interface and authored its initial spec.

Strangely enough, Adaptec has also been prepping product, and Intel has the PRO/1000 T, a transitional HBA that substitutes software running on a general-purpose processor for a hardwired TOE. While the PRO/1000 allows skeptics to experiment on the cheap, it doesn’t have the wherewithal to do the job in a production environment.

So, will iSCSI be the savior of dweebkind? As if, but it will lead to a blurring of network and storage functions, all the while contributing to that seemingly inevitable decline in computing costs we’ve all come to expect.

This column was written while under the influence of Charlie Mingus’ exuberant “Moanin’,” which was recorded by the late, great Tom Dowd. His exceptional talent and amicable demeanor will be sorely missed.

PEDANT IN A BOX

Wire speed: This month’s jargon, “wire speed,” means that a process or algorithm runs très rapidement, very fast. The implication is that it’s running in hardware, with the process designed into a chip-level device rather than some general-purpose CPU, DSP or FPLA doing the job in software.

A central processing unit (CPU) is the brains inside most computer-based devices. CPUs come in two basic varieties: CISC, or complex instruction-set computers, are old-school, general-purpose devices that are broadly capable in a brute force way, sort of like a Chevy Camaro. The other approach to CPU design, RISC, or reduced instruction-set computers, are only capable of a streamlined number of tasks, but they perform those select tasks with great alacrity. This is akin to BMW’s Mini against that Camaro. Intel and AMD make CISC CPUs typically clocked close to 2 GHz, while Motorola, Sun and IBM make more efficient RISC CPUs clocked at around 1 GHz.

Digital signal processors take RISC one step further and limit their computational skills to only those used to transform a digitized signal, whether audio, video, radar, whatever. Analog Devices’ SHARC, Texas Instruments’ TMS320 and Motorola’s 56k families are all DSPs.

Field-programmable logic arrays and their big brothers, FPGAs, are chips so general-purpose that they have no personality at all: chip-level collections of logic functions that can be electronically wired together in almost any combination, all in an instant. They’re used to provide hardware versatility when a designer doesn’t want to commit to a specific chip, or when some esoteric function can’t be realized with an off-the-shelf part. Xilinx and Altera are two programmable-logic vendors whose products show up all the time in digital audio gear.
OMas
