Infini-What?

IMPROVING THE NETWORK BUS

Ah, September. A time when latent color emerges from behind the green of summer. When the latest retro tube gear struts its stuff at AES, and a young man’s fancy turns to improving local bus architecture. InfiniBand, to be precise.

For those of you who rely on computers, speed is of the essence. Unfortunately, technology marches forward while the gear you have doesn’t magically morph into the latest model. It just sits there and feels slower every day. So when it’s time to upgrade, choose wisely, young Skywalker. Today’s CPUs are hampered by local bus limitations, and even with PCI-X around the corner, compute- and I/O-intensive processes like real-time encoding, serving and streaming require as much speed as possible. InfiniBand equals speed. Lots of it.

At the recent Applied Computing Conference and Expo in sunny Santa Clara, a stone’s throw away from one of my favorite roller coasters and IMAX theaters, I met with the InfiniBand Trade Association (ITA) to discuss their new religion. Comprising seven founding members, “the association is dedicated to developing a new common I/O specification to deliver a channel-based, switched fabric technology that the entire industry can adopt.” The ITA’s top-tier steering committee has signed up more than 140 implementer companies, all eager to move local bus science forward.

InfiniBand is a switch-fabric architecture, sort of like Fibre Channel. A switch-fabric architecture decouples I/O operations from memory by using channel-based, point-to-point connections rather than the shared-bus, load-and-store configuration of older technologies. The predicted benefits of the InfiniBand specification are improved performance and ease of use, lower latency, built-in security and better quality of service, or QoS. Sounds like a plan.
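To make that idea concrete, here is a minimal sketch in Python. It is not InfiniBand’s actual programming interface; the class and endpoint names are invented for illustration. It just shows the shape of the thing: on a shared bus every device arbitrates for the same resource, while in a switched fabric each pair of endpoints gets its own channel, so one transfer doesn’t queue up behind another.

    # Conceptual sketch only; not InfiniBand's real programming interface.
    # Each endpoint pair gets its own channel (modeled here as a queue)
    # instead of everyone contending for one shared bus.
    from collections import defaultdict
    from queue import Queue

    class SwitchedFabric:
        def __init__(self):
            # one independent channel per (source, destination) pair
            self._channels = defaultdict(Queue)

        def send(self, src: str, dst: str, message: bytes) -> None:
            self._channels[(src, dst)].put(message)

        def receive(self, src: str, dst: str) -> bytes:
            return self._channels[(src, dst)].get()

    fabric = SwitchedFabric()
    fabric.send("server-a", "raid-box", b"write block 42")
    print(fabric.receive("server-a", "raid-box"))   # b'write block 42'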

As clock rates spiral ever upward, serial communication eliminates the skew and other difficulties of getting a bunch of parallel signals to work harmoniously. Designed specifically to address interserver I/O, InfiniBand’s physical implementation is a two-pair serial connection rather than PCI’s parallel approach. Basic “X1” links (see Glossary) operate at 2.5 gigabits per second and can be aggregated into larger pipes or “link widths” of X4 or X12, equivalent to 10 or 30 Gbps. Because the links use 8b/10b encoding, each 2.5Gbps lane carries 2 Gbps of actual data, which works out to usable, bi-directional bandwidth of 0.5, 2 and 6 GB/second, respectively. Feel the power!
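Here is the back-of-the-napkin math in Python, if you care to check it. The 2.5 Gbps signaling rate and the X1/X4/X12 widths are the figures quoted above; the 8b/10b encoding efficiency is an assumption about the physical layer, and the constant and function names are mine.

    # Back-of-the-napkin InfiniBand bandwidth math.
    SIGNAL_RATE_GBPS = 2.5       # raw signaling rate of a basic X1 link
    ENCODING_EFFICIENCY = 0.8    # assumption: 8b/10b, so 80% of bits are data

    def usable_gb_per_sec(link_width: int, bidirectional: bool = True) -> float:
        """Usable bandwidth in GB/s for an X1, X4 or X12 link."""
        data_gbps = SIGNAL_RATE_GBPS * ENCODING_EFFICIENCY * link_width
        directions = 2 if bidirectional else 1
        return data_gbps * directions / 8   # 8 bits per byte

    for width in (1, 4, 12):
        print(f"X{width}: {usable_gb_per_sec(width):.1f} GB/s bi-directional")
    # Prints 0.5, 2.0 and 6.0 GB/s, the figures quoted above.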

Once you have a handful of servers and some peripheral devices, you’ll want to hook ’em up. This is accomplished through a switch. InfiniBand switch architecture has been designed to accommodate more than 48,000 simultaneous connections, allowing complex meshes to be built. And since IPv6 addressing is used, there’s no lack of valid address space. These multiple, autonomous point-to-point links can span 17 meters via copper and over 100 meters on glass. Any link can be assigned to one of 16 “virtual lanes” or priority levels to fulfill QoS requirements. And redundant parallel links can be established to ease availability worries. Multiple switches and subnets can be interconnected via routers, carrying both IP and InfiniBand traffic hither and yon.
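As a toy illustration of what those 16 priority levels buy you, here is a sketch of mapping traffic classes onto virtual lanes. The class names and lane numbers are invented for this example, not taken from the specification; a real fabric would configure this centrally rather than in application code.

    # Illustrative only: a toy mapping of traffic classes onto 16 priority
    # levels ("virtual lanes"). Class names and lane numbers are made up.
    VIRTUAL_LANES = 16

    lane_for_class = {
        "management": 15,        # assumption: top lane kept for fabric management
        "realtime_stream": 12,   # latency-sensitive media traffic
        "cluster_ipc": 8,
        "bulk_backup": 1,
    }

    def assign_lane(traffic_class: str) -> int:
        """Pick a virtual lane for a flow, defaulting to best-effort lane 0."""
        lane = lane_for_class.get(traffic_class, 0)
        if not 0 <= lane < VIRTUAL_LANES:
            raise ValueError(f"lane {lane} out of range")
        return lane

    print(assign_lane("realtime_stream"))  # -> 12
    print(assign_lane("web_traffic"))      # unknown class falls back to lane 0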

When first rolled out, InfiniBand will ratchet up the speed at which servers communicate, making high-performance cluster configurations more practical. In the long term, this should make multiprocessors with greater than two CPUs less attractive. Once mature, we may also see InfiniBand appearing in consumer products to replace local buses of lesser prowess. Think commodity Intel boxes loaded with FreeBSD doing all your heavy lifting for the Web. High performance, combined with ease of installation, is tough to beat. There’s a good deal more work to be done, but IB-equipped uber-servers should appear next year.
