

The Quest for Power


We all want more power. Admit it. When it comes to the gear we all lust after — whether it’s a Massenburg EQ or the hottest new plug-in — it never seems to be enough. The same can be said for raw DSP power: As soon as we finish installing the latest, greatest CPU, software developers figure out something to overwhelm it.

The concept that a single computer will eventually run everything in your entire studio is slowly giving way to a growing awareness that the work will need to be divided among several CPUs, each dedicated to a specific task. Before you reach for the Advil, consider that most studios today are already dependent on multiple CPUs. From console automation to hardware synths to reverbs, studios are actually running dozens of CPUs — we just don’t think of it that way.

In my previous article (October 2002), I detailed some ins and outs of setting up a high-end DAW and the awesome power that can be harnessed on a single CPU. Correctly configured, the single CPU can handle most users’ needs. However, I also touched on some limitations (conflicts between different programs and the eventual taxing of even the fastest CPU) that users run into when trying to expand systems to illogical extremes.

For users who want unlimited power, native sound sources and instant recall, the multiple-CPU scenario is the current state of the art. Multi-CPU setups take many forms, but, in general, break down into two basic groups: users who want to isolate sampler responsibilities and those who want to integrate additional sequencer environments.

The first scenario (an additional CPU to handle virtual synths), though uncommon even a few years ago, is now becoming a de facto standard. GigaStudio has ushered in a world where 1-gig piano samples and libraries that stretch into the terabytes are now commonplace. Having vast libraries and hundreds of voices available in real time all but requires a dedicated CPU, as most serious composers have found out. While GigaStudio was the first to be able to stream samples directly from the hard drive, it is now being joined in the act by Steinberg’s HALion, Emagic’s EXS24 and Native Instruments’ Kontakt. These soft samplers allow users to access the sought-after Giga libraries directly from within the sequencing environment, which raises the question: Why do I need to dedicate an entire computer to run GigaStudio? If you are running a small number of tracks and want to play a few Giga sounds, then the answer is: You don’t. However, consider for a moment what many large productions actually entail. Can we expect one CPU to play back 100-plus audio tracks — each with EQ, compression and effects — and stream 160 voices in real time? I think not.

Many companies are now filling this niche market with custom-built PCs preconfigured to run GigaStudio. Although slightly more expensive than doing it yourself, this approach takes all of the guesswork out of setting up and configuring a system and provides a true source of tech support. In fact, many film composers who were initially resistant to adding a GigaStudio PC now find that they can’t live without the concept of dedicated computer/dedicated task; some even broke down and installed three or four of them! With the advent of VST shells (such as Steinberg’s new V-Stack), PCs can access 16 VST and DirectX plug-ins directly without a host-sequencer application. This will allow CPUs set up to run GigaStudio to double as dedicated soft-synth players. Amazing!

On a simpler level, many users who are upgrading their existing G3 or Pentium III find that the computer they are replacing can serve many uses. The most obvious would be setting it up as a dedicated Internet portal or FTP server, or using it to digitize QuickTime videos. These older computers can also be used to run virtual synths and samplers. If the old CPU can run even a few instances of these and save yet another CPU from going to the junkyard (or better yet, recycling center!), it’s well worth it.

When I suggest that clients set up a second CPU, the response I usually get is, “Great! But where am I going to put yet another monitor?” This is easily solved by the addition of a monitor/USB switcher. The more important question is, “How do I connect them?” If the upgrade includes a new audio card, the answer is obvious: Run the audio output of your old computer into your new one. If the new computer will be receiving the old audio and MIDI interfaces, then there are numerous inexpensive options for the old computer. For the ever-growing Lightpipe connections, I use a patchbay (Apache by Frontier) to connect 24 channels of audio from each of my three computers and my ADATs, saving me major amounts of repatching time.

Synchronization is handled by a DTP (MOTU) or a Sync I/O (Digidesign), which is clocked from the master CPU. An additional benefit of this setup is that it allows transfers between any two sequencers on either Mac or PC platforms. You simply set the sample rate and the start time of the file and roll it over. This resolves a major headache for clients who need to get their mix — intact — into another sequencer. While the OMF-transfer protocol is much improved (and great for transferring raw audio), it still ignores such niceties as EQ, compression and effects busing. If you want to transfer a complete mix between sequencers, the best way to do it is still in real time. (See the sidebar “Case Study 1: The Pro Tools Connection.”)

“That’s all fine and good,” you say. “I know how to set up computers so they can talk to each other. What I really want to know is how a multi-CPU setup will help get clients in the door?”

Fair enough; here’s how I see it: There are tens of thousands of users running DAWs today. Granted, some setups are very basic, but many producers/engineers have been doing their homework and diligently studying their workflow. And a few are the very clients who used to work exclusively in larger studios but have now set up shop in their basements.

While they may love the freedom and power to work at home, most are top-level producers/engineers determined to take their projects to the next level, in particular by tracking and mixing in high-end studios. However, unlike five years ago, they expect to do more than just dump their tracks and start their mix over. They want the ability to interface with big rooms, but they also demand the flexibility to take their files home and continue producing them. These users represent a large untapped market for pro studios — if the pro studio is ready! (See the sidebar “Case Study 2: The Big-Studio Connection” on page 74.)

When I discuss adding a sidecar CPU running Logic, DP3 or Cubase to facilitate this demand, some high-end, digital-based studio owners look at me sideways. Maybe it’s the fear of yet another CPU to deal with, or they worry about the need to train someone in the basics of these different programs. However, in a very real sense, software-based DAWs are the sequel to ADATs and Tascams, both of which required some basic knowledge to synchronize and maintain. I find it ironic that the same studios that added two digital 32-tracks, six DA-88s and five E4s 10 years ago now balk at adding a second CPU. My basic advice — to become compatible with today’s software/hardware formats — is predicated on the same logic as was their original investment in Tascams. It was partly to be productive and stay competitive, of course. But most important, it was to bring in clients.

Production houses can specify which formats their composers work in. But major studios are in the business of catering to all of their clients’ needs. Can they be certain that the next project won’t have composers working on disparate software formats, sending mix files over the Rocket Network? Or dance producers wondering where Acid and ReCycle are? Or what about tomorrow’s jingle date when the composer will need to bring up his Logic file to make some last-minute tweaks, add strings and submit a final mix in Pro Tools? It seems to me that studios need to spend a little more time helping clients get in and out of their doors — on all possible formats.

No doubt, multi-CPU setups require thought and care to operate. But imagine the kind of power and flexibility you achieve when you allow yourself to consider two or three CPUs as free-floating resources that can emulate entire orchestras, build synths, create reverbs and burn CDs. In reality, you probably end up with fewer “computers” in the room than you had two years ago.

Occasionally, as I wait 45 seconds for my CPUs to boot, I wonder if I’ve gone over the edge. But then, I remember (without much fondness) spending 15 minutes before each session aligning my 24-track, replacing caps on my analog console and loading samples. Was the old way ever really as easy as some people remember it, or are we all suffering from intentional amnesia? Maybe punching bass parts for two hours was in some ways inherently more musical than comping together a part in less than 20 minutes. While studio mavens debate this on the Web boards, I can guarantee which option the bass players would pick: the one that lets them sit in the lounge eating donuts!

[Eds.’ note: To learn more, read our Case Studies, below and on page 74.]

Ned Mann goes by the nom de plume of The Digital Doctor.


Case Study 1: The Pro Tools Connection

Saxophonist/producer David Mann (yes, he’s my brother) has been constantly reinventing his studio as his production needs change. Here’s where he stands now:

During the past 17 years, my personal studio has morphed many times — from a Portastudio to an 8-track to 2-inch 16-track, and finally to 24-track. I then joined the digital revolution with ADATs and various consoles, ranging from a Mackie 24×8 to a Yamaha 02R, with Logic Audio handling the sequencing. When I bought my Pro Tools Mix system in 1998, so many things came easily to me for the first time. As a composer, I favor Logic, where I can assemble songs quickly, use loops, create REX files, print out scores, etc. As a producer, I rely more on Pro Tools, with its stability; 64 tracks; great plug-ins; and, most important, its compatibility with other musicians, engineers and studios.

I started off running both Logic and Pro Tools on the same computer, with Logic handling MIDI, software synths and loops and Pro Tools handling the audio and mixing. I had both programs work together as one by locking them up over the IAC bus. However, this was not a perfect solution for me, as I could not use audio loops in Logic (this was pre-ESB system bridge), not to mention that a crash in one program would take the other one down with it.

When I upgraded my computer (yet again!), it dawned on me that I could separate Logic and Pro Tools onto their own CPUs, moving Logic to the new computer and keeping Pro Tools on the older machine. We clocked them together, and they can be started and stopped via MMC from either transport. It’s seamless. It allows me to accept and deliver work in almost any format — a huge time-saver. It also provides unlimited amounts of power: I can use all the DSP power in my new G4 to run the virtual synths (such as EXS24s, Absynth, Stylus, etc.) and still have all the plug-ins and mixing power in Pro Tools! It’s truly amazing. I have added a third computer, a PC running GigaStudio and Acid.

Some people assume that it must be complicated using three computers to produce music. Actually, it’s much simpler and more elegant than my old setup, in which each of my samplers required a hard drive, CD player, etc., and was almost impossible to back up. Now I load one file (albeit, on three separate computers) and I have my entire production recalled, complete with synths, samplers, EQs and automation. I can only wonder what my studio will look and feel like 17 years from now.


Case Study 2: The Big-Studio Connection

Producer/percussionist Randy Crafton is building Kaleidoscope Sound, a 2,000-square-foot commercial facility located near midtown Manhattan. Randy says:

After nearly completing Richie Havens’ last production, we wanted to do final mix tweaks in a larger room. Calling around to find a facility where we could bring the near-final mixes as Digital Performer files was educational at best and profoundly frustrating at worst. The conversations generally went something like this: “I am interested in bringing my Digital Performer files into your room to listen in a different environment and make final-mix adjustments.” Long pause, followed by a few hmmms and errrs, as they quickly realized that they could also charge me to transfer our tracks into their Pro Tools system, and it would work fine (for them). I then had to calmly, yet firmly, remind them that Ray Dillard and I had mixed to the point where we really just needed a day of final tweaks in a new environment. I was very aware that with their plan A, I would lose all of our mixing automation, be forced to commit to all of our edits and comps by merging sound bites and lose all of our plug-ins — i.e., remix the recording.

Their plan B was, “Why not take it to a transfer house and just dump it onto ADAT or Tascam and bring that?” Does it seem strange to anyone else that I would have to transfer everything back to a 10-year-old digital system in order to work in a “state-of-the-art” studio?!

We did eventually find a studio (World Beat in Manhattan) willing to let us bring our entire DP rig into their room and “tweak away.” We got what we wanted and so did they — a booking. We are determined to learn from our negative experience when we build our commercial studio. We believe a commercial facility that provides flexibility of formats (combined with a pro monitoring environment and an arsenal of vintage gear) will be competitive in the New York City market.

Studio A will be based around a Pro Tools|HD3 system and a custom API analog console. A sidecar computer will always be ready to run DP, Logic, Cubase, Nuendo or Reason, along with a PC to run GigaStudio, Acid and VST Instruments. Why run the applications on separate CPUs, instead of turning the Digi hardware over to DP3 or Logic? Stability and flexibility, plain and simple.

In our experience, DP is a very stable platform until you run it under DAE or until you introduce a ReWire scenario with a VST wrapper. Until I have a quad-processor G10 that also answers calls, gets coffee and books sessions, I will always be able to slow down, tax, crash or otherwise compromise a single CPU by the time I am done with a project.

If a client comes in with any of these formats and plugs in a drive, he or she will have access to a room large enough to track a small orchestra and still be able to go home with his or her files — intact and without paying to transfer them.