
Virtual Instruments—Virtually There

WHAT'S MISSING IN SOFT SYNTHS

This month, I expect to be helping to cut the ribbon on a brand-new lab in a brand-new music building at my school. It will be equipped with a dozen Intel iMacs, each loaded with six different software packages for recording and editing music, audio and video. The keyboard at each station will have a dozen or more knobs and sliders, and the synthesizer will be — well, actually there won’t be any of those.

Illustration: Kay Marshall

Five years ago, I wrote in this space that software synthesis was threatening to overtake hardware. Now, at least in the case of this lab, it’s a fait accompli. All of the sound generation in this lab will be in software form.

The all-software studio has become eminently practical for a great number of people in a great number of contexts. Developers ranging from kids working in their bedrooms to multinational corporations are coming up with new virtual instruments and processors for every conceivable purpose, imitating old tools and designing startling new ones. The quality of the sounds and the interfaces ranges from awful to brilliant, and the prices range from zero to, well, still a lot less than you’d pay for a top-tier hardware synth. And the correlation between price and quality is by no means linear.

Format wars are settling down, as developers resign themselves to the fact that they have to release their products in VST, Audio Units and RTAS versions — but no more than that — if they want to effectively cover the market. We’ve got every musical instrument that can be found on the entire planet, as well as just about every instrument that ever could be found on the entire planet. And a lot of instruments from outer space, too.

On the processing side, we now have models of all the great preamps, compressors, EQs, guitar amps, tape heads, stomp boxes and delay/phaser/flangers (with the exception of the late Stephen St. Croix’s amazing Marshall Time Modulator — anyone working on that?), and we have processing that the designers of those classics could never have dreamed of.

A very welcome side effect of the soft synth revolution has been the long-overdue development of sophisticated yet inexpensive MIDI controllers, including decent keyboards with oodles of user-configurable knobs, switches and buttons, from companies such as M-Audio, E-mu, Novation and the impressive Chinese upstart CME, and percussion pads that will set you back only a Franklin or two (although for serious rhythm work, I would never give up my Kat!).

So are we done now? Can we throw away our hardware synths and processors and make everything work inside the box? Putting aside the highly subjective issue of whether software tools sound as good as the hardware they’re designed to replace, the answer, despite my school’s virtual plunge, is still: not quite yet.

First of all, despite the massive increase in computer processor speed, the problem of latency has not entirely been licked. Timing is, of course, something musicians are very sensitive to. Hardware synths are still generally two to three times faster in “local” mode than when they send out a MIDI command to trigger another device. In a soft synth environment, rather than being predetermined (and hopefully minimized) by a hardware engineer, the latency is dependent on a number of factors outside of the designer’s control, including the speed of the audio hardware interface and the size of the computer’s RAM buffer. The larger the buffer, the better the system will behave (fewer disk errors and dropped samples), but latency will be longer. Today’s computers and professional audio interfaces can handle a pretty hefty sonic workload with a buffer of 256 samples — which translates into just less than 6 ms. That’s pretty good, but it’s still two to three times as long as a fast hardware synth being driven by MIDI, and maybe nine times slower than a synth in local mode.
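
For readers who want to check the numbers, here is a minimal sketch of the buffer-size arithmetic, assuming a 44.1kHz sample rate (which the 6ms figure above implies):

```python
# Back-of-the-envelope latency check, assuming a 44.1 kHz sample rate.
SAMPLE_RATE = 44_100  # samples per second

def buffer_latency_ms(buffer_samples: int, sample_rate: int = SAMPLE_RATE) -> float:
    """One buffer's worth of delay, in milliseconds."""
    return buffer_samples / sample_rate * 1000.0

for buf in (64, 128, 256, 512, 1024):
    print(f"{buf:>5}-sample buffer -> {buffer_latency_ms(buf):5.1f} ms")

# A 256-sample buffer works out to about 5.8 ms, the "just less than 6 ms"
# cited above; dropping to 64 samples gets you under 1.5 ms, but at the cost
# of a much heavier load on the computer.
```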

Worse still, latency is not necessarily consistent, and that can really throw off a player. According to computer-music guru Hal Chamberlin, “In a software synth, you’ll probably find that there is a great deal of seemingly random variation in the note start times. A good test is to play several notes and hold them with the sustain pedal, and then measure the time it takes for a new note to be added to the mix. In a hardware synth, those other active voices probably won’t have any effect on the latency of new notes. However, for soft synths, the more voices already playing, the more variation you’re likely to encounter.”
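
To make that test concrete, here is a minimal sketch of how the resulting measurements might be summarized; the numbers below are placeholders purely for illustration, and real figures would have to come from recording the synth's output alongside the incoming MIDI.

```python
import statistics

def summarize_latency(measured_ms: list[float]) -> str:
    """Report the mean delay and its spread (the jitter a player feels)."""
    return (f"mean {statistics.mean(measured_ms):.1f} ms, "
            f"jitter (std dev) {statistics.stdev(measured_ms):.1f} ms, "
            f"worst case {max(measured_ms):.1f} ms")

# Placeholder note-on-to-sound delays, taken while more and more voices are
# held with the sustain pedal (illustrative values only, not real data).
print(summarize_latency([6.2, 7.8, 6.0, 9.4, 11.1, 8.3]))
```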

Another problem is that computers are simply not built to take the abuse that musical instruments are. A lot of performers who work with computers travel with two for just this reason. If a synth module fails while you’re on the road, you can often find a music store in town that can lend or rent you one. If your computer fails, maybe you can run over to Best Buy and get a replacement, but how are you going to reload and reconfigure all of your software in time for the gig?

A few companies have attempted to solve this by making computers that look and behave like musical instruments. Open Labs’ Neko and Miko workstations cram a Windows XP PC, a touchscreen, and alphanumeric and music keyboards into a single box. Some alumni of Eventide Clockworks started a company not long ago called Manifold Systems to build Plugzilla, a PC-style computer in a roadworthy, two-unit rackmount box that could host up to eight VST modules — instruments and processors — at a time. Despite a favorable review by Michael Cooper in the March 2005 Mix, it never took off and the company has shut down.

The most successful of these solutions, it seems, is Receptor from Muse Research, a company founded by veterans of Opcode, Passport Designs and E-mu. Receptor, like Plugzilla, is basically a PC in a relatively rugged 2U metal box. It uses an AMD processor running under a custom Linux-based operating system, which has none of the overhead that a run-of-the-mill Windows machine has, and thus avoids the latencies that multitasking normally introduces on a general-purpose PC: “We think of it as an instrument, a platform for plug-ins, not a computer,” says VP of marketing Bryan Lanser. “Linux lets us create our own interrupt routines and decide what we want to listen to. There are no e-mail ‘dings’ or printer queues, and we can organize the video, disk access and keyboard scanning routines so they don’t get in the way of the sound-generating processes.”

Scott Shapiro, a busy New York composer, has become a big fan of Receptor. Although the box only has two analog outputs, it has an ADAT Lightpipe port, and Shapiro uses that to send the signals into his Pro Tools rig. “It’s never crashed,” he says. “I just leave it set up with eight or nine plug-ins, and when I make changes, I rename the template and save it. It’s all recallable, instantly. I don’t have to worry about having the right bank of sounds loaded in my synths. I remember the days with racks of gear when you’d go away, and when you came back to it, everything would sound different. But now with Receptor integrated into a Pro Tools session, the instruments, the levels and the EQ are exactly the same every time.”

Receptor has slots for 16 virtual instruments, and for those who need more, Muse Research has developed Uniwire, a protocol for connecting multiple Receptors to the host and to each other using Ethernet cable to carry both MIDI and audio. Uniwire has one drawback, however: It adds latency to the system. Because the signal travels in two directions — MIDI and clock out and audio in — the latency is, in fact, two times the host computer’s buffer setting, and the more boxes you add, the higher it gets. “When people are tracking,” explains Lanser, “they turn Uniwire off and track using direct MIDI I/O, so there’s no latency in that direction, and then when they are rendering and mixing the tracks, they turn it back on. Then the host sequencer’s automatic delay compensation kicks in, so everything sounds right.”
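
The round-trip arithmetic is easy to check; here is a rough sketch, again assuming a 44.1kHz sample rate:

```python
# MIDI and clock go out over Ethernet and audio comes back, so one buffer of
# delay is paid in each direction. Assumes a 44.1 kHz sample rate.
SAMPLE_RATE = 44_100

def uniwire_round_trip_ms(host_buffer_samples: int) -> float:
    """Round-trip delay: twice the host's buffer setting, in milliseconds."""
    one_way_ms = host_buffer_samples / SAMPLE_RATE * 1000.0
    return 2 * one_way_ms

for buf in (128, 256, 512):
    print(f"host buffer {buf:>4} samples -> ~{uniwire_round_trip_ms(buf):4.1f} ms round trip")

# At a 256-sample buffer that's roughly 11.6 ms, and chaining additional
# Receptors pushes it higher still, which is why Lanser suggests turning
# Uniwire off while tracking.
```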

But there’s one more issue with virtual instruments that is being overlooked by a lot of developers, both of the plug-ins and their hosts. It’s a two-pronged problem, so it takes a little explaining.

When MIDI-controllable hardware signal processors, such as the Lexicon PCM 70 and the AKG ADR-68K, first appeared, a lot of their appeal was that they allowed the user to map any MIDI signal to any processing parameter: A modulation wheel might control a reverb’s RT60, while a foot pedal could adjust a filter’s center frequency. You could even use MIDI notes this way: One of my favorite patches on the ADR-68K used MIDI note numbers to control the delay time of a flanger, so you could literally play the comb filter effect from a keyboard. Today, most high-end keyboard synths with onboard processing let you do the same thing.
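
To make that keyboard-played comb filter concrete, here is a hypothetical sketch of one way such a mapping could work, not necessarily how the ADR-68K did it: set the flanger's delay to one period of the played note, and the comb filter's resonance lands on that pitch.

```python
# Hypothetical note-to-delay mapping for "playing" a flanger's comb filter.
# A comb filter with delay T reinforces frequencies at multiples of 1/T, so
# one period of the note's frequency is a natural choice of delay time.

def note_to_frequency_hz(note_number: int) -> float:
    """Equal-tempered pitch: MIDI note 69 = A440."""
    return 440.0 * 2.0 ** ((note_number - 69) / 12.0)

def note_to_delay_ms(note_number: int) -> float:
    """Delay time equal to one period of the note's frequency."""
    return 1000.0 / note_to_frequency_hz(note_number)

for note in (48, 60, 69, 72):  # C an octave below middle C, middle C, A440, C above middle C
    print(f"MIDI note {note:2d}: {note_to_frequency_hz(note):6.1f} Hz -> "
          f"delay {note_to_delay_ms(note):5.2f} ms")
```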

Some soft synths, like Reason, have MIDI controller numbers assigned to just about every knob and switch in the interface. In Receptor, the operating system lets you map up to 16 incoming MIDI controllers to each plug-in’s parameters. (“I don’t think anyone needs any more than that,” says Lanser.) Other virtual synths, like some of the instruments bundled with MOTU’s Digital Performer, let you configure them according to your own needs using a MIDI Learn feature.
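
For anyone who hasn't used it, MIDI Learn is conceptually simple. Here is a minimal, hypothetical sketch of the idea, not any particular product's implementation: arm a parameter, and the next controller message that arrives gets bound to it.

```python
# Hypothetical MIDI Learn sketch: arm a parameter, bind it to the next
# controller number that arrives, then let that controller drive it.

class MidiLearnMap:
    def __init__(self):
        self.bindings = {}          # controller number -> parameter name
        self.armed_parameter = None

    def arm(self, parameter_name: str) -> None:
        """User clicks 'learn' next to a parameter in the plug-in's UI."""
        self.armed_parameter = parameter_name

    def handle_cc(self, controller: int, value: int) -> None:
        """Called for every incoming MIDI control change (values 0-127)."""
        if self.armed_parameter is not None:
            self.bindings[controller] = self.armed_parameter
            print(f"Learned: CC {controller} -> {self.armed_parameter}")
            self.armed_parameter = None
        elif controller in self.bindings:
            name = self.bindings[controller]
            print(f"{name} set to {value / 127:.2f}")  # scaled to 0.0-1.0

# Example: bind the filter cutoff to whichever knob the user moves first.
midi_map = MidiLearnMap()
midi_map.arm("filter cutoff")
midi_map.handle_cc(74, 100)   # first move binds CC 74 to the cutoff
midi_map.handle_cc(74, 64)    # later moves drive the parameter
```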

But developers who work in the VST or Audio Units environments don’t usually include that functionality. Instead, they rely on the specifications’ requirement that a plug-in “publish” its various parameters so that a host program can access them by name, and they leave it up to the host program to establish communication with the plug-in. Unfortunately, some host programs don’t handle this as well as they might. In the case of Digital Performer, when you are using plug-ins from other manufacturers, you can draw in parameter changes in the sequence editor, but if you want to play the parameters live, you need to use the software’s Console function, which lets you create virtual knobs, sliders and buttons, and assign their inputs and outputs to MIDI controllers. Missing from the console’s output list, however, are the plug-in parameters. In other words, though you can automate parameters in a virtual instrument or processor, unless they can be assigned a specific MIDI controller, you can’t play them. And if you can’t play with the knobs, it’s not much of an instrument.
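
The missing piece amounts to a simple routing layer in the host: the plug-in publishes its parameters by name, and the host lets you point any MIDI controller at any of them. Here is a hypothetical sketch of the idea; none of the names below refer to a real plug-in or host API.

```python
# Hypothetical host-side routing from MIDI controllers to the parameters a
# plug-in publishes. Parameter names and values here are invented examples.

published_parameters = {          # as reported by an imaginary plug-in
    "Reverb Time": 0.50,          # normalized 0.0-1.0 values
    "Filter Center Freq": 0.30,
    "Flange Depth": 0.80,
}

cc_routing = {1: "Reverb Time", 11: "Filter Center Freq"}  # mod wheel, expression pedal

def on_midi_cc(controller: int, value: int) -> None:
    """Host callback: forward a controller move to whatever it's mapped to."""
    name = cc_routing.get(controller)
    if name is not None:
        published_parameters[name] = value / 127  # scale 0-127 to 0.0-1.0
        print(f"{name} -> {published_parameters[name]:.2f}")

on_midi_cc(1, 90)    # a mod-wheel move now plays the reverb time live
```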

The other side of the issue is that when the DSP becomes sophisticated enough, as it certainly is in many plug-ins, the distinction between an “instrument” and a “processor” gets extremely fuzzy. The other day I heard on the radio David Bowie’s 1975 hit “Fame,” which features that wonderful descending scale of his voice singing the title over two octaves. That was done with an Eventide H910 Harmonizer. Today, would we call that a processor or an instrument? In the software world, developers often define these categories by the kind of MIDI data they respond to: Instruments primarily receive notes and some controllers, while processors only receive controllers. But these distinctions impose limits that maybe shouldn’t be there. GRM Tools has a terrific 31-band equalizer that the company boasts could be a “performance instrument.” Yes, it could — but the best way to do that would be to let the user play it from a keyboard, and as far as I can figure out, there is no host that can make that happen.

These aren’t difficult obstacles to overcome, but they are important ones. So while the changeover to the all-virtual studio has moved along quite a bit in the past five years, it isn’t quite finished. There are still some lessons that software designers can learn from the hardware world having to do with flexibility, playability and the fact that, as one memorable (but unsuccessful) instrument developer in the late ’80s put it, “Real time is not negotiable.”

Paul Lehrman teaches music technology at Tufts University. A collection of his writings, The Insider Audio Bathroom Reader, is now available from Thomson Course PTR, mixbooks.com and insideraudio.com.
