Is It Time to Go Soft?

Following up on last month’s column (which you can read at mixonline.com), I’ve been looking heavily into software synthesis. This semester, I built a new studio for students that’s largely based on soft synths, so I’ve been trying a bunch of them out. I also heard a really interesting lecture on the subject by my friend Michael Bierylo, who teaches at the Berklee College of Music, where students are getting into software-based synths.

It would seem that a lot of composers, and educators, are interested in getting rid of their synth hardware. Why? Pretty much the same reasons so many mixers and post-production engineers are moving toward host-based DSP: It’s cheaper, it’s portable, and it’s easy to upgrade and update. Even more significant in the synthesis world, however, is that software-based synths let you see everything you’re doing, which isn’t the case with most hardware instruments.

No one would tolerate a tape deck that only lets you look at the level of one track at a time, or a console that forced you to dive down through a bunch of menus to tweak the EQ on a channel. (Although this wasn’t always true — anybody remember the Yamaha DMP7?) Hardware synths, with few exceptions, force you to view their world through a tiny LCD window, a few parameters at a time. To get the overall picture of a sound requires a computer and dedicated editing software, not to mention someone to write the code. Because a modern computer can do all of the DSP to actually create the sound anyway, why not forget the hardware synth entirely?

There are other advantages to going soft. Unlike some hardware synths that promise “open architecture,” programs like Reaktor and Reason let you build just about any type of synthesizer or studio configuration you want. Adding more modules doesn’t take up any additional space on your rack or patchbay, so you can keep dozens of the things on hand, whether you use them every day or once a year. If you want to try a new software product, you don’t have to convince some store’s assistant manager to let you borrow a keyboard for a week; just download the demo version from the developer’s Website and fool around with it for as long as you like. (It would be nice if more of those trial versions allowed patches to be saved — some of those early experiments you do before you buy the thing would be instructive to go back to and use later on.)

And, of course, there’s cost, both for the initial investment and upgrading. Software synths and samplers go for anywhere between $20 and $700, while hardware synths start at the latter figure and go north from there. When a new version of a software module comes out, you can usually download it and pay just a small fee. There’s no need to buy a new EPROM, open up the keyboard, pull out the old EPROM, and then wrestle with a grounding strap while you pop in the new one, ever watchful that you don’t bend any of the pins.

So are there drawbacks? Yes, there are, and most of these revolve around the fact that you’re putting all of your synth eggs, as it were, in one basket. And that basket is not infinitely large, or infinitely flexible, or even remotely bulletproof.

Computers, of course, crash. When you’re running an audio or MIDI sequencer and the computer crashes, you lose everything you’ve done since you last saved the session file. If you’re careful, that should be just a few minutes’ worth of work. (I make it a rule for my students to save their work every 15 minutes — a rule that would save me a lot of time if I followed it myself!) The rest of your studio — the synths, the processors, the console, the hard drives with the raw audio — doesn’t go down with the computer (you hope). But if you’re running, say, five different synthesis programs simultaneously, then you have to have the discipline of a Trappist monk to remember to save all those files (which, of course, you have been constantly tweaking) every few minutes, and to be able to deal with the disruption to your creative flow as your brain switches from left-dominant to right-dominant.

You could automate the process so you don’t have to interrupt yourself, but then you lose control over what gets saved — and in my experience, knowing when to “lock down” a patch and when not to is really an integral part of the creative process. I’d hate for a macro program to take that away from me.
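As a minimal sketch (in Python, purely for illustration), here is what such an automatic save timer might look like; the save_all_open_patches() routine is a hypothetical stand-in for whatever save command each program actually provides:

```python
import threading

SAVE_INTERVAL_SECONDS = 15 * 60  # the 15-minute rule mentioned above

def save_all_open_patches():
    # Hypothetical hook: call each running synth program's own save routine here.
    pass

def autosave():
    # Save everything, then re-arm the timer so the save recurs in the background.
    save_all_open_patches()
    threading.Timer(SAVE_INTERVAL_SECONDS, autosave).start()

autosave()  # starts the cycle; note it saves blindly, whether a patch is "locked down" or not
```

The catch is right there in the last comment: the timer has no idea which versions of a patch you actually wanted to keep.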

Every time Apple or Microsoft comes out with a new operating system, they promise that “This one doesn’t crash!” because it uses protected memory or some other mechanism. And perhaps if all you’re doing is database management, Web serving or desktop publishing, it can be pretty stable. But the kind of developers who are drawn to writing software synths are always looking for ways to push the edge of the envelope for every operating system they encounter, and they don’t always strictly follow the rules. You can be sure that some day, something those folks do is going to cause OS X or Windows XP (or OS XXXI or Windows XZ.2100) to crash.

In addition, software synths have to work and play well with other applications on the same computer, which might include video, graphics, communications or other time-sensitive tasks. Even if your computer is running twice as fast as the one you had last year, the video program you’re using (because video software developers like to take advantage of faster processors, too) is probably using up twice as many CPU cycles, so you actually haven’t gained as much as you might like.

And besides competing with each other for CPU time, these programs all have their own peculiar way of interacting with the operating system, and those ways may not all be friendly with each other.

Another issue that comes up when you have multiple synthesis programs on one platform is how they talk to each other and to other audio programs. In the hardware world, there are standard analog and digital audio cables for wiring things together, and usually the hardest issue you have to deal with is which device is the clock master. In the computer-based studio, there are a host of different ways to pass audio among synths, sequencers, mixing programs, sound cards and audio interfaces, like (to name just a few) Steinberg’s VST and ASIO, Digidesign’s DirectConnect and Propellerheads’ ReWire. Which of these you use depends on what kind of hardware you’re running, and whether you have a “host” program that demands you use a certain protocol. Synth-module developers tend to include drivers for as many of these different systems as they can, so as not to discourage any potential purchasers, but some, like Seer Systems and Nemesys, have their own protocols.

While dealing with audio routing can get pretty clumsy, the MIDI side is even more fussy. On the Mac, the most common way to pass MIDI from a controller or sequencer to a soft synth is to use OMS’s InterApplication Communication (IAC) protocol. But OMS, itself, is no longer supported, because Opcode, the company that created it, effectively no longer exists. OMS doesn’t work at all with Apple’s OS X, and the promised “MIDI services” that OS X is supposed to have are still not available. Mark of the Unicorn’s FreeMIDI has never particularly caught the attention of other developers, but it does work with most applications — when it is running in “OMS Emulation” mode!

The scene on the Windows side is slightly less confusing: Freeware programs such as MIDI Yoke can handle MIDI connections between software modules, but there is still no standard Microsoft-blessed way to hook them up.

Then there’s the question of the user interface. The big trend in hardware synths these days is knobs, knobs and more knobs; the “analog” model of “tweak a knob and something cool happens” is back. Even though it sometimes seems that these knobs are being used mainly to do resonant filter sweeps and little else, live interaction with synthetic instruments is always to be applauded — if for no other reason than it can restore the spontaneity missing from so much of what passes for music these days.

But where are the knobs on software synths? There are plenty of onscreen controls, but they’re just virtual — you’re still stuck with a mouse as your primary input device, which is an even clumsier way to interact with a musical instrument than it is with a mixing console. There are a few general-purpose MIDI controllers available that lend themselves to this use, like Keyfax’s Phat.Boy, Midiman’s Oxygen 8 and Kurzweil’s ExpressionMate, but surprisingly few of the software synth packages make it easy, or even possible, to customize how incoming messages are mapped to synth parameters. Koblo’s Tokyo synth, for example, allows for MIDI control on most parameters, but the mappings are fixed — which means that the Phat.Boy, where output mappings are also fixed, wouldn’t be able to do much. At the other extreme, Native Instruments’ Dynamo allows any controller number to be mapped to any parameter, but it doesn’t respond at all to keyboard aftertouch!
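For illustration only, here is a minimal sketch of the kind of user-definable controller mapping being described. The controller numbers and parameter names are made up, not any particular product’s actual assignments:

```python
# A user-editable map from incoming MIDI controller numbers to synth parameters.
CC_MAP = {
    74: "filter_cutoff",
    71: "filter_resonance",
    1:  "lfo_depth",   # mod wheel
}

synth_params = {"filter_cutoff": 0.5, "filter_resonance": 0.2, "lfo_depth": 0.0}

def handle_control_change(cc_number, value):
    # Route an incoming CC message (value 0-127) to whatever parameter the user assigned.
    param = CC_MAP.get(cc_number)
    if param is not None:
        synth_params[param] = value / 127.0  # scale to the 0.0-1.0 range used internally

handle_control_change(74, 96)  # e.g., a knob on a general-purpose MIDI controller
```

The point of the sketch is the editable table at the top: with fixed mappings on both the controller and the synth, there is no such table for the user to touch.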

Perhaps the biggest issue confronting software synthesis is latency: how fast the module responds to incoming MIDI data. There is always going to be a delay between a key strike (or string pluck, or blowing into a tube, etc.) and the sounding of a note, and it’s an important consideration in any musical instrument design, electronic or acoustic. Aural feedback is crucial in producing real-time, musical-sounding tracks, and if the feedback is delayed too long, then it feels unnatural.

Granted, some delay is often expected. Woodwind players know that there will be a brief interval between when they start to blow and when the sound comes out. Furthermore, that interval will be different for different pitches, because low notes take longer to speak. So they adjust their playing accordingly, perhaps starting the first note in a phrase a little early.

Even keyboard players expect some tiny delay. There was a story going around in the early ’80s about what happened when Yamaha showed a prototype of the DX7 to some French pianists. “The keyboard is too soft,” they said — and, of course, compared to what they were used to playing, the non-weighted keyboard of the DX7 must have felt pretty wimpy. But when Yamaha engineers changed the velocity curve, making it a little harder to play, and added a slight delay to the attack, the pianists pronounced the action much more to their liking — although physically it hadn’t changed at all.

What causes latency in a software system? Simply that the computer is not infinitely fast; it has to read the incoming MIDI note, send it down a virtual wire to the synth, calculate the waveform or drag it out of RAM or off the disk, and send it to the sound card. At the same time, it has to refresh the screen, poll the computer keyboard and run whatever other programs are in use. Just the overhead needed by the operating system can be a significant drain on the process. And, obviously, the more things you have going, like video, hard disk audio or even timecode, the worse the problem is going to be.

The fastest theoretical response of a hardware MIDI synth is about 1 millisecond: that’s how long it takes a 3-byte MIDI command to be received. If you consider that the speed of sound is about 1 foot per millisecond, then that much of a response delay means that the instrument will sound like it’s one foot away from you. Longer delays are common (a synth’s onboard CPU isn’t infinitely fast either), but like woodwind players, keyboard players learn, in most cases almost instinctively, to adjust. In the hardware-synth world, response times of up to 10 or even 15 ms are not unheard of.
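The arithmetic is easy to check: a standard MIDI cable runs at 31,250 bits per second, and each byte is framed by a start and a stop bit, so a 3-byte note-on occupies 30 bits on the wire. A quick calculation (shown in Python, purely for illustration):

```python
MIDI_BAUD = 31250                 # bits per second on a standard MIDI cable
BITS_PER_BYTE = 10                # 8 data bits framed by a start bit and a stop bit
SPEED_OF_SOUND_FT_PER_MS = 1.13   # roughly 1 foot per millisecond at room temperature

def midi_message_ms(num_bytes):
    # Time to transmit a MIDI message of the given length, in milliseconds.
    return num_bytes * BITS_PER_BYTE / MIDI_BAUD * 1000

note_on = midi_message_ms(3)      # a note-on is status + note number + velocity
print(f"3-byte note-on: {note_on:.2f} ms")                                  # about 0.96 ms
print(f"equivalent distance: {note_on * SPEED_OF_SOUND_FT_PER_MS:.1f} ft")  # roughly a foot
```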

A delay of more than about 20 ms, however, will feel clumsy to just about anybody. As the designer of one pioneering late-’80s software-synthesis system put it, “Real time is not negotiable!” (His product, sad to say, never made it out the door.) And if there are other tracks that need to play in sync with the synth tracks, like from hardware synths or internal or external hard disk audio, then one or more of the tracks have to be offset to compensate for the delay. Some programs can figure out how to do this automatically, but even they are not going to work in all situations.
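The compensation itself is conceptually simple. A toy sketch, assuming you have already measured the synth’s latency and that each track is stored as timestamped events (both the 12 ms figure and the event format below are assumptions for illustration):

```python
def compensate(events, synth_latency_ms):
    # Shift a track's events later so they line up with a soft synth whose notes
    # actually sound synth_latency_ms after they are triggered.
    return [(t + synth_latency_ms, data) for t, data in events]

hardware_track = [(0, "kick"), (500, "snare"), (1000, "kick")]
aligned = compensate(hardware_track, 12)   # assume roughly 12 ms of measured synth latency
```

The hard part, of course, is knowing what number to feed it, which is exactly what varies from system to system.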

It gets much worse if the response time of the synth changes. If the latency is not consistent, then no matter how good your fingers are at compensating, it’s just not going to come out right.

In software synths, latency can be made constant with good programming, but it takes a lot of horsepower — both CPU speed and RAM — to get it down to really low levels. Most programs allow you to set up a buffer in RAM to help keep the load on the CPU down. But there’s a trade-off: The larger the buffer, the higher the latency.

A buffer is just that: a place where the samples can be loaded before they are spit out. A large buffer means that the CPU doesn’t have to be constantly generating audio data, but can “batch” the samples and attend to other chores according to its own schedule. But filling up a buffer takes time, and that’s where the latency can get larger.
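The trade-off is easy to quantify: the buffer’s contribution to latency is simply its length divided by the sample rate. A quick illustration, assuming a 44.1kHz sample rate:

```python
SAMPLE_RATE = 44100   # samples per second

def buffer_latency_ms(buffer_samples):
    # Time it takes to fill (and therefore to begin hearing) one audio buffer.
    return buffer_samples / SAMPLE_RATE * 1000

for size in (128, 256, 512, 1024, 2048):
    print(f"{size:5d} samples -> {buffer_latency_ms(size):5.1f} ms")
# 128 samples is about 2.9 ms; 2048 samples is about 46 ms, well past the
# roughly 20 ms threshold mentioned above.
```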

If you lower the buffer size, then it fills up faster, but now the CPU has to pay more attention to how tightly it’s generating the audio data. The result is that other tasks — most notably streaming audio from the hard disk — have less CPU time available to them and can get literally choked off. Those who have tried to integrate software synths into hard disk audio environments are all too familiar with the dreaded “Your CPU isn’t fast enough” message when playing four tracks, even though the same machine can play 12 or 16 tracks perfectly when it is just doing audio. (Michael Bierylo has done some interesting experiments in this area, which you can see at http://people.berklee.edu/~mbierylo/MarkWorld_2001/markworld5.html.)

As Michael says, “There’s a lot of smoke and mirrors” when it comes to software-synth performance. There are, as of yet, no standards, no benchmarks, no way to really measure one program or system against another. Worse yet, because different modules are “optimized” for different operating environments, there can be a big change in the way a given module behaves on different platforms: In a TDM host, for example, with its dedicated processing hardware, a software synth may be much speedier than the same software operating as a plug-in to a native VST program.

Finally, there’s the question of obsolescence: What happens when the platform your favorite software synth runs on no longer exists? One of my favorite all-time electronic-music teaching programs is Turbosynth, from Digidesign. Although it looks crude by today’s standards, Turbosynth actually has a lot of very cool features (like evolving wavetables) that few other developers have put into their much-snazzier packages. About two years ago, I got what may have been the last five copies out of their warehouse to use in my classroom. It runs on OS 9, but I very much doubt it will work on OS X, so in a year or two I’m going to have to find something else to demonstrate the basic properties of sound.

At least with hardware synths, if you really need a particular model, then you can usually find one somewhere. Dozens of concert music composers wrote pieces throughout the ’80s and into the ’90s with a DX7 as part of the ensemble, and although the thing has been out of production for over 15 years, finding one for a performance is not hard.

Here’s an example from my very own Closet of Obsolete Technology. There are two products in there that were manufactured the same year: a Casio CZ-101 and a floppy disk containing a wavetable-editing program I wrote for the alpha Syntauri computer music system. The Casio cost about $600. If I take it out of the closet (and assuming I can find the power supply), it can still do everything it always did, despite a couple of broken keys. And if I want to use those sounds in my studio, I can just hook up MIDI and audio cables, and I’m in business.

The alpha Syntauri cost about $3,000, including the Apple II+ I had to buy to run it. I long ago sold the Apple II, and now the only place you’re likely to find one of those is in an under-funded public school classroom. Even if I could get the computer, and the special audio cards that were needed (whose manufacturer is long gone), and a monitor, and a 5.25-inch disk drive, and the dedicated 61-note keyboard, I still couldn’t use the synthesizer with any other piece of hardware, because the company died before they could finish implementing MIDI into the system.

So we’re still a ways away from being able to ditch all of our synth hardware. But that doesn’t mean it won’t happen. Many of these issues would be straightened out “if,” as Michael Bierylo puts it, “manufacturers agreed on a standard architecture. Then one could buy a computer, software and third-party DSP from whomever they like, and configure and scale their system according to their needs and budget.” To which he adds, “In your dreams, pal!”

To which I add, “Stranger things have happened!” It’s a worthy goal. Let’s see if there’s enough cooperation and communication within the software-synthesis community to make it happen.

Paul D. Lehrman is a composer, teacher, consultant and Mix’s Web editor. In his spare time, he likes to rummage through his closet looking for old stuff that still works.
