Composers have long been fascinated with using electronic instruments to simulate real orchestras. And now, as music budgets for films and television programs shrink, more producers and composers are looking for computer tools that can do the job of performing orchestral scores, and can do it better. The success of high-priced orchestral sample libraries that can fill the better part of a terabyte drive is testament to this approach’s popularity. Still, the number of people who can use these libraries to their fullest potential remains pretty small.
Despite the incredible amount of thought and engineering that has gone into them, most sample libraries require a long learning curve, careful resource and disk-space management, and even more of the kind of left-brain/right-brain split that can interfere with creative flow. And that’s not to mention the sheer time and expense involved in upgrading when the next set of sample disks is released.
As a one-time orchestral player and a composer who leans toward classical sounds in my film scores, I’m always on the lookout for tools that will let me put human-style expression into electronic performances. Thanks to a new program from a tiny company that is beginning to get some serious attention, my goal may be much closer. The company is called Synful, and the product — which is just the first of what is hoped will be an extensive line — is Synful Orchestra. In its earliest incarnation, which came out at the end of 2004 for both Mac OS X and Windows, Synful Orchestra was capable of playing standard orchestral sounds — strings, woodwinds and brass — but only solo instruments; there were no string or horn sections. Early this year, though, a new version was released that includes string sections, and it takes the program to a whole new level.
Synful’s chief cook and bottle washer, Eric Lindemann, is also a pianist and an experienced ensemble player, and he has obviously been thinking long and hard about this issue. Lindemann’s résumé reads like a history of late-20th-century music: He toured with the 5th Dimension, studied composition with Olivier Messiaen, played on the soundtrack of the first Star Trek movie, spent time at IRCAM with Pierre Boulez working on computer music engines and helped design the LinnDrum and the WaveFrame. He’s also designed console automation systems and hearing aids, and has his name on some 15 patents, three of which are integral to his current project.
Synful Orchestra, which comes in the form of a plug-in for Audio Units-, VSTi- and DXi-compatible hosts, is not a huge set of samples. In fact, there are no samples at all; the whole program is only 58 megabytes, which means anyone with a decent Internet connection can download a whole new version in a few minutes. But the sounds are derived from real instruments, and recording a wide variety of human orchestral players was a crucial step in Lindemann’s design. Rather than playing back sampled recordings, however, Synful Orchestra uses the recordings as models to construct instrumental sounds using a form of additive synthesis.
“It’s analysis resynthesis using sine waves,” Lindemann explains. “You need to synthesize about 100 harmonics on each sound to cover the whole audible range. Each harmonic has a time-varying amplitude envelope, and the envelope segments are computed about every 10 milliseconds. I call it ‘additive coding.’ Like MP3, it’s a way of coding the sound to reduce it in size. Representing a sound using time-varying harmonics is smaller than a PCM recording by a factor of 50.” Lindemann also uses controllable amounts of noise in the sounds — both on the attack transients and sustained segments — to simulate bowing and breathing. And because the harmonic and envelope calculations are independent of the pitch, there’s no need to build a complete model of every different note on a given instrument.
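For readers who want to see the shape of the idea, here is a rough Python sketch of additive resynthesis as Lindemann describes it: each harmonic is a sine wave shaped by its own time-varying amplitude envelope, with breakpoints spaced 10 milliseconds apart. The data layout and function names are my own illustration, not Synful’s code.

```python
import math

SR = 44100      # sample rate, in Hz
FRAME = 0.010   # one envelope breakpoint every 10 ms, per Lindemann's description

def envelope_value(breakpoints, t):
    """Linearly interpolate one harmonic's amplitude envelope at time t.
    breakpoints: a list of amplitudes, one every FRAME seconds."""
    pos = t / FRAME
    i = int(pos)
    if i >= len(breakpoints) - 1:
        return breakpoints[-1]
    frac = pos - i
    return breakpoints[i] * (1 - frac) + breakpoints[i + 1] * frac

def additive_note(f0, harm_envs, duration):
    """Resynthesize a note as a sum of sine-wave harmonics, each shaped
    by its own envelope (a hypothetical layout: harm_envs[0] is the
    fundamental's envelope, harm_envs[1] the second harmonic's, etc.)."""
    n = int(duration * SR)
    out = [0.0] * n
    for h, env in enumerate(harm_envs, start=1):
        freq = f0 * h
        if freq > SR / 2:   # stop below the Nyquist limit
            break
        for s in range(n):
            t = s / SR
            out[s] += envelope_value(env, t) * math.sin(2 * math.pi * freq * t)
    return out
```

A full instrument would carry on the order of 100 such envelopes per note, which is still far less data than a PCM recording of the same sound.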
The models were built from hundreds of recordings, but the players in Lindemann’s studio didn’t come in and methodically play single notes. Instead, they performed whole phrases from the orchestral literature. “If you ask people to play individual notes or even intervals, you get mechanical, not expressive playing,” he says. “If they play real passages, they play expressively. I take the passages and break them down, and annotate and categorize them by descriptors — what note, what interval, what kind of dynamic shape and articulation is being played — and put them into a database.
“I extract from the database a rule for generating a good basic timbre based on pitch and loudness,” he continues. “Then what is actually stored in the database are fluctuations around that basic timbre. When the program searches the database, it looks for a sequence of fluctuations appropriate for a certain note. It may not find exactly what it needs in the database, so it has a method of searching for the best match based on weighted criteria, which it then hammers into shape.”
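The search Lindemann describes can be sketched as a weighted nearest-neighbor lookup. The descriptors and weights below are invented for illustration; the only thing we know from his description is that candidates are scored on weighted criteria and the winner is then reshaped to fit.

```python
# Hypothetical descriptors and weights -- Synful's actual criteria
# aren't published, only that matches are scored by weighted distance.
WEIGHTS = {"pitch": 1.0, "interval": 3.0, "dynamic": 2.0}

def match_score(candidate, target, weights=WEIGHTS):
    """Weighted distance between a stored fragment's descriptors and the
    note the engine needs right now (lower is better)."""
    return sum(w * abs(candidate[k] - target[k]) for k, w in weights.items())

def best_match(database, target):
    """Return the stored fragment whose descriptors best match the target.
    The real engine would then 'hammer it into shape': pitch-shift it and
    rescale its envelopes to fit the requested note exactly."""
    return min(database, key=lambda rec: match_score(rec, target))
```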
Another unique aspect of the program is the emphasis on the transitions between notes. “A lot of the character of the instrument is the nature of the transitions,” he says. “For each instrument, there’s a vocabulary of transitions. When an oboe player leaps a major seventh, for example, all of the work of playing the second note actually occurs in the first note. That’s what I want to capture. So when I look in the database, I’m looking for the right transition. I’m looking from the middle of one note to the middle of the next note.
“It took me several years of research to get to the point where I could stitch together these pieces so that they sounded coherent. Because you get different timbres from different recordings, you have to smooth them out so that they sound consistent. You have to scale the intensity of attack. I’m still learning about how big the database has to be.”
Synful’s use of MIDI expression also helps make the sounds expressive. Volume is treated like a fader on a mixing board, but expression changes not only the amplitude, but also the harmonic content of a note as it sustains, the way bowing or blowing harder or softer on a real instrument does. MIDI velocity also changes the timbre, but its effect is concentrated in the attack portion: “Raise the velocity, and I will find something in the database that will have a sharper attack,” explains Lindemann.
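A toy illustration of the distinction: volume scales every harmonic equally, like a fader, while expression might attenuate the upper harmonics faster as you back off, darkening the tone the way gentler bowing does. The particular curve below is my assumption, not Synful’s actual mapping.

```python
def apply_expression(harmonics, expression):
    """Scale a list of harmonic amplitudes by MIDI expression (0-127).
    Higher harmonics fall off faster as expression drops, so the tone
    darkens as well as quietens -- an assumed curve, for illustration.
    A plain volume fader would instead multiply every entry by the
    same factor, leaving the timbre untouched."""
    e = expression / 127.0
    return [a * e ** (1 + 0.05 * h) for h, a in enumerate(harmonics)]
```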
Vibrato is another factor that the player can control. “Vibrato is stored in the recordings,” says Lindemann, “but unlike a sampler, I can isolate the natural vibrato and bring it in and out with a modulation wheel.” Right now, only vibrato depth is under user control, but Lindemann hopes to also tackle speed in a future release.
Another interesting way that Synful takes advantage of MIDI is a special mode for dealing with pitch-bend information to create portamento, or glides. In a standard MIDI instrument, portamento is specified as a rate: how many half-steps the pitch glides in a given unit of time, regardless of the starting or ending notes. But real instrumentalists could never be restricted to a fixed slope like that; each time they use portamento, the speed is likely to be different. So when Synful Pitch Wheel mode is enabled and the program detects two notes played legato, MIDI pitch-bend commands change the pitch of the first note while leaving the second note unaffected.
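In pseudocode, the behavior just described might look like the following. This is a toy model of the routing, not Synful’s implementation: incoming pitch bend is steered to the earliest still-sounding note, so the glide lives inside the first note of a legato pair.

```python
class PitchWheelMode:
    """Toy model: in legato playing, pitch bend glides the earlier note
    toward its target while the newly struck note stays put."""

    def __init__(self):
        self.sounding = []   # notes currently held, oldest first

    def note_on(self, note):
        self.sounding.append({"note": note, "bend": 0.0})

    def note_off(self, note):
        self.sounding = [n for n in self.sounding if n["note"] != note]

    def pitch_bend(self, semitones):
        # Route the bend to the earliest sounding note only, so the
        # second note of a legato pair is unaffected.
        if self.sounding:
            self.sounding[0]["bend"] = semitones

    def pitches(self):
        """Current sounding pitches, in MIDI note numbers."""
        return [n["note"] + n["bend"] for n in self.sounding]
```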
Synful Orchestra also includes a unique way of localizing its sounds in the stereo field. Lindemann has built in a set of early reflection parameters that can be used to define a room, pinpoint an instrument’s location on a stage in two dimensions and specify the listener’s position within the space. These reflections provide a more believable image than simple MIDI pan (although that’s an option, too), since Lindemann’s models take into account interaural time differences, not just relative levels. The documentation correctly notes that Synful’s localization modes are not a substitute for reverberation; they should be used together with a digital reverb.
The new Section feature — which for now is just strings — is impressive in both its simplicity and its flexibility. Synful builds its sections from the ground up, out of solo instruments. The user can specify how many players are in each section, how physically spread out they are and how tight their timing and tuning are. Perhaps most importantly, every instrument in the section uses a subtly different synthesis model: “Whenever I search the database for transitions for a single note,” Lindemann explains, “I find the best one, but at the same time, I also find the 10 best ones. With a solo instrument, I discard the others, but in a section, I will use all 10 of them, so each ‘player’ is playing a slightly different transition. I can also use different vibratos on each instrument on long notes.”
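The top-10 trick can be sketched like this. The scoring function and record layout are placeholders of my own; the point is simply that a solo voice keeps only the single best database match, while a section hands each desk a different one of the runners-up.

```python
def n_best_matches(database, target, n, score):
    """Return the n best-scoring fragments (lower score = better match)."""
    return sorted(database, key=lambda rec: score(rec, target))[:n]

def voice_section(database, target, players, score):
    """Hypothetical sketch of the Section idea: each 'player' in the
    section gets a slightly different transition from among the best
    matches, so no two players sound identical. If the section has more
    players than matches, matches are reused in rotation."""
    best = n_best_matches(database, target, players, score)
    return [best[i % len(best)] for i in range(players)]
```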
The localization algorithm takes the multiple-player effect into account, as well. “The early reflections are different for each player, depending on their position,” Lindemann says. “The last player in a section will be close to a wall, so even though his direct sound gets to the listener later than the first-chair player, his first reflection arrives before the first reflection of the first chair, who is farther from the wall.”
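The geometry behind that observation is easy to check with a simple image-source model, a standard way of computing early reflections (the sketch below is my own stand-in, since the article describes Synful’s reflection engine only in outline): reflect each player across the wall and measure the mirrored path.

```python
import math

SPEED_OF_SOUND = 343.0   # meters per second

def delay_seconds(src, listener):
    """Propagation delay for the direct path from src to listener (2-D)."""
    return math.dist(src, listener) / SPEED_OF_SOUND

def first_reflection_delay(src, listener, wall_x):
    """Image-source model: mirror the player across a wall at x = wall_x
    and measure the reflected path through the mirror image."""
    image = (2 * wall_x - src[0], src[1])
    return delay_seconds(image, listener)
```

With a wall just behind the section, the last player’s direct sound arrives later than the first chair’s, but his reflection arrives earlier, exactly as Lindemann describes.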
Another clever trick is using divisi, when players within a section are asked to play different notes. “With a sampled string section, when you play a divisi, all of a sudden, you have twice as many players,” says Lindemann. “But I literally split the individual players up, so different ones are playing different notes, which is much more realistic.”
There’s just one problem with all of this. When a human oboe player has to jump from a low C to a high B, she knows it ahead of time because she can see it coming up in her part. But how do you get a computer to do that? Lindemann’s answer to this is to build in a switchable delay: When you are playing tracks back, you can delay the sound by one second, which gives the engine the ability to look ahead and make the necessary decisions. Of course, this doesn’t work when you’re laying down tracks, so you do the best you can putting in the expression and articulation you want, and hope it all works when you play it back. And in fact, it does work amazingly well, although it takes some getting used to at first and is probably the trickiest part of using the program.
Things get more complicated when you use Synful Orchestra with other sound sources whose approach to time is a little more conventional. In that case, if you’re going to use the “Delay for Expression” feature, then you have to advance all the Synful Orchestra tracks by exactly one second in your sequence. But that can be clumsy, especially if you have a lot of tempo changes in the file. For that reason, Lindemann is talking to sequencer manufacturers about building a controllable look-ahead feature into their products. “It’s the same as delay compensation on an audio track,” he says. “The sequence is just sitting there in memory, so there’s no reason why Synful can’t query the sequencer as to what’s coming next. It’s not technically hard at all. But as of now, no sequencers do it.”
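The workaround amounts to shifting every event on the Synful tracks one second earlier, so that after the plug-in delays its audio, everything lines back up with the other tracks. A minimal sketch, assuming a track is just a list of (time, message) pairs:

```python
LOOKAHEAD = 1.0   # seconds of "Delay for Expression"

def advance_track(events, lookahead=LOOKAHEAD):
    """Shift a track's events earlier by `lookahead` seconds to compensate
    for the synth's look-ahead delay. Events are (time_seconds, message)
    pairs; anything that would land before time zero is clamped there,
    which is why material in the very first second is awkward to handle."""
    return [(max(0.0, t - lookahead), msg) for t, msg in events]
```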
There’s another problem: Synful Orchestra needs a boatload of CPU power, especially when you are using Sections. My venerable 800MHz G4 Mac did okay with the older version of the program when I asked it to handle about 10 solo instruments, but when Sections came out, I couldn’t even play two tracks of violin sections without the computer falling over. I’ve since upgraded to a dual 1.6GHz configuration and it’s much better, but I still bang up against the CPU’s limit when I try to play a complete orchestral score with good-sized sections. One workaround, according to Lindemann, is to freeze finished tracks by turning them into audio, and that can certainly help, but I don’t like working that way when I’m composing. Well, hopefully the next computer I buy will have the muscle to handle a full-sized Synful symphony without breaking a sweat.
Lindemann is currently in the process of putting together an all-new database of recordings and models, and will add muted strings and brass, pizzicato strings and the more unusual members of the orchestra when he releases it as Version 3 later this year. Other Synful products coming down the pike are jazz, rock and what he calls “fictional” instrument sets. “Individual jazz players have distinctive sounds, so the jazz instruments will emphasize player styles more and be specific to real musicians,” he says. As for the fictional set, “The plan is to fan out from classical instruments — where you have a clear expectation of what that instrument should sound like — to imaginary instruments, hybrid instruments, that would exist in a fictional universe. I would use the database I have now so that these instruments would have an organic base, which would give people a reference for what they were hearing.”
Paul Lehrman also has a lot on his plate.