In last month’s “Insider Audio,” I began a discussion with renowned composer and music technologist John Chowning, which sprang from a conversation I had with him at last fall’s AES conference about the future of electronic musical instruments. When we left off in the story, our hero had invented FM synthesis, gotten Yamaha interested in the idea, published several pioneering papers in the AES Journal and founded the Center for Computer Research in Music and Acoustics — the legendary CCRMA — at Stanford University. Oh, yes, and he’d been let go from the Stanford faculty.
“I understand why they did that,” he says now. “Except for Leland Smith, I think it scared the music faculty a little bit: the idea of machines in this deeply humanistic department full of musicologists.”
But meanwhile, by the late 1970s, Yamaha began to get very serious about building digital synthesizers using FM technology. The company had put together a couple of prototypes called “MAD” and was working on what would become its first commercial FM synth, the fantastically complex (and expensive) GS-1. So Yamaha came back to Stanford looking to extend, and make exclusive, the license it had bought for the patent on the technology Chowning had invented and signed over to his then-employer. Only Chowning wasn’t there: He’d been invited to an artist-in-residence position in Berlin (arranged by famed composer György Ligeti) and was also asked by Pierre Boulez — whose concerts had introduced Chowning to electronic music while he was a graduate student in Paris — to help design the new French government musical research center, IRCAM. It was, no doubt, a bit of an embarrassing moment for the university.
Chowning hadn’t completely severed his ties with Stanford, however: In 1975, he came back to CCRMA as a research associate to work on a piece that IRCAM commissioned. And a couple of years later, he was given an offer to return to academia. But it wasn’t from Stanford: The University of California wanted to appoint him as a full professor. Stanford, finally realizing what it had lost, asked him to come back with tenure. “It was the only time they had ever let a junior professor go,” he recalls with a laugh, “and then hired him back.”
The economics would soon make the wisdom of Stanford’s decision clear. Yamaha’s first popular FM synth, the DX7, came out four years later and sold something like 180,000 units, an order of magnitude more than any synthesizer had sold before. FM technology remained at the center of the company’s electronic keyboard line, including home organs and pianos, through the TX, TG and SY Series, well into the next decade. The royalties Stanford received for Chowning’s patent totaled $22.9 million, making it the third most lucrative patent the university has ever licensed. (Number two on that list is the gene-splicing technique for building recombinant DNA, and number one is a text-searching technology dreamed up by two graduate students that is now commonly known as Google.) Even though the patent expired in 1995, FM synthesis is still available as an option on Yamaha’s current flagship synth, the Motif.
Those who were around at the time have their own ideas about why the DX7 was so popular — and all of them are right: The instrument was groundbreaking and amazingly useful in many ways. But Chowning’s thoughts are a bit different, and they cast an interesting light on what makes for a successful electronic musical instrument.
One of the primary requirements for a new instrument to be successful, he says, is that it be able to sort out the good players from the not-so-good. “Two of the most enduring electronic instruments are the Hammond B3 and the Rhodes,” he opines. “That’s because they have unusual acoustic attributes: They have instantaneous attacks, which pianos don’t. So they offer rhythmic precision that someone like Jimmy Smith can take advantage of. That has real musical consequences, and it reveals the deficiencies in lesser performers. The same thing was important about the DX7: It gave really good keyboardists expressive control that a keyboard without velocity sensitivity wouldn’t have. Velocity is one of the things that pianists spend thousands of hours learning how to control. And when you coupled the velocity sensitivity to the modulation index, it gave a dimension to the timbre, not just the loudness, that was different from earlier synths and that our ears are very sensitive to.”
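To make that coupling concrete, here is a minimal sketch of Chowning-style FM synthesis in Python, with key velocity mapped to both amplitude and modulation index so that a harder keystroke comes out both louder and brighter. The linear velocity mapping, the decay envelope and the constants here are illustrative assumptions for this sketch, not the DX7’s actual velocity curves or operator structure.

    # Minimal Chowning-style FM sketch: y(t) = A * sin(2*pi*fc*t + I * sin(2*pi*fm*t)).
    # The velocity-to-index mapping below is an illustrative assumption,
    # not Yamaha's actual DX7 scaling.
    import numpy as np

    SAMPLE_RATE = 44100

    def fm_note(carrier_hz, ratio, velocity, duration=1.0, max_index=8.0):
        """Render one FM tone; velocity (0..1) drives loudness AND timbre."""
        t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
        modulator_hz = carrier_hz * ratio   # carrier:modulator ratio sets the spectrum's character
        index = max_index * velocity        # velocity -> modulation index (brightness)
        amp = velocity                      # velocity -> amplitude (loudness)
        env = np.exp(-3.0 * t / duration)   # simple exponential decay envelope
        # Scaling the index by the envelope makes the tone darken as it fades,
        # the timbral dimension Chowning describes.
        return amp * env * np.sin(2 * np.pi * carrier_hz * t
                                  + index * env * np.sin(2 * np.pi * modulator_hz * t))

    # Same key, two velocities: same pitch, but different loudness and spectrum.
    soft = fm_note(440.0, ratio=2.0, velocity=0.3)
    hard = fm_note(440.0, ratio=2.0, velocity=0.9)

Because the modulation index controls how much energy spreads into the sidebands, tying it to velocity changes the spectrum itself with every keystroke, which is exactly the extra expressive dimension he is talking about.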
Chowning recalls that soon after the DX7 was introduced, English musician David Bristow, who was one of the primary sound designers for the company (and still is, although his current work is on ring tones), did an experiment that showed how important minute timbral changes could be to a musician. “I was working with him in Paris at the time,” Chowning says, “writing our book [FM Theory and Applications, a seminal tome published by Yamaha]. He convinced professional keyboard players that he was changing the action and the keyboard sensitivity on a DX7, and collected their reactions. Actually, all he was doing was increasing or decreasing the amount of what he called ‘stuff’ during the attack: the noise. It was an impression based entirely on acoustic feedback; he did nothing to the keyboard at all.
“The relationship between energy, force, effort and the acoustic result is a part of all musical performance,” he continues. “More effort results in greater intensity or spectral complexity. I guess the exception to that is the pipe organ, but then again, in the early days you had this little guy in the back working the bellows. The B3 is a little different because the key velocity doesn’t matter, but in that case, the precision of execution really does. So if you have a synth with both a sharp attack and velocity sensitivity, good keyboard players can get a high degree of expressive control out of it. So it reveals virtuosity, or lack of it, and separates out the really good performers from others.
“That’s also why the WX7 [Yamaha’s unique MIDI wind controller, which has been in production for some 18 years] works. It’s easy to distinguish between a good player and a bad one.
“Here’s what I would consider the ultimate test of expressivity. I proposed this to a concert pianist to get his attention. Now, I don’t play the piano, so if I tell him to hit a note and then I try to hit it the same way, it will take me a few times to get the velocity just right. If he plays two tones, it might take me 100 times to replicate it perfectly. If he plays a phrase, just four or five notes, I’m lost. I could never do it. I could never convince a listener that it’s him and not me. It’s not in my hands, it’s not in my training. So we need to look for instruments that expose that kind of technique, that have richness and can reveal virtuosity and expressivity. Those instruments will find users who will be able to highlight some or all of that expressive neural-motor connection.”
Chowning doesn’t think that breakthroughs in new instruments will look entirely different from what we’re familiar with. But he does think that musicians can be encouraged to experiment with new techniques, as long as the encouragement is given in the right way. “Controllers that make use of existing technique ought to be the top issue,” he says. “I would look for instruments that play upon instruments we already use. Piano, cello and violin are the three great virtuoso instruments — that’s what kids learn to play. If you’re looking for a population willing to be experimental, you’ll find them in those three groups. And also wind players and horn players.
“For example, you could work on finding ways to use violin technique,” he adds. “Not a ‘virtual’ violin where there’s no physical object there — although we’ve worked with that and it’s interesting — but a real object that lets players slap their fingers down and touch the strings; for example, Chris Chafe’s ‘celletto.’ A dancer could do much better with some of the controllers we have today than a musician could. Dancers have a sense of body movement that musicians aren’t trained to have.”
One instrument of the future, Chowning thinks, will be a fully programmable piano in which the soundboard and the strings — the heavy, temperature-sensitive part — disappear. “The measure would be if you could take a great pianist and blindfold him and sit him down, and he can’t distinguish it from a grand piano. And then you move him to another piano, which is identical or maybe even the same one, but you’ve changed the key reaction characteristics and the sound quality, and he thinks that he’s no longer at a Steinway; now he’s at a Baldwin.”
Another characteristic that he thinks makes for a useful new instrument is its ability to control large musical parameters, as opposed to minute ones. “The most successful controllers are those like Max Mathews’ Radio Baton or Don Buchla’s Lightning [which are both systems that track the motion of two wands in space], where a simple gesture can produce a result that has meaning at the highest musical level, such as loudness or tempo. It relieves the performer of dealing with all of the details. Think about an orchestra conductor who doesn’t know how to play the violin or the bassoon, but she animates all these well-trained machines — the players, who have spent thousands of hours learning from the masters of their instruments, going back generations. Expressivity in machines has to have this kind of top-level control.”
In addition, and this was the surprising answer that Chowning gave at the panel discussion at the AES conference, “For a controller to persist, it needs repertoire. It can be written repertoire or oral, or a tradition of jazz or folk or ethnic music. People who begin to play it have to have models of excellence or know that the music is rich because of a long tradition.” He points again to the B3 organ as an example. “The B3 could never reach the popularity of the piano because it is missing the idea that more effort equals greater volume, but it has a solid tradition in pop and jazz and gospel, and so it persists.”
Commercial manufacturers, although they have been very, very good to Chowning, are not necessarily going to be the ones to produce these instruments, in his opinion. “People like Buchla have different ultimate interests than a company like Yamaha. He senses an opportunity and builds a device that extends the performance capability in ways that performers never asked for. His nose is ahead of the pack. Yamaha, on the other hand, is looking for ways to engage the public. If in doing so they can make a more expressive instrument, that’s desirable, but they need to make money. Their grand pianos are their great tradition, and fortunately, they make money with them because if they were marginal, they’d stop making them.”
Chowning is very happy that the state of electronic music technology has reached the point that it has, just at the moment when he is able to retire from teaching and concentrate on composing. “The present is the dream for me,” he says. “It’s all software and real time and portable. I sit here with a laptop that has more power than I could ever use. With a laptop Mac or PC and a MOTU 828, it’s like I have everything I’ve ever had in all the labs we’ve ever built, in all the years at Stanford, in 10 stacks of Samson boxes [refrigerator-sized, computer-controlled synths that were state-of-the-art in the late 1970s] put together. Software synthesis is the take-off point for ultimate freedom. The only hardware devices you need are controllers; there’s no real reason anymore to build a synthesizer.
“But we still do need controllers,” he says in conclusion, “and the difficulty is how to put that extra piece, that performance knowledge, into them.” Fortunately, musical expression, according to Chowning, is not an unfathomable art, although we have much to learn. He points to studies by Johan Sundberg, an eminent scientist at the Royal Institute of Technology in Stockholm who, among his many accomplishments, showed why you can hear an operatic soloist over an entire orchestra. “Sundberg did some wonderful work on the voice: how the vocal tract changes, how the timbre changes, how to shape a phrase using little gradations in the intensity and the linkages with pitch glide. That’s an area where, if we understood more, we could make our machines more expressive. It would be extremely enriching. Because once you understand that, you can apply it to a violin or to any other instrument. After all, the voice is the instrument of instruments.”
Paul D. Lehrman teaches a course in electronic musical instrument design at Tufts University, but knows he, too, has a lot to learn.