More Than Mice

TOMORROW'S MUSICAL INSTRUMENTS AT THE NIME CONFERENCE

Last month, I wrote about a college class I teach, in which students from various disciplines get together and design new electronic musical instruments. While the course has some pretty unusual aspects, I don’t want you to get the impression that it is unique. Far from it. There are dozens of courses and development projects dealing with the same subject matter going on at universities and research centers all over the world. So many, in fact, that they have their own convention: New Interfaces for Musical Expression, or NIME, which met for the fifth time this past May in the beautiful city of Vancouver, British Columbia. And when I submitted a paper that described my course, they were kind enough to accept it, so I had no excuse but to go.

Vancouver is stunningly set between the island-filled Strait of Georgia and Canada’s Coast Mountain range. Although real estate values are among the highest in the country, thanks in large part to an influx of people and capital from Hong Kong in the years preceding the Chinese takeover, it is still a very reasonably priced place for Americans, as the film industry well knows. I was there only once before — 22 years ago — and I’ve always wanted to go back.

I was there for another conference called Digicon ’83, which billed itself as the “First International Conference on the Digital Arts.” You probably never heard of it, but for everyone there, it was a life-changing experience: dozens of digital luminaries and soon-to-be luminaries in the same place talking about music, video, graphic arts and what would eventually become known as multimedia. I met Bob Moog, Herbie Hancock, Todd Rundgren and Andy Moorer, and saw for the first time toys and tools the likes of which would soon take their place at the center of the production world, such as Lucasfilm’s EditDroid and SoundDroid (which evolved into Avid’s Media Composer and Sonic Solutions, and ultimately the tools we all use today), some of the very first CGI systems used by Hollywood, a sampling instrument called the Fairlight CMI and something called MIDI.

Among the speakers at Digicon was a prescient Canadian composer and computer scientist named Bill Buxton, who said, “It’s time to rethink the interface between electronic instruments and the user. We have to get away from the ‘overblown electronic organ’ syndrome. If we want to expand the range of musical expression, why can’t we use new gestures — blowing, sucking, squeezing, kicking, caressing — instead of emulating the past?” He spoke, for the first time that I can recall, of devices he called “gesture controllers.”

Flash-forward to 2005, and gesture controllers are what NIME is all about. And Buxton, who had left the music world to work for companies such as Alias/Wavefront (work that garnered him a Scientific and Engineering Academy Award), is back, delivering one of the keynote speeches and still preaching the same gospel.

At Digicon, Moorer opened his presentation about the SoundDroid with a slide of a Moviola and the words, “This is the enemy.” I don’t know if Buxton remembered that, but he started his presentation exactly the same way — only the picture was of a Revox reel-to-reel deck. In the days when electronic music was brand new, he explained, concert performances would too often comprise someone walking onto a stage, pushing a button on a tape deck and walking off. But those days aren’t entirely over: People still do the same thing with laptops. “If you’re sitting at a computer and typing, why should I care?” he asked. Audiences at a live performance want to see someone doing something interesting to create the music, he said. “The goal of a performance system should be to make your audience understand the correlation of gesture and sound.”

The conference, which had about 180 attendees, comprised three intense days of papers, posters, concerts, demos and jam sessions, which were all about inventing new ways to make music and enhance expressiveness and creativity. “The computer mouse is about the narrowest straw you can suck all human expression through,” said another keynote speaker, Golan Levin of Carnegie Mellon University. “Music engages you in a creative activity that tells you something about yourself and is ‘sticky’ — you want to stay with it. The best musical instrument is one that is easy to learn and takes a lifetime to master.”

It would easily take a dozen columns to describe all of the clever hardware and inspiring performances that were presented at NIME, but I’ll try to titillate you with a few. For more, go to the conference’s Website (http://hct.ece.ubc.ca/nime/2005), where you can download all of the papers and posters and, by the time you read this, some pretty cool videos.

Ever wish you could turn a smile into a tune? Someone’s on it. A group from ATR Intelligent Robotics and Communication Laboratories in Kyoto, Japan, showed a system that could create music by changing one’s facial expressions. A camera image of the face is divided into seven zones and a computer continuously tracks changes in the image, so that any change in any of the zones triggers particular MIDI notes. Other parameters are controlled by the amount of displacement of the image.
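
The paper leaves the exact mapping to the reader’s imagination, but the core loop is simple enough to sketch. Here’s a rough Python version of the idea, using NumPy for the image math and the mido library for MIDI output; the horizontal-band zone layout, the threshold and the note assignments are my own hypothetical choices, not ATR’s.

```python
import numpy as np
import mido

THRESHOLD = 12.0                            # hypothetical per-zone change threshold
ZONE_NOTES = [60, 62, 64, 65, 67, 69, 71]   # one illustrative MIDI note per zone

outport = mido.open_output()                # default MIDI output port

def split_zones(frame, n=7):
    """Split a grayscale frame (2-D NumPy array) into n horizontal bands."""
    return np.array_split(frame, n, axis=0)

def process(prev_frame, frame):
    """Compare successive frames zone by zone; a changed zone triggers its note."""
    zones = zip(split_zones(prev_frame), split_zones(frame))
    for zone_idx, (prev_z, cur_z) in enumerate(zones):
        displacement = np.mean(np.abs(cur_z.astype(float) - prev_z.astype(float)))
        if displacement > THRESHOLD:
            velocity = min(127, int(displacement))   # displacement drives velocity
            outport.send(mido.Message('note_on',
                                      note=ZONE_NOTES[zone_idx],
                                      velocity=velocity))
```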

How about playing a song using an eraser? Try the Scrubber from a team at the Media Lab Europe in Dublin (which unfortunately closed the day after the group submitted their paper). It’s built around an ordinary whiteboard eraser fitted with two tiny microphones and a force-sensing resistor. The system analyzes the sound the eraser makes against whatever surface it is being rubbed on and applies it to a wavetable or sample, playing it in granular fashion. The rubbing signal controls the sample’s speed, volume or other parameters, and the system can even detect whether the eraser is going backward or forward. The presenters showed how it can work on a sample of a garage door closing — stretching, compressing and changing directions; the applications for sound effects work were pretty clear.
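
The real Scrubber does its analysis in real time, but the mapping itself can be illustrated offline. A rough NumPy sketch, assuming `sample` and `rub` are mono float arrays at the same sample rate, with the rubbing signal’s short-term energy driving grain playback speed (the grain size and speed scaling are hypothetical):

```python
import numpy as np

GRAIN = 1024        # grain length in samples (hypothetical)

def envelope(signal, hop=GRAIN):
    """Short-term RMS energy of the rubbing signal, one value per grain."""
    frames = len(signal) // hop
    return np.array([np.sqrt(np.mean(signal[i * hop:(i + 1) * hop] ** 2))
                     for i in range(frames)])

def scrub(sample, rub):
    """Step through `sample` grain by grain, with rubbing energy as speed."""
    out, pos = [], 0.0
    for e in envelope(rub):
        speed = 0.25 + 4.0 * e      # harder rubbing -> faster playback;
                                    # a negative speed would run the grain backward,
                                    # like the eraser changing direction
        idx = (pos + np.arange(GRAIN) * speed) % len(sample)
        out.append(sample[idx.astype(int)])
        pos += GRAIN * speed
    return np.concatenate(out)
```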

Maybe you’ve been listening to those pentatonic wind chimes on your porch and thinking about how to get them to do something different for a change. A team from New York University showed a system called Swayaway, which may be the answer. Seventeen vertical plastic stalks, each with a wooden weight at the top, are attached to a base containing flex sensors that send out MIDI controller information. At the same time, ambient sound is continuously picked up by a microphone, analyzed and processed by Max/MSP software and converted to MIDI. The combined data control a synth module with a number of pre-programmed sounds that literally change with the wind.
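
The sensor half of that signal chain is straightforward to sketch (the ambient-sound analysis in Max/MSP is the more elaborate half, and I’ll leave it out). Assuming a hypothetical reader that delivers 17 raw stalk values in the range 0 to 1,023, the conversion to MIDI continuous controllers might look like this; the controller numbers are my invention:

```python
import mido

outport = mido.open_output()    # default MIDI output port

def flex_to_midi(readings):
    """Map 17 raw flex-sensor readings (0-1023) to MIDI continuous controllers."""
    for stalk, raw in enumerate(readings):
        value = raw * 127 // 1023                   # scale to the 7-bit MIDI range
        outport.send(mido.Message('control_change',
                                  control=20 + stalk,   # CCs 20-36, one per stalk
                                  value=value))
```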

For the headbanging set, a group from Aachen, Germany, exhibited Bangarama. Talk about a low-budget instrument: This comprises a piece of plywood cut vaguely into the shape of a Flying V guitar, with 13 pairs of aluminum foil squares running up the neck. The player wraps aluminum foil around one of his fingers so that moving along the neck shorts out a pair of foil pieces and closes a circuit, which then cues up a heavy-metal guitar sample. The player also wears a baseball cap onto which is attached a contraption containing a coin mounted on a pivot. When he swings his head forward, the coin comes into contact with another piece of metal, and that circuit closing triggers the sample. Even something this simple has potential problems: The designers found that the coin would sometimes bounce on the downswing, creating multiple triggers, so they put in a timer that ignores any new note arriving less than 250 ms after the previous one. They justified their modification by explaining, “We predicted that typical users would not headbang faster than 240 beats per minute.”
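
The arithmetic checks out: 240 beats per minute is one beat every 60/240 = 0.25 seconds, hence the 250 ms lockout. It’s a classic debounce, easy to sketch in Python:

```python
import time

LOCKOUT = 0.25      # seconds; 240 bpm = 60/240 s between headbangs

class DebouncedTrigger:
    """Swallow triggers that arrive within LOCKOUT of the previous one."""
    def __init__(self):
        self.last = -LOCKOUT

    def fire(self):
        now = time.monotonic()
        if now - self.last < LOCKOUT:
            return False        # the coin bounced: ignore the extra trigger
        self.last = now
        return True             # a genuine headbang: play the sample
```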

A team from Stanford University’s CCRMA showed Beat Boxing — a pair of lightweight boxing gloves equipped with force sensors and accelerometers — hooked up to play percussion samples and loops with each punch and jab.

Roger Dannenberg of Carnegie Mellon University, who has been an important force in music software development for many years, showed a rather different side of himself with McBlare, a MIDI-controlled Highland bagpipe that can be played many times faster than it ever could be in the hands of an actual Scotsman — and just as loud. It had its world premiere performance, he reported, at the recent graduation exercises of the university’s School of Computer Science.

Perry Cook of Princeton University showed his devices for generating speech sounds based on wind-driven instruments, but not in the way you might think. Lisa is a modified accordion, with the keys controlling pitch, the buttons specifying phonemes and various flex and pressure sensors doing real-time modifications. Maggie (wherever did he get those names?) is based on a concertina — much simpler, but still very versatile — with built-in speakers to improve the illusion that it’s (she’s?) actually singing. Cook also had a contraption that looked like it had crawled out of the wreckage from an accident involving a small MIDI keyboard and a Hohner Melodica — which you might remember was a popular music toy some years ago — played by blowing in one end while fingering a keyboard sticking out in front of one’s face. Cook calls his instrument the Voice-Oriented Melodica Interface Device, or VOMID.

Some presenters talked about participatory works, where the audience is the performance. In Levin’s “Dialtones: A Telesymphony,” which was first performed in 2001 in Austria, about 200 audience members with cell phones are given specific ringtones and seat placements in the hall before the concert. A computer then plays the piece by dialing the phones’ numbers in a pre-programmed order and switching on spotlights in the ceiling that point to where each phone is ringing. “There’s an accuracy of plus or minus about a half second,” Levin admitted.

There was also plenty of serious work and heavy-duty musicianship on display. Dan Overholt of the University of California, Santa Barbara, talked about and performed with his Overtone Violin, a six-string solid-body violin with optical pickups, along with several knobs, a keypad, two sliders and a joystick. There’s also a miniature video camera mounted at the top of the neck and ultrasonic sensors and accelerometers in the instrument and in a glove worn on the bow hand. It’s all hooked up to a wireless USB system, so there are no cables to trip over.

French-born composer Laetitia Sonami did a piece with a performance system called the Lady’s Glove, which she helped develop at Amsterdam research center STEIM and has worked with for 15 years. It contains a formidable variety of touch, motion and electrical sensors, allowing for control of up to 30 simultaneous musical parameters. Her performance was a wonderfully expressive mix of concrete and abstract sounds, and was a great example of what can result when a performer has enough time to become a virtuoso on a new instrument.

Another virtuoso was Japanese composer and researcher Yoichi Nagashima, whose Wriggle Screamer II has two sensing systems. One looks like an empty picture frame divided up into a 13×3 grid by optical beams. As he put his hands and wrists through the frame, he produced sounds of harps, bells, strings, percussion and orchestral hits, depending on whether he was slapping, poking, caressing or karate-chopping the space. The second system uses muscle (EMG) sensors in his arms and legs to provide more complex control over the sounds, which he exercised by wriggling his fingers, rotating his wrists and elbows, and kicking his feet.

A number of the presentations and performances involved dancers and how their highly disciplined body movements can be used to make music. Luke DuBois of New York University (and also the author of Cycling ’74’s Jitter software) did a fascinating collaboration with Turkish dancer Beliz Demircioglu, a graduate student at NYU. While DuBois held a video camera, Demircioglu moved athletically around the stage as a stylized image of her was projected on a video screen. At the same time, motion-capture software analyzed the dancer’s position in space and looked for certain gestures that it was trained to recognize. It then translated that information into various musical parameters. It was an evocative performance and was true to Buxton’s vision, as the connection between the physical gesture and the music was very clear.

Bringing down the house at one of the concerts was a performance by Giorgio Magnanensi, an Italian-born composer who now lives in Vancouver. Seated at the front-of-house console in the middle of the audience, Magnanensi frantically manipulated a dozen modified electronic toys of the old Speak&Spell variety, creating a ghastly, hysterical landscape of screaming, ripping “circuit-bent” sounds, made all the more hideous by the sounds’ roots in (badly sampled) human voices, thrown around the hall by a 16-channel digital matrix mixer. At one point, a Furby with a small speaker in its stomach stopped working; the composer stripped a wire-filled twist tie with his teeth and shoved it into the critter’s back, which brought it back to life. He told me later a solder joint had failed and he knew exactly where it was.

There was much more. Two presentations involved a Lemur, but they actually had nothing to do with each other. One Lemur was a new combination touch surface and LCD made by French company JazzMutant. (The product should be available in the U.S. from Cycling ’74 by the time you read this.) Users can set up buttons, faders, knobs and moving colored balls on the 12-inch surface, which responds to multiple fingers simultaneously. It’s very impressive and looks like it could be a great help in any digital studio, but it’s not cheap — the cost is about the same as two Mackie control surfaces.

The other is an installation project comprising an army of MIDI-controlled mechanical instruments that are equipped with small motors, cams and various striking and spinning devices that vibrate, bang, scrape, pluck, slide and rattle. The creators of this menagerie call themselves the League of Electronic Musical Urban Robots (LEMUR).

Several speakers bemoaned how difficult it is for new electronic instruments to survive in the marketplace, despite the amount of ingenuity, inspiration and just plain hard work going on in the “alternative controllers” world that was evident at NIME. But one presenter painted a potentially brighter picture for designers of these devices, thanks to the growth of other forms of electronic entertainment.

Tina “Bean” Blaine — percussionist, vocalist, inventor, “musical interactivist” at Interval Research and now professor at Carnegie Mellon — gave a talk on “The Convergence of Alternate Controllers and Musical Interfaces in Interactive Entertainment.” A lot of the future of music is in games, she said — not just composing for them, but designing gadgets to use with them. From Dance Dance Revolution to Donkey Konga and Groove, many successful music-oriented games depend on unique hardware interfaces. And the game companies don’t develop them; they license them. So who better to create those interfaces than the tinkerers and solder jockeys at NIME? “Everything you’re doing,” Blaine told the attendees, “the game industry can use.”

So once again, a trip to Vancouver has left me in awe, both of the beauty of the place and of the concepts I encountered. I left the conference with all sorts of new ideas for my own course and my own music ringing in my head.

Perhaps what comes out of NIME will not be as earthshaking as what I saw at Digicon ’83, but it may eventually be just as influential, in more subtle ways, as the science of computer/human interface design becomes more important in all of our activities. Better still, NIME is now established and will happen every year, so more and more people will be exposed to this exciting field. NIME ’06 is going to be at IRCAM in Paris. I wonder whether I can come up with another idea for a paper.

Paul D. Lehrman started piano lessons at age five, got his first soldering-iron burn at nine, sang in his first opera at 11 and bought his first electric guitar at 12. He’s still collecting, singing and occasionally getting burned.
