The Secrets of NIME

A PEEK AT TOMORROW'S MUSICAL INTERFACES

There’s an old adage in chemistry: Once you find the “perfect solvent” — a substance that can dissolve anything — what do you keep it in? In audio (as every AES show makes abundantly clear), we have a parallel problem: Once you find the perfect audio processor — a device that can do absolutely anything — how do you control it?

In fact, we’ve had the potentially “perfect” audio processor ever since the first digital audio products were introduced 30 years ago. Since then, the chips keep getting better and more capable, and the algorithms to make them do cool things have just been getting cooler. But developing the interfaces to make the best possible use of these tools has lagged behind. The mouse-and-windows paradigm, we all agree, is woefully inadequate for doing complex, real-time control of sound creation and processing, and yet it’s still the most common way we work with audio in the digital domain. Dedicated control surfaces have helped to make chores like mixing more ergonomic, but these aren’t so much revolutionary as they are retro, harkening back to an era when the hardware forced us to think in terms of discrete tracks and channels. Comfortable, but not exactly cutting-edge. I mean, how revolutionary can a piece of gear called Big Knob be?

Some visionaries have proposed control interfaces that let us work in virtual space, or on an infinitely reconfigurable touch surface, or with multidimensional doohickeys that give us expandable degrees of freedom. But where are the people and companies that can make these a reality?

Actually, a whole lot of them were in New York a few months ago. When it comes to looking at the future of real-time control over sound in the digital domain, there’s no better place than the experimental musical instrument community, which meets annually at the New Interfaces for Musical Expression (NIME) conference. I wrote about NIME two years ago (September 2005 issue) when it met in Vancouver, but so much has happened in the world of musical interfaces, and so many more people and institutions are getting involved, that it’s worth another look.

This year’s conference was at New York University, right in the heart of Greenwich Village, and was a joint effort between the university’s music technology program, the nonprofit educational collaborative Harvestworks Digital Media Arts Center and The League of Electronic Musical Urban Robots (LEMUR), a Brooklyn-based group that does amazing things with, as you might expect, robots that play music. (Disclosure: Since meeting LEMUR director Eric Singer at NIME 2005, I have worked on a large installation with the group, and we are currently in the planning stages for another installation later this year.)

Three very full days of meetings were held in the auditorium at the university’s Stern School of Business, with concerts in the evening across the street at the Frederick Loewe Theater. After hours, there were installations at a gallery in Chelsea and club performances in Brooklyn, some of which went all night. A fourth day of installations, demos and performances was at the Computer Music Center at Columbia University, way uptown on W. 125th Street (home of the RCA Mark II, the first programmable electronic synthesizer, which is in the process of being restored).

NIME is a relatively small conference — there were some 250 participants, although many more came to the performance events — and unlike AES, where there are many simultaneous events from which to choose, there was essentially only one “track” at NIME. But there was still too much going on for one person to take in.

Almost all of the presenters were from academia, and one thing most academic institutions have in common is that they don’t have a lot of money, so most of the presentations involved inexpensive technology. But there’s a lot more interesting stuff in that category than there used to be: the prices of things like accelerometers (which detect not only motion, but also tilt, since they respond to the earth’s gravity), video cameras, wireless transceivers, haptic devices that provide touch- and force-feedback to the user, and even eyeball-tracking systems have all come down dramatically in just the past few years.

A team from McGill University in Montreal showed the T-Stick, an instrument based on capacitive touch and charge-transfer proximity sensors, along with accelerometers and a piezo contact mic, all built into a 4-foot plastic pipe. The user can play this instrument by fingering, rubbing, twisting, jarring, shaking and swinging it. T-Stick can distinguish between the touch of a single finger and a whole hand, and every different gesture can be used for a different sound or musical parameter. All fun, all new, but it brings up a common problem that the group is continually addressing: When someone puts in the time and effort to learn how to play a new instrument, the skills he or she picks up aren’t necessarily going to be of any use with the next new instrument. So the McGill team plans to build a “family” of T-Sticks of different sizes to be played, like the instruments in a conventional string orchestra, using similar techniques but in different positions relative to the body.

Another multimode instrument is the PHYSMISM from Aalborg University in Copenhagen, Denmark. The device, which looks like it would have made a great remote control for R2-D2, is designed to take advantage of physical modeling, a terrific synthesis technology that has never caught on the way many people (including me) think it should have, in part because no one has been able to come up with a physical interface that can take advantage of everything it has to offer. By combining a number of sensor technologies in one package, the PHYSMISM goes a long way toward that goal. Along with four knobs for choosing and setting system parameters, the device has a breath sensor that uses a dynamo (a small magnet-based electrical generator, typically used to power the lights on a bicycle) attached to a fan blade at the end of a tube; a rubbing sensor that combines two slide potentiometers; and a pressure sensor, two force-sensitive drum pads and a crank, also attached to a dynamo. Each of the input devices corresponds to a particular aspect of the physical model being played, and they do so in a clever and surprisingly intuitive fashion, so that with just a little practice, virtually anyone can come up with a useful range of sounds.
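For readers who have never played with physical modeling, here is a minimal sketch of the idea in Python, using the classic Karplus-Strong plucked-string algorithm rather than anything taken from the PHYSMISM itself; the "blow" and "damp" arguments are hypothetical stand-ins for the kind of breath and pressure inputs the device provides.

```python
# A minimal Karplus-Strong plucked-string model (not the PHYSMISM's own
# algorithm), showing how two hypothetical control inputs might land on
# physical parameters: "blow" sets excitation energy, "damp" sets ring time.
import numpy as np

def pluck(freq_hz, seconds, blow=1.0, damp=0.996, sr=44100):
    period = int(sr / freq_hz)               # delay-line length sets the pitch
    # Excitation: a noise burst whose amplitude tracks the breath/crank input
    delay = blow * (2 * np.random.rand(period) - 1)
    out = np.zeros(int(seconds * sr))
    for i in range(len(out)):
        idx = i % period
        out[i] = delay[idx]
        # Averaging filter in the feedback loop; 'damp' controls the decay
        delay[idx] = damp * 0.5 * (delay[idx] + delay[(i + 1) % period])
    return out

tone = pluck(220.0, 2.0, blow=0.8, damp=0.995)    # A3, gently excited
```

The point is simply that each control input maps onto a physically meaningful parameter, excitation energy and decay time in this toy example, which is exactly what makes an interface like the PHYSMISM feel intuitive.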

Graphics tablets are occasionally found in sound designers’ tool boxes, but they are rarely used for real-time musical applications. Researchers at the Faculté Polytechnique of Mons, Belgium, have a solution for this in the HandSketch Bi-Manual Controller: Add fingertip-sized touch sensors for the nondominant hand and mount the whole thing vertically so you can play it like a washboard or perhaps an accordion. The graphics tablet is mapped in an arc so that it follows the natural motion of the forearm as it pivots at the elbow. Large movements change the pitch, small movements control vibrato, and lifting the pen stops the sound. So what does it do? It sings. The developers have perfected an excellent human voice model for which the instrument is eminently well suited. The touch sensors under the fingers alter the pitch in discrete steps, like frets on a guitar, while the thumb is used to control timbre, scale and various aspects of articulation. I got a chance to play the thing in one of the demo sessions and found it really easy to make lovely sounds.
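To make that arc mapping concrete, here is a hypothetical Python fragment (not code from the HandSketch project): the pen position is converted to an angle around an assumed elbow pivot, the angle is scaled to a pitch range, and lifting the pen silences the output. All names and numbers are illustrative.

```python
# Hypothetical sketch of the arc mapping described above; the pivot location,
# ranges and function names are placeholders, not the HandSketch code.
import math

PIVOT = (0.0, -0.10)          # assumed elbow position, in tablet coordinate units
ANGLE_RANGE = (0.3, 2.8)      # radians of forearm sweep mapped to pitch
PITCH_RANGE = (48, 84)        # MIDI note numbers, C3 to C6

def pen_to_pitch(x, y, pen_down):
    if not pen_down:
        return None           # lifting the pen stops the sound
    angle = math.atan2(y - PIVOT[1], x - PIVOT[0])
    t = (angle - ANGLE_RANGE[0]) / (ANGLE_RANGE[1] - ANGLE_RANGE[0])
    t = min(max(t, 0.0), 1.0)
    return PITCH_RANGE[0] + t * (PITCH_RANGE[1] - PITCH_RANGE[0])

print(pen_to_pitch(0.05, 0.12, pen_down=True))
```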

Electronic percussion instruments since the era of Simmons drums have been pretty limited affairs: You hit something and it makes a noise, which, at best, varies depending on how hard you hit it. Some devices add position sensing, but none of them can come close to providing the expressiveness of even a single real drum played by a real drummer. But what if you could devise a controller that would know not only how hard and where you hit it, but what you hit it with, what angle the stick was at and when you damped it? Building a system out of standard electronic sensors to handle all that would be pretty hard, so MIT Media Lab researcher Roberto Aimi decided that he’d use a simple source of data that’s already information-rich: an audio signal. His paper described using small, inexpensive, damped percussion instruments to excite convolution filters of much more complex percussion instruments.

For example, a cheap cymbal equipped with a piece of piezoelectric plastic, a layer of foam to partially dampen the sound and a force-sensing resistor to sense when it’s being choked can sound like a 5-foot gong when its signal is run through a convolution filter derived from a real gong. The filter will respond differently, depending on whether the cymbal is hit with a hard mallet, soft mallet, wire brush or a hand, and whether it’s struck on the edge, the midpoint or on the bell. Any input device can trigger any convolution filter, and all of the parameters in the filter — pitch, envelope, EQ, reverb, etc. — are adjustable, so the range of possible sounds is enormous and very realistic. Aimi demonstrated a variety of drums and pads, and even showed how he equipped a pair of brushes with sensors and wireless transmitters so that they could trigger sounds all by themselves. The models Aimi played sounded fabulous; some manufacturer would be doing itself a big favor if it were to pick up this technology.
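The underlying signal processing is ordinary convolution. A rough offline sketch in Python, with placeholder file names, might look like the following; Aimi's system does the same job in real time, with the filter's parameters exposed for adjustment.

```python
# Rough offline sketch of the idea: convolve a contact-mic excitation with an
# impulse response recorded from a bigger instrument. File names are
# placeholders, and mono files are assumed.
import numpy as np
import soundfile as sf                    # assumes the 'soundfile' package
from scipy.signal import fftconvolve

excitation, sr = sf.read("damped_cymbal_hit.wav")    # piezo pickup signal
gong_ir, _ = sf.read("gong_impulse_response.wav")    # IR from the real gong

wet = fftconvolve(excitation, gong_ir)               # excite the "gong" filter
wet /= np.max(np.abs(wet)) + 1e-12                   # normalize to avoid clipping
sf.write("virtual_gong.wav", wet, sr)
```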

Another MIT Media Lab project was Zstretch, which uses fabric to generate musical parameters. By stretching a rectangular chunk of cloth in different directions, you can, for example, control the balance between two loops, their speed and their pitch. Each side of the rectangle has a resistive-strain sensor sewn into it. After some experimentation, the developers found that a two-point stitch done with a sewing machine gives the greatest dynamic range. Cloth is a natural sort of interface because we’re so used to manipulating it, and Zstretch can be grasped, scrunched, twisted, tugged, yanked, etc., and can give interesting musical results in several dimensions at once while providing touch-based feedback that feels familiar. The biggest problem was that over time, the fabric would stretch out to the point where the sensors provided much less data. And I imagine putting it in the laundry wouldn’t be a particularly smart thing to do.
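As an illustration only (this is not the Zstretch code), a mapping from four normalized strain readings, one per edge of the cloth, to loop balance, speed and pitch might look like this:

```python
# Illustrative mapping only: each argument is a strain reading normalized to
# 0.0 (slack) .. 1.0 (taut), one sensor per edge of the rectangle.
def map_cloth(top, bottom, left, right):
    balance = (right - left + 1.0) / 2.0        # crossfade between the two loops
    speed = 0.5 + 1.5 * (top + bottom) / 2.0    # 0.5x .. 2.0x playback speed
    pitch = 12.0 * (top - bottom)               # +/- one octave, in semitones
    return balance, speed, pitch

print(map_cloth(top=0.6, bottom=0.2, left=0.1, right=0.7))
```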

At the 2005 NIME, Tina “Bean” Blaine gave a talk encouraging musical interface designers to look at the videogame industry as possible customers for their ideas. But on the last day of this year’s conference, in front of a small gathering at Columbia, two Berklee College of Music undergrads, Matt Nolan and Andrew Beck, demonstrated the opposite: how to use Nintendo’s inexpensive but sophisticated Wii Remote (Wiimote) and Nunchuk controller to make music.

The Wii system’s remotes, as anyone who hasn’t been living in a cave for the past year or so knows, use nonproprietary Bluetooth technology and can interface with any computer that supports the standard. Used together, the two devices provide 14 buttons, a joystick, two three-axis accelerometers, an infrared sensor, four LEDs, a Rumble motor to provide vibration feedback and even a speaker — all for about $60. Nolan and Beck’s contributions are conceptually straightforward, but absolutely essential: They have written drivers that interface the controllers with Cycling ’74’s Max/MSP audio processing and Jitter image-processing software, as well as the music programming language Csound. As Nolan puts it, “One person might use it to control their set in Ableton Live and another might use it to mix a video projection from a number of live camera feeds. The only limitations for using the Wiimote as a controller are your creativity, programming experience and the number of hours in a day. The experience from a relatively inexpensive device is in real time, and most of all, it is fun.”
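Their drivers talk to Max/MSP, Jitter and Csound directly, but the general glue-layer idea is easy to sketch. Assuming you already have some way of pulling accelerometer frames off the Wiimote over Bluetooth, a few lines of Python using the python-osc package could forward them to a [udpreceive] object in Max; everything below except Max's standard udpreceive object is my own placeholder, not Nolan and Beck's code.

```python
# Sketch of a glue layer only: forward accelerometer frames to Max/MSP as OSC
# messages over UDP, picked up by a [udpreceive 9000] object on the Max side.
# Reading the Wiimote itself is left as a placeholder function.
import time
from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9000)        # Max/MSP listening locally

def read_accelerometer():
    """Placeholder: return (x, y, z) in g from whatever Bluetooth library you use."""
    return (0.02, -0.98, 0.10)                     # dummy frame, roughly at rest

while True:
    x, y, z = read_accelerometer()
    client.send_message("/wiimote/accel", [x, y, z])
    time.sleep(0.01)                               # roughly 100 frames per second
```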

Not everything at NIME was on the cheap, however. In the “We could do this if we had a contract with the Office of Naval Research, too” category, Anthony Hornof of the University of Oregon performed a simple but highly engaging piece called “EyeMusic v1.0” at the first evening’s concert. Hornof stood on the stage with a commercial eye-tracking system called EyeGaze (about $15k), which was hooked up to a computer running Jitter and Max/MSP. Projected on a screen was the image of his moving eyeball, along with various geometric objects. As he “looked” at the different objects, musical notes and patterns started, stopped and were modified. Every time he blinked, there was a loud crash, and the visuals and sounds changed. The process was wide open for everyone to see, but the music still managed to be surprising and, yes, fun.

Similar systems that cost far less than the EyeGaze are out there, and we’ll no doubt see them used by musicians and video artists before long — in fact, a team from the University of Wollongong, Australia, showed a homebrew eye-tracking system made from a stock miniature FireWire camera and an infrared LED and mirror. From there, it’s only a short step to the 3-D virtual mixing console that would make us all drool.

Next year’s NIME will be hosted by the InfoMus Lab at the University of Genova, Italy. I probably won’t be able to attend, but maybe you can, or maybe you’re willing to wait until the year after, when it’s likely to come back to North America. But if you’ve got a hankering to see what the next generation of sound-creation and -manipulation tools will look and sound like, I urge you to check out this remarkable gathering.

Paul Lehrman is coordinator of music technology at Tufts University. Go forward into the past with him in The Insider Audio Bathroom Reader (Thomson Course Publishing), available at www.mixbooks.com.
