
Double Major


Illustration: Ben Fishman

Ten years ago, it might have been the exception, but today, it’s the rule: Schools that have courses in professional audio production are now also offering courses in visual media. Institutions that once boasted how many SSL consoles and Pro Tools stations they had are now touting their expertise in graphics, animation, advertising art, film and video editing, and Web programming.

This phenomenon is particularly true of commercial schools, but many more traditional institutions are moving along the same lines. It’s in the nature of a large university that new courses and departments can’t be created overnight, but faculty can encourage students who are majoring in one discipline to look into courses in other departments. In fact, many schools are creating cross-disciplinary certificates or minor programs that require students to work in multiple artistic fields.

In my own school, there is such a program, now almost 10 years old. To fulfill its requirements, students can choose among courses in art, art history, drama, dance, electrical and computer engineering, computer science, mechanical engineering, film, journalism and music. Students will do a “capstone” project, which can be a film, a magazine, an interactive Website, a sculpture, an installation, a CD or any number of other forms as long as it incorporates elements from diverse media. There are a couple of required courses, but students, generally speaking, can take whatever courses they like within the program’s guidelines and have the opportunity for independent-study projects with appropriate faculty.

This is a really good thing. It’s safe to say that the majority of audio professionals will be involved with visual media at some point in their careers, whether they’re editing dialog, music or effects for motion pictures; doing sound design for the Web; creating soundtracks for games; or working in some new forms that have yet to be invented. Therefore, it’s extremely helpful to anyone entering the audio field to understand how visuals are created and manipulated so that they can talk to their colleagues and clients. Just as recording engineers, mixers and editors need to know the language of music to communicate with the folks in front of the microphones, they also need to know what’s involved in generating graphics and animations, editing video and building the intricate structures of game design if they are going to be able to communicate with the people they collaborate with in those fields.

George Massenburg — who, with his plethora of production, design, consulting and education credits, can safely be called one of the most versatile and successful people in pro audio — is a great example of someone who has been stretching his horizons in the visual direction. These days, at every music session he records, he also arranges to shoot HD video. “I’ve given handheld cameras to young writer/directors who know what to do, and even more importantly what not to do, around musicians,” he reports. What ultimately happens to these videos he’s not really sure, but in the short run, they are an important part of the act’s electronic press kit.

Certainly, the advent of cheap hardware and software for digitizing, editing and authoring video has raised many young people’s consciousness about how they can create video. In turn, students are demanding that their schools give them a chance to learn these skills.

Can someone be equally talented and skillful in audio and video, and should schools be selling their curricula based on that assumption? To me, some programs that marry audio and video education, with the idea that their graduates will be able to go into either field with equal ease, might be guilty of raising false hopes. That’s because the skill sets, and indeed the brain functions, needed to work with visuals are quite different from those that are used in working with audio.

Massenburg is doing a lot of his own video production, but that seems to be driven more by the inadequacy of others in the field than by a strong desire to be a video editor. “I produce a [recording] session only when I can’t find anyone who appreciates what the artist is trying to do,” he says, “and it’s the same thing with video, except that entire industry is far less competent around music, which means I have to do the editing myself.” At the two schools where he is on the faculty, McGill University in Montréal and Berklee College of Music in Boston, “We’re just starting to teach video technology and production methodology, and I think we’re better off addressing both audio and video in a coordinated manner. But then again, we’re just talking about music,” as opposed to narrative, persuasive or documentary filmmaking.

For a musician or audio student to learn enough about video to edit music videos is not that hard. I know; I’ve recently done it myself. I don’t generally consider myself a visual person, although I’ve done my share of advertising design, book layout and Websites, but last year, I taught myself how to edit video (to be totally honest, I started with a couple of lessons from one of my former students) using first iMovie and then Final Cut Pro. As someone who’s been using Pro Tools since Version 1, I found the user interfaces of these tools generally pretty easy to understand, if occasionally frustrating.

I had a couple of videos of concerts I had produced with multiple cameras, along with a bunch of interviews that were destined for the “extras” on a DVD release with a tight schedule. Because I knew the music, editing the concert footage was fairly straightforward: Follow the lines of the various instruments and find a camera that was shooting whatever was making the most important sound at each moment, cutting more or less in tempo with the music. When there was no usable close-up shot, go for a wide angle. The process of editing the interviews was equally pragmatic: I had the texts all printed out, so I simply cut according to which parts of the text I wanted to use. As they were all one-camera shoots, I just used dissolves over the cuts. Not a lot of aesthetic choices here.

These videos came out fine, they augment the package nicely and people have responded well to them. But being able to cut a couple of concert videos doesn’t make me Martin Scorsese or Albert Maysles, or even Albert Brooks. I’m just using my musical skills and knowledge and adapting them to the images. The language I was relying on was the language of sound, not of vision.

I now have a greater appreciation for what filmmakers do, and that has helped quite a bit with projects I’ve done since. I can communicate more effectively and efficiently with directors and editors. But I also know very well that I could never do what they do.

Daniel Levitin, the McGill University music and psychology professor whose work with the Boston Symphony Orchestra I talked about in my July 2006 column, agrees with me that working with visuals and working with music are two very different animals. In his fascinating new book, This Is Your Brain on Music, he goes into detail about how incredibly complex the connections are between our ears and our brains, and about how many different processes we use, unconsciously and in parallel, to interpret what we hear. “Music composition and editing are fundamentally different from video ‘composition’ and editing,” he explains. “They certainly invoke different brain processes. Anything involving audio will entail the temporal lobes, anything involving video will entail the occipital lobe and parietal regions, and there is almost no overlap of function there.” Visual perception is equally complex but, as Levitin says, uses completely different mechanisms.

There is a “widespread anecdotal observation,” he goes on, “that there are ‘film’ people and ‘music’ people, and very few who can do both. I’m not a very visual person, and I like to think I’m pretty good with sound. I know people who are the opposite.”

So why are so many audio institutions expanding their curricula? For some, the reason is financial: Digital video production is hot, the way digital audio was a decade ago, and to attract today’s students, you gotta give them what they want. That’s legitimate, but let me propose the radical notion that what students, and the industry, really need is the exact opposite: Schools that teach video need to expend a lot more effort in making students understand the role of good sound and how to get it.

Any audio person who’s worked with video has encountered the attitude that sound is no more than “the noise coming out of the back of the box.” Budgets and schedules too often are skewed toward visuals, with rarely enough money or time to do a good job with sound. Despite the growing understanding among filmmakers that audiences will tolerate bad picture far more easily than they will tolerate bad sound, many producers and executive producers don’t get it. It’s not something that audiences understand naturally. When most people watch movies, it’s the actors, the glitzy visuals and the special effects that get their attention, and even though the sound has a huge effect on how they respond to the film, they’re rarely aware of it.

So this has to be taught to any student interested in producing multimedia. Otherwise, they will find that their films turn off audiences because the levels of their soundtracks jump all over the place, the audio is noisy or recorded off-mic, or the music drowns out the dialog.

So, to you schools that are expanding in the area of visuals, I say go right ahead. But keep in mind what’s important and what the students won’t be getting by themselves. And remember that after they leave you, they will have to learn how to deal with new types of visual media. But whether it’s analog or digital, two channels or 11, sound will always have to sound good.

Paul D. Lehrman’s new book of his collected works for Mix, plus a couple dozen pages of jokes, is The Insider Audio Bathroom Reader, published by Thomson Course Technology PTR.