Game Characters Speak Up!


Sony Computer Entertainment America’s David Murrant (L) and Greg deBeer

photo: Erik Buensuceso

It wasn’t so long ago that speaking roles in videogames amounted to a few repetitive words, a grunt or maybe an “oomph” for effect. But as games have become more sophisticated, the landscape has widened to include more story-driven titles in which conversations meld with action; it’s no longer just point, shoot and move on to the next victim.

Gamers expect more than that now, and the market’s increasing budgets and advanced technology have allowed developers to deliver. For dialog, that means higher standards in terms of voice-over talent, as well as recording equipment, facilities and technique. “Dialog now is telling the story — the backstory — and has become the action of the game itself,” says David Murrant, sound design manager for Sony Computer Entertainment America. “You’ve got actors [who require large salaries] involved, and not just to do voice-over, but for doing motion-capture and voice at the same time.”

At the turn of the new millennium, many starring-role actors turned up their noses at the idea of reciting dialog for a videogame. But the tides turned when videogame profits began eclipsing box-office profits. “Game developers used a lot more sound-alikes in film- or TV series-based shows; you wouldn’t get the real actors,” says Tor Kingdon, who, with Dave Atherton, recorded CSI: Dimensions of Murder at Margarita Mix Hollywood. “That doesn’t happen as much now. The gamers expect to hear the same people they hear on CSI every week. They understand the difference between someone who imitates that voice and someone who is that voice.”

As consumers demanded higher quality, naturally, so did developers. “With every new generation of hardware and every new development cycle, dialog is playing a bigger role,” says Greg deBeer, dialog coordinator for Sony Computer Entertainment America. “The size of the scripts is getting bigger, and the level of professionalism is way up. It used to be that we had to have ‘Joe from accounting’ do half the voices in our game. But now, everybody has raised the bar and expects quality acting and implementation.” No more recording in the back office on a laptop, that’s for sure.

The Sony team records most of its material at Los Angeles-area studios, due to the region’s high concentration of actors and voice-over talent, or at its Foster City and San Diego, Calif., studios. But deBeer emphasizes that they’ll travel to the other end of the world if necessary. “When we recorded [Rise to Honor star] Jet Li, he was on set in Hong Kong, so we went to him,” he says.

For SOCOM 3: U.S. Navy SEALs, the casting agent and director went to great lengths to find individuals who spoke fluent Russian, including dialects from the country’s most remote areas. “We’d talk to people on the streets, people working in cafés, restaurants. You really get to know the community,” Murrant says.

Most of the time, however, a casting director finds the appropriate voices for the job. “The God of War cast was primarily warriors, gods and demigods,” says deBeer, “so all of those voices needed to have a lot of weight behind them. The main character was a mortal, but the mortal had to sound strong enough to beat the gods, so he needed to have those qualities in his voice to make the story convincing.”

Next comes the often tedious recording process, which can last anywhere from three weeks to three months, and can result in tens of thousands of lines of dialog. Both Murrant and Kingdon work in Pro Tools, using minimal processing on the front end, save for a high-quality preamp and microphone. “We want to get the data raw so we can do what we need to do with it later,” says Murrant.

It’s crucial to keep detailed records during this phase. “When [an actor] comes back three months later, they need to sound exactly as they did the first time,” says Kingdon. “If the mic is two inches off to the left or right, it’s going to sound very different. So I’ll take detailed notes of where I put the mic, what settings I used on the mic pre and in Pro Tools so that I can reproduce that sound later.”
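The article doesn’t describe how these notes are stored, but the kind of session log Kingdon describes — mic placement, preamp and DAW settings captured so a setup can be recreated months later — can be sketched as a simple structured record. All field names and example values here are illustrative, not Sony’s or Margarita Mix’s actual practice:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SessionNotes:
    """One actor's recording setup, logged so the same sound can be
    reproduced in a later session. Fields are illustrative examples."""
    actor: str
    microphone: str
    mic_distance_in: float   # capsule-to-mouth distance, inches
    mic_offset_in: float     # left/right offset from center, inches
    preamp: str
    preamp_gain_db: float
    sample_rate_hz: int
    bit_depth: int

# Example entry for one session (values are made up for illustration).
notes = SessionNotes(
    actor="A. Example",
    microphone="large-diaphragm condenser",
    mic_distance_in=8.0,
    mic_offset_in=0.0,
    preamp="outboard mic pre",
    preamp_gain_db=42.0,
    sample_rate_hz=48000,
    bit_depth=16,
)

# Serialize so the notes travel with the session files.
print(json.dumps(asdict(notes), indent=2))
```

Keeping the log as structured data rather than free-form notes means it can be diffed against a later session to spot anything that drifted.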

Excellent organizational skills become even more crucial as audio files get passed on to the editors and for localization. “The best tool we have is a solid and well-thought-out naming convention,” says deBeer. “Every line in our script gets a file name and is put into a folder structure, usually organized by character name, and also put into spreadsheet format.” Adds Murrant, “If we’re localizing it to five or six other territories and we don’t have all of those files named properly, who knows what we’ll get back?”
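deBeer doesn’t spell out Sony’s exact convention, but a hypothetical scheme along these lines shows how a consistent file name can encode game, character, language and line number, so files can be validated and routed into a character-organized folder structure and survive a round trip through localization:

```python
import re
from pathlib import PurePosixPath

# Hypothetical naming convention: <game>_<character>_<language>_<line####>.wav
# (the article does not specify the actual scheme used at Sony)
NAME_RE = re.compile(
    r"^(?P<game>[a-z0-9]+)_(?P<char>[a-z]+)_(?P<lang>[a-z]{2})_(?P<line>\d{4})\.wav$"
)

def parse_name(filename: str) -> dict:
    """Validate a dialog file name and return its component fields."""
    m = NAME_RE.match(filename)
    if not m:
        raise ValueError(f"non-conforming file name: {filename}")
    return m.groupdict()

def target_path(filename: str) -> PurePosixPath:
    """Route a validated file into a folder structure organized by
    character name, then language territory."""
    f = parse_name(filename)
    return PurePosixPath(f["char"]) / f["lang"] / filename

print(target_path("gow_kratos_en_0042.wav"))  # kratos/en/gow_kratos_en_0042.wav
```

Running every incoming localized file through a validator like this is what prevents the “who knows what we’ll get back?” problem Murrant describes: a misnamed file fails loudly instead of landing silently in the wrong place.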

Whatever the language, quality will only improve with next-generation platforms. In addition to features such as 48 kHz/16-bit audio capability, the new platforms will allow engineers to mix in real time. “Often, the same line of dialog could be triggered in a variety of different locations, but you don’t know where it’s going to happen,” says deBeer.

“With more onboard DSP, we’ll be able to handle these variable situations with a lot more finesse,” Murrant adds. “We also hope to achieve 3-D placement, so if you’re listening to a conversation behind a door and then you open the door, the filter opens and you can hear it as it was originally recorded. It’s an absolutely amazing opportunity, and I don’t think there’s a single sound designer out there who would disagree.”
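The “door” effect Murrant describes is commonly implemented as a low-pass filter whose cutoff opens up as the occlusion goes away. A minimal one-pole sketch of that idea — not Sony’s engine code, and with illustrative cutoff values — might look like this:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate_hz=48000):
    """Apply a one-pole low-pass filter: y[n] = y[n-1] + a*(x[n] - y[n-1]).
    A low cutoff muffles dialog as if heard through a closed door."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate_hz)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def occlusion_cutoff(door_open: float) -> float:
    """Map door position (0.0 = closed, 1.0 = open) to a filter cutoff.
    Closed ~ 500 Hz (muffled); fully open ~ 20 kHz (effectively unfiltered).
    These endpoints are illustrative, not values from the article."""
    return 500.0 + door_open * (20000.0 - 500.0)

# As the player opens the door, re-run the dialog through a wider filter
# each frame; at door_open=1.0 you hear the line as originally recorded.
muffled = one_pole_lowpass([1.0, -1.0] * 200, occlusion_cutoff(0.0))
```

In a real engine this runs on the console’s DSP per audio buffer, with the cutoff driven by the game’s occlusion state rather than recomputed offline.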