Understanding Integration

MORE BASICS OF VIDEOGAME SOUND DEVELOPMENT

3D Studio Max's Track view lets you see characters from various perspectives.

Way back in the December 2007 issue of Mix, we scratched the surface of game sound integration techniques. (See “AudioNext: Roll Your Own Level.”) We explored the concepts of asset preparation and rudimentary level construction, which I hope helped you to incorporate a simple ambient sound. This time, we’ll dig a little deeper and introduce concepts for other game-specific classes of audio.

ANIMATIONS

Animation is simply the term for things with moving parts. A box can move if you throw it off a cliff, but that doesn't mean it animates. An easy example to picture is a human walking or running. The question becomes: How do you hook sound up to it?

There are two elements to consider when attaching a sound to an animation: the frame of the animation and what, if anything, the animation interacts with. As an example, consider footsteps. Across the walk-and-run cycle of a standard adult human male, footsteps vary widely. Typically, there is more scuffing when you run and a crisper attack when you walk. There are also different surfaces to walk on, such as wood, carpet or concrete. An editor such as the Neverwinter Nights 2 Toolset includes an Animation viewer, where you can observe various characters' walk, run and attack animations.

Let’s place a sound on an animation. Download a trial copy of Autodesk’s 3D Studio Max, one of the most widely used animation tools in the game industry, from usa.autodesk.com/adsk/servlet/mform?id=10083915&siteID=123112. Once you’ve done that, download a character rig from www.highend3d.com/3-Dsmax/downloads/character_rigs/4566.html. This will allow you to view a human figure. Now open the Track view.

From here, you can add keyframes; each keyframe can trigger a sound. This should give you an idea of how the animation-hookup system operates in many commercial game engines. Sometimes a keyframe is called an "annotation" or a "notification," but essentially you're making something happen when that frame hits. The complex consideration is material assignments, which are usually handled automatically. For example, you have a set of footstep sounds for concrete, wood and carpet. Each material in the game world is assigned a material property for footstep impacts. In whichever sound bank manager your engine uses (FMOD, Wwise, etc.), there should be a corresponding property with the same material entries; assign the material correctly in both places and the footsteps just "work."
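
To make that hookup concrete, here's a rough C++ sketch of a footstep notification handler keyed by surface material. The bank layout, file names and the onFootstepNotify callback are my own stand-ins for illustration, not any particular engine's or middleware's API.

```cpp
#include <cstdlib>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical footstep banks: one small pool of sounds per surface material.
std::map<std::string, std::vector<std::string>> footstepBanks = {
    {"concrete", {"foot_concrete_01.wav", "foot_concrete_02.wav", "foot_concrete_03.wav"}},
    {"wood",     {"foot_wood_01.wav",     "foot_wood_02.wav",     "foot_wood_03.wav"}},
    {"carpet",   {"foot_carpet_01.wav",   "foot_carpet_02.wav",   "foot_carpet_03.wav"}},
};

// Stand-in for the engine's "play this file" call.
void playSound(const std::string& file) { std::cout << "play: " << file << "\n"; }

// Called by the animation system when a keyframe tagged "footstep" hits.
// The engine supplies the material name of whatever surface the foot touched.
void onFootstepNotify(const std::string& surfaceMaterial) {
    auto it = footstepBanks.find(surfaceMaterial);
    if (it == footstepBanks.end()) return;            // no bank assigned: stay silent
    const std::vector<std::string>& bank = it->second;
    playSound(bank[std::rand() % bank.size()]);       // simple random pick per step
}

int main() {
    onFootstepNotify("wood");      // e.g., frame 12 of the walk cycle, on a wood floor
    onFootstepNotify("concrete");  // frame 24, the other foot, now on concrete
}
```

The important idea is the shared vocabulary: the same material names exist on the world geometry and in the sound banks, so the animation keyframe only has to say "a footstep happened here."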

You can download an evaluation of the Gamebryo game engine from www.emergent.net/en/Products/Gamebryo, as well as an evaluation copy of Wwise (www.audiokinetic.com/4105/try-wwise-now.asp), which also has a 3D Studio Max plug-in. All of the integration tools will be at your disposal — and not just for animation. But before we move onward, let’s delve into the difference between 2-D and 3-D sounds.

3-D SOUNDS

Most non-game studio junkies consider a 3-D sound to be something that’s panned across multiple channels in a multichannel file of some sort, be it a multichannel WAV or, once compressed, an AC3 file. In games, however, a 3-D sound is mono.

This is because the sound will be panned in real time across any channels that the player has hooked up (up to 7.1 on the PlayStation 3, for example). This panning can take place relative to either the player model or the camera, and in a first-person perspective game the camera and the player are one and the same. This is certainly nifty, but it presents a number of complex considerations outside the normal linear formats when it comes to real-time mixing.
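
As a rough illustration of why the asset stays mono, here's a sketch of real-time stereo panning computed from the listener's and source's world positions with a constant-power pan law. An engine spreads this across however many speakers the player has (up to 7.1), and the vector math and names here are my own assumptions rather than any engine's API.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Derive left/right gains for a mono source each frame from its position
// relative to the listener, using a constant-power pan law.
void panMonoSource(const Vec3& listenerPos, const Vec3& listenerRight,
                   const Vec3& sourcePos, float& gainL, float& gainR) {
    Vec3 toSrc = { sourcePos.x - listenerPos.x,
                   sourcePos.y - listenerPos.y,
                   sourcePos.z - listenerPos.z };
    float len = std::sqrt(toSrc.x * toSrc.x + toSrc.y * toSrc.y + toSrc.z * toSrc.z);
    if (len < 1e-6f) { gainL = gainR = 0.707f; return; }   // source on top of the listener: center it

    // Project onto the listener's right axis: -1 = hard left, +1 = hard right.
    float side = (toSrc.x * listenerRight.x + toSrc.y * listenerRight.y + toSrc.z * listenerRight.z) / len;

    // Constant-power pan keeps perceived loudness steady as the source sweeps across.
    float angle = (side + 1.0f) * 0.25f * 3.14159265f;     // maps -1..1 to 0..pi/2
    gainL = std::cos(angle);
    gainR = std::sin(angle);
}

int main() {
    float l = 0.0f, r = 0.0f;
    panMonoSource({0, 0, 0}, {1, 0, 0}, {5, 0, 5}, l, r);  // source ahead and to the right
    std::printf("L %.2f  R %.2f\n", l, r);
}
```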

DEFINING RADIUS

A 3-D positional sound comprises an inner and an outer radius, measured in whatever units your engine uses (meters, feet, etc.). Each sound's radius must be defined properly based on your game design mandates. If you're bucking for utter realism, an airplane flying low overhead would easily drown out other sounds. However, if you're going for a dramatic moment where the plane shouldn't have that effect and you want the music to take over, you can either make the plane's radius smaller or create a ducking group that immediately lowers the plane's volume when its radius reaches the player.
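
Here's a minimal sketch of how an inner/outer radius might translate into a gain value, assuming a simple linear rolloff; real engines typically offer several curve shapes, and the function name and numbers are hypothetical.

```cpp
#include <cstdio>

// Full volume inside innerRadius, silent beyond outerRadius,
// linear rolloff in between (engines also offer log and custom curves).
float attenuationForDistance(float distance, float innerRadius, float outerRadius) {
    if (distance <= innerRadius) return 1.0f;
    if (distance >= outerRadius) return 0.0f;
    return 1.0f - (distance - innerRadius) / (outerRadius - innerRadius);
}

int main() {
    // A plane with a 50 m inner and 400 m outer radius, heard from 200 m away.
    std::printf("plane gain: %.2f\n", attenuationForDistance(200.0f, 50.0f, 400.0f));
}
```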

In an area where a lot of sounds take place, using a few techniques such as compression and limiting in real time (as was done in Halo 3) is also necessary to avoid slamming your headroom too hard. Games don’t have to join the loudness war.
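
For illustration, a crude block-based limiter on the master mix might look like the following; a production limiter works per-sample with attack and release smoothing, so treat this strictly as a sketch of the headroom-protection idea.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Crude block-based limiter: if the summed mix would exceed the ceiling,
// scale the whole block down. (Real limiters smooth gain with attack/release.)
void limitBuffer(std::vector<float>& mixBuffer, float ceiling = 0.89f) {   // roughly -1 dBFS
    float peak = 0.0f;
    for (float s : mixBuffer) peak = std::max(peak, std::fabs(s));
    if (peak > ceiling)
        for (float& s : mixBuffer) s *= ceiling / peak;
}

int main() {
    std::vector<float> block = {0.2f, 1.4f, -1.1f, 0.6f};   // several loud sources summed together
    limitBuffer(block);
    for (float s : block) std::printf("%.2f ", s);
    std::printf("\n");
}
```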

OCCLUSION AND OBSTRUCTION

The concepts of occlusion and obstruction relate to the sonic changes that occur when there’s something between the player and the sound source. Consider the airplane that just flew overhead. Creating the proper sound image once we’re inside the plane would require using a lowpass filter and a bit of overall attenuation. Sounds familiar, but doing this in real time requires something called a raycast or linecast. This equates to the game engine drawing an invisible line from the sound source to you and ascertaining what’s between these two points.
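
Here's a sketch of what that raycast-driven check might look like. The raycastBlocked query and the attenuation and filter values are stand-ins; a real engine exposes its own physics query and its own units.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Stand-in for the engine's physics query (hypothetical): here we pretend a
// single wall sits at z = 0 and report "blocked" whenever the line crosses it.
bool raycastBlocked(const Vec3& from, const Vec3& to) {
    return (from.z > 0.0f) != (to.z > 0.0f);
}

struct OcclusionResult {
    float volumeScale;      // overall attenuation applied to the source
    float lowpassCutoffHz;  // muffle the highs when something solid is in the way
};

// One raycast per audible source per update: the engine draws an invisible
// line from the sound to the listener and checks what lies between them.
OcclusionResult computeOcclusion(const Vec3& sourcePos, const Vec3& listenerPos) {
    if (raycastBlocked(sourcePos, listenerPos)) {
        // Something is in the way, e.g., the airplane heard from inside the cabin:
        // duck the source and roll off the high end.
        return {0.4f, 1200.0f};
    }
    return {1.0f, 20000.0f};   // clear line of sight: leave the sound alone
}

int main() {
    OcclusionResult r = computeOcclusion({0, 30, 10}, {0, 0, -5});   // plane outside, listener inside
    std::printf("gain %.2f, LPF %.0f Hz\n", r.volumeScale, r.lowpassCutoffHz);
}
```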

Occlusion and obstruction can cause processing headaches for your programming team, even on next-gen games. One possible work-around is to set up all the volumes that require occlusion/obstruction yourself. It’s more work, but also allows more control. How do you do this?

Everything depends on your game engine. Say you use the Unreal Engine to create your level. It’s possible to create volumes in the Unreal editor, and you can assign occlusion values to these volumes. Volumes take the shape of spheres, cubes — just about any primitive you want — and you can specify the kind of occlusion (how much LPF, how much volume should be cut, etc.) to be applied to these volumes.
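
As a sketch of the hand-authored approach, the volumes can be as simple as boxes carrying an occlusion recipe, with the listener's position checked against them each update. The structure and field names below are illustrative, not Unreal's actual volume properties.

```cpp
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// A hand-placed occlusion volume: anything heard from inside it gets this
// lowpass cutoff and volume cut. (Names and values are illustrative only.)
struct OcclusionVolume {
    Vec3 min, max;          // axis-aligned box kept simple for the example
    float lowpassCutoffHz;
    float volumeCutDb;
};

bool contains(const OcclusionVolume& v, const Vec3& p) {
    return p.x >= v.min.x && p.x <= v.max.x &&
           p.y >= v.min.y && p.y <= v.max.y &&
           p.z >= v.min.z && p.z <= v.max.z;
}

// Return the settings of the first volume the listener is standing in.
const OcclusionVolume* activeVolume(const std::vector<OcclusionVolume>& volumes, const Vec3& listener) {
    for (const auto& v : volumes)
        if (contains(v, listener)) return &v;
    return nullptr;   // outdoors / unoccluded
}

int main() {
    std::vector<OcclusionVolume> volumes = {
        {{0, 0, 0}, {10, 3, 10}, 1500.0f, -9.0f},   // e.g., the airplane cabin interior
    };
    Vec3 listener = {5, 1, 5};
    if (const OcclusionVolume* v = activeVolume(volumes, listener))
        std::printf("inside volume: LPF %.0f Hz, cut %.1f dB\n", v->lowpassCutoffHz, v->volumeCutDb);
}
```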

REAL-TIME EFFECTS

Real-time effects were used in Halo 3 to achieve EQ and lowpass filtering with Waves technology. This allowed radio effects to be used on voice-over files without the need for a duplicate file. Quite an achievement, but how else can we use real-time effects?
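
One way to picture a run-time radio treatment is a bandpass applied at playback, so a single voice-over file serves both the clean mix and the radio mix. The sketch below uses two simple one-pole filters and hypothetical cutoff values; it's not the Waves processing Halo 3 used, just the general idea.

```cpp
#include <cmath>
#include <vector>

// A rough "radio" voice treatment built from two one-pole filters:
// a lowpass around 3 kHz and a highpass around 300 Hz, applied at playback
// so the same voice-over file can serve the clean mix and the radio mix.
void applyRadioEffect(std::vector<float>& voice, float sampleRate = 48000.0f) {
    const float twoPi = 6.2831853f;
    float aLow  = std::exp(-twoPi * 3000.0f / sampleRate);   // lowpass smoothing coefficient
    float aHigh = std::exp(-twoPi * 300.0f  / sampleRate);   // highpass smoothing coefficient
    float lpState = 0.0f, hpState = 0.0f;
    for (float& s : voice) {
        lpState = (1.0f - aLow)  * s       + aLow  * lpState;   // keep content below ~3 kHz
        hpState = (1.0f - aHigh) * lpState + aHigh * hpState;   // track the lows below ~300 Hz...
        s = lpState - hpState;                                   // ...and subtract them out
    }
}

int main() {
    std::vector<float> voice(480, 0.5f);   // pretend this is a decoded block of VO samples
    applyRadioEffect(voice);
}
```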

The next phase of real-time effects is represented by examples such as applying a Lexicon or Waves reverb in real time as you walk into a room. If you've already defined occlusion and obstruction volumes, chances are they can double as "reverb zones." You may have heard of EAX 4 (Environmental Audio Extensions) functionality from Creative Labs, the creators of the SoundBlaster soundcard line and parent company of E-mu Systems. Working entirely in software, EAX allows reverb crossfades between different zones and pre-loading of other zones, so you, the game character, can stand in a small hallway with a "plate" reverb effect while firing a shot into a heavily reverberant room and hear that room's reverb.
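
The crossfade itself can be as simple as blending two wet sends as the listener moves through the transition between zones. The sketch below is a bare-bones illustration with hypothetical preset names, not the EAX or Wwise API.

```cpp
#include <algorithm>
#include <cstdio>

// Two reverb presets (e.g., a "plate" hallway and a large hall), each with its
// own wet send. As the listener crosses the transition region between them,
// blend the sends so the tails crossfade instead of switching abruptly.
struct ReverbZone { const char* presetName; float wetLevel; };

void crossfadeZones(ReverbZone& hallway, ReverbZone& bigRoom, float blend /* 0 = hallway, 1 = room */) {
    blend = std::min(1.0f, std::max(0.0f, blend));
    hallway.wetLevel = 1.0f - blend;
    bigRoom.wetLevel = blend;
}

int main() {
    ReverbZone hallway = {"plate", 1.0f};
    ReverbZone bigRoom = {"large_hall", 0.0f};
    crossfadeZones(hallway, bigRoom, 0.25f);   // just stepping through the doorway
    std::printf("%s %.2f, %s %.2f\n", hallway.presetName, hallway.wetLevel,
                bigRoom.presetName, bigRoom.wetLevel);
}
```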

HUGE RESULTS FROM SMALL RESOURCES

A lot of folks — myself included — might insist that creating an entire soundscape in real time is the goal, yet that scenario is rarely the reality. Here are some proven, pro techniques for getting the most from what are often limited resources.

If you have an immersive, rich landscape with dozens of sounds of varying lengths (a blustering wind, an ocean, birds chirping, a distant foghorn, planes flying overhead), processing all those sounds in real time might put too much strain on your channel count. Instead, playing a pre-mixed multichannel surround file can relieve that strain and deliver the intense experience you want the listener to hear, rather than leaving it to chance in a real-time mix. To help you along, numerous vendors offer surround ambience collections, with walla and sonic backgrounds in a variety of sound effects packages suited to the needs of the game or film designer. For details on many of these, check out "Total Immersion Effects" in the February 2008 issue of Mix.

TOOLBOXES

Imagine a game in which you have 500 unique attack sounds per character, which isn't out of the ordinary; I wouldn't be surprised if Capcom's highly anticipated Street Fighter IV had this amount. Now say you have a fighting game with more than two characters onscreen at once (say, four, as in Gauntlet) and each character has a few hundred moves, each with its own set of visual effects and sound needs. One way to keep that load manageable is through toolboxes: batches of sounds that can be reused across multiple situations.

In the case of melee attacks, for example, you can use a set of small, medium and large whoosh sounds, as well as a whole slew of exertions (guttural “hah!” and “rgh!” grunts that go with the territory of melee attacks). For the folks on the receiving end of the attacks, a set of impacts can be used. Here, a couple dozen toolboxes spread across categories of impact (a simple example being “small,” “medium” and “large”) and the use of a good randomization scheme should ensure that there’s no repetition. This will save precious channels and memory while allowing you to focus on the all-important “money shot” sounds.
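
A toolbox can be as simple as a small pool of files with a picker that avoids back-to-back repeats. Here's a minimal sketch; the class, file names and the no-immediate-repeat scheme are my own illustration rather than any engine's randomization feature.

```cpp
#include <cstdlib>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// A "toolbox": a small pool of interchangeable sounds with a picker that
// never returns the same file twice in a row, so repeats aren't obvious.
class Toolbox {
public:
    explicit Toolbox(std::vector<std::string> files) : files_(std::move(files)), last_(-1) {}

    const std::string& pick() {
        int idx;
        do { idx = std::rand() % static_cast<int>(files_.size()); }
        while (files_.size() > 1 && idx == last_);   // reroll immediate repeats
        last_ = idx;
        return files_[idx];
    }

private:
    std::vector<std::string> files_;
    int last_;
};

int main() {
    Toolbox mediumWhoosh({"whoosh_med_01.wav", "whoosh_med_02.wav", "whoosh_med_03.wav"});
    for (int i = 0; i < 4; ++i)
        std::cout << mediumWhoosh.pick() << "\n";   // four melee swings, no back-to-back repeats
}
```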

FAREWELL, MY FRIENDS

There is far more to be explored in the world of game audio integration, but other duties have called me away from writing “AudioNext.” So I must bid a grateful farewell to Mix. It’s been an honor to have written for the best-known magazine in audio production, and I wish the best for everyone reading this. Hopefully, I’ll see some of you at GDC next year!

Alexander Brandon is the audio director at Obsidian Entertainment.
