With sales of more than 2 million copies worldwide, Ubisoft's remarkable Assassin's Creed for the Xbox 360 and PlayStation 3 formats (with a PC version released at the end of last month) is one of the most successful new videogames of the past year, a hit with both players and critics. The game's complex story involves a barkeep in the year 2012, who, through a machine called an Animus, is able to re-live the memories of one of his ancestors who lived during the Crusades — an Arab assassin named Altaïr Ibn-La'Ahad.
Much of the game action involves Altaïr wandering through different Middle Eastern cities in the year 1191, hunting down various people who are propagating the Crusades. Altaïr is one stealthy dude, and there's much pleasure to be derived from skulking about the incredibly well-rendered bazaars, deserted alleyways and scenic rooftops of places like Jerusalem and Damascus as he searches for his prey and encounters all sorts of resistance from various foes along the way. It's a fantastic yet realistic world, and the intricate sound design is one of the game's many outstanding features.
Photo: Courtesy Ubisoft
Recently, I interviewed the lead audio designer for Assassin's Creed — Mathieu Jeanson of Ubisoft Montréal — about some of the challenges of working on this sophisticated and spectacular game.
At what stage in the game’s production was the sound team brought in?
At Ubisoft, it’s standard that triple-A games have their own audio designer from pre-conception until release. Assassin’s Creed initially had a different audio designer during pre-conception, but then I came on as lead audio designer around the end of pre-conception. My role was to define how the audio would correlate to the game and to establish general audio artistic direction. This was based on brainstorming meetings with the creative director and game designers. I’ve found that establishing a good relationship and solid communication with the design and creative team is essential to the success of any project.
Once we officially started the game production, I brought on three other audio designers to form the sound team. The roles and responsibilities of each member were divided by specialty based on their audio vision and experience. One led sound effects and Foley, one managed the music and ambience, and one oversaw all voice content: onomatopoeias, AI dialog and script dialog. I was in charge of the artistic direction of all audio, team management and planning, tools, technical issues and studio booking.
I presume that, because of how long it takes to get finished animation, the sound team had to work from a combination of storyboards and/or crude animatics when developing sounds for the action?
Yes and no. In general, it's best that audio production wait to have the final design and game system more or less in place before starting to build the overall soundscape. In a game like Assassin's Creed, there are thousands of blended animations that needed audio implemented, and our time frame didn't allow us to wait for all animations to be final before recording or editing the sound.
Foley recording was planned to follow the animation-validation process, so a lot of management and validation were required to ensure that audio assets would remain coherent with their animations. Some parts of the animation design continued to evolve right up until the end of production. But as with all aspects of the audio, we often integrated placeholders and replaced them with final versions at the end, which sometimes required re-integration and tweaking in the engine.
What sort of gear did you employ for original Foley and effects recording?
For Foley recording, the signal path we prefer is a Neumann TLM 170 R microphone fed to a GML 2020 module — in our experience, one of the best recording channels ever built. We then record everything to Pro Tools HD, which is controlled using a ProControl surface. Depending on the character we're after for a given sound, we might reach for outboard gear, or sometimes plug-ins, just to obtain the specific color or that extra bite factor. In the digital world, we really like the McDSP products — FilterBank, Analog Channel — all great-sounding stuff that's easy and fun to use and feels like the real thing.
For several years now, the Ubisoft Montréal studio has had complete audio facilities to accommodate all in-house audio production — SFX, ambience, Foley, mixing and voice recording.
For picture reference, we start from a capture of the animation from the game editor. The final output given to us is a video file with a stable frame rate and a frame size supported by our Canopus DV playback system: 720×480, 29.97 fps, QuickTime DV.
What sorts of things went into the sword hits and horse gallops and such? Did you use libraries at all?
For the sword impacts and combat elements, we experimented with hitting and scraping a great number of different metals together — steel and aluminum rods, various blades — just to re-create the different impact intensities and sword types used in the game. For example, for just one sword-body impact, we recorded approximately five to eight layers that were combined into the final sound.
For the horse footsteps design, there was no way we could lift samples from a commercial sound library that would suit our needs. So we actually organized a field recording session away from the city and used a real horse hoof, with horseshoe, on fabricated surfaces just to get the movement, the weight and the level of realism that we were after.
In general, though, we will use a mix of in-house and commercial SFX libraries for ambience elements like wind, background noise and some specific elements that we don’t have in-house.
But we spent many months recording original assets for pretty much everything that we could hear in the game. For example, the footstep system uses more than 1,500 original recorded samples. We managed 22 surface materials and 14 different step intentions — sneak, walk, run, jump, land, pivot, et cetera — with three to eight variations for each intention per surface.
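To picture how a bank like that might be organized at runtime, here is a minimal sketch: a table keyed by surface material and step intention, with a random pick among the recorded variations. All names and structures here are hypothetical, since Ubisoft's engine code isn't public.

```cpp
// Hypothetical sketch of a footstep bank: surfaces x intentions, each entry
// holding the 3-8 recorded variations described in the interview.
#include <cstdlib>
#include <map>
#include <string>
#include <utility>
#include <vector>

enum class Surface { Stone, Wood, Sand /* ... 22 materials in the shipped game */ };
enum class Intention { Sneak, Walk, Run, Jump, Land, Pivot /* ... 14 in total */ };

class FootstepBank {
public:
    // Register one recorded variation for a surface/intention pair (load time).
    void Add(Surface s, Intention i, const std::string& samplePath) {
        bank_[{s, i}].push_back(samplePath);
    }

    // Return one of the recorded variations at random (each time a step fires).
    const std::string& Pick(Surface s, Intention i) const {
        const auto& variations = bank_.at({s, i});
        return variations[std::rand() % variations.size()];
    }

private:
    std::map<std::pair<Surface, Intention>, std::vector<std::string>> bank_;
};
```

In a scheme like this, Add() runs once per sample as banks load, and Pick() runs every time a step animation triggers a sound.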
For the crowd walla, we started with ambience from commercial libraries as a base layer. We also organized custom walla sessions, including varied languages and accents, but, in fact, most of the material came from previous Ubisoft projects.
At what bit rate were sounds recorded and delivered?
Internally, all audio files are at 24-bit, 48kHz broadcast WAV from recording to editing. Files are mostly delivered at 16-bit, 48kHz to the audio designer for implementation, but that doesn't mean that all files play back in the game at this quality. We keep audio as high-quality as possible, while also optimizing to respect the memory budget, which is established by the lead programmer.
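Delivering 16-bit files from 24-bit masters normally involves dither, a standard step in bit-depth reduction. The interview doesn't say what tool or dither flavor Ubisoft used, but as a rough illustration, a TPDF-dithered 24-to-16-bit conversion looks something like this:

```cpp
// Illustrative only: TPDF-dithered reduction of a 24-bit sample (stored in the
// low 24 bits of an int32_t) to 16 bits. Not taken from Ubisoft's pipeline.
#include <cstdint>
#include <random>

int16_t To16Bit(int32_t sample24, std::mt19937& rng) {
    // One 16-bit LSB equals 256 at 24-bit scale; TPDF dither is the sum of two
    // uniform random values, each spanning half that range.
    std::uniform_int_distribution<int32_t> u(-128, 127);
    int32_t dithered = sample24 + u(rng) + u(rng);

    // Drop the low 8 bits (arithmetic shift), then clamp to the 16-bit range.
    int32_t out = dithered >> 8;
    if (out > 32767)  out = 32767;
    if (out < -32768) out = -32768;
    return static_cast<int16_t>(out);
}
```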
What sorts of sound considerations were there for parts of scenes where there is a lot of repeated action?
Watching the same animation repeatedly is much less annoying than hearing the same sound being played in succession by the animation, so often it’s the role of the audio to simulate visual variation by using different SFX variations. It’s always a challenge to create a realistic world that doesn’t sound the same everywhere. But one of the biggest challenges was to manage the AI dialog properly since it was all driven by the AI system, and the complex logic of our AI system didn’t always automatically produce natural dialog behavior.
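A common way to attack the repetition problem Jeanson describes is a picker that remembers its last choice, so the same variation never plays twice in a row. A minimal sketch of the general technique (not Ubisoft's actual code):

```cpp
// General technique: a variation picker that never returns the same index
// twice in a row, keeping repeated actions from sounding machine-gunned.
#include <cstdlib>

class VariationPicker {
public:
    explicit VariationPicker(int count) : count_(count), last_(-1) {}

    int Next() {
        if (count_ <= 1) return 0;
        int idx;
        do {
            idx = std::rand() % count_;
        } while (idx == last_);  // re-roll immediate repeats
        last_ = idx;
        return idx;
    }

private:
    int count_;  // number of variations available
    int last_;   // index returned on the previous call
};
```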
At what point were the cool reverbs and other sonic manipulations done — before or during the mix?
The basic game mix is done progressively throughout the game's production. There are so many assets to manage that we don't want to wait until the final mix session to tweak all the little details. Two weeks before shipping, we play through the whole game, step by step, and balance whatever levels still need it. Most of our effects are pre-rendered to save CPU and memory, but we always use software reverb at the console output to simulate the acoustic environment of each location. We also did a lot of real-time volume and panning manipulation, such as dynamic ambience mixing based on the character's height to simulate different listening perspectives.
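The height-based ambience mixing he mentions can be pictured as a crossfade between pre-authored beds, say a street-level layer and a rooftop layer, driven by the listener's altitude. A minimal sketch, with an assumed 20-meter fade range and invented names:

```cpp
// Invented names: crossfade two pre-authored ambience beds by listener height.
#include <algorithm>

struct AmbienceMix {
    float streetGain;   // close crowd-and-market detail
    float rooftopGain;  // distant, wind-dominated city wash
};

// heightMeters: listener's height above street level. The 20 m fade range is
// an assumption for illustration, not a figure from the interview.
AmbienceMix MixForHeight(float heightMeters) {
    float t = std::clamp(heightMeters / 20.0f, 0.0f, 1.0f);
    return AmbienceMix{1.0f - t, t};
}
```

Called once per frame with the camera's height, a function like this fades the market detail out and the distant wash in as the player climbs.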
Was the sound for the game mixed using a conventional console or “in the box”?
All sound in the game was edited, recorded and mixed on "in-the-box" systems. Years ago, we worked with an SSL Axiom-MT and Pro Tools 24 as a recorder, but we have since upgraded the main studio to Pro Tools HD on Mac-based systems, using the ICON as a work surface.
At this point, do sound designers for Xbox games have to vary bit rates for FX as they did on earlier, less sophisticated platforms? That is, do you have to make determinations about which FX “need” to be heard more clearly than others?
Initially, Assassin's Creed was planned to ship only on the next-gen platforms Xbox 360 and PS3, so we didn't really have to worry that much about platform-specific assets and console-specific quality. We were more concerned about how much RAM would be available for sound on the PS3. But, fortunately, the PS3 has a hard drive, which we used to stream a lot of material that we could simply keep in RAM on the Xbox 360.
The quality difference between platforms is mostly due to the consoles’ proprietary compression formats — XMA in the case of Microsoft and a variation of ADPCM called VAG for Sony. Technically, the quality is variable from console to console, but unless you’re able to compare both platforms back to back, you won’t detect a significant sonic difference.