The 2016 AES show just wrapped up, and I saw some great new gear. Pro Tools 12.6 is rocking new clip-based processing among other updates, ATC’s SCM 12 is the company’s most affordable monitor to date, and Mojave’s exciting, tubey MA-1000 travels upmarket at $2,495. But these are all familiar names and gear; what’s really new in audio? I found out at an out-of-the-way group of booths tucked upstairs and down the hall at the Los Angeles Convention Center. The Audio for Virtual and Augmented Reality (VR/AR) exhibits were where the fresh and futuristic gear, concepts and workflows were hiding.
If you’re not familiar, AR was famously thrust into the news cycle by the Pokémon Go app, which has blown up all over the world. Even Norway’s PM was caught playing during a yawn-worthy debate in parliament. On the other side, VR is the immersive, computer-simulated environment created when you put on a headset and headphones and interact with a virtual world, event or game. This is the killer app that has Steven Spielberg, Live Nation, Facebook, CNN, HBO and NBC Sports partnering with companies like NextVR, Jaunt and the Virtual Reality Company, while Chinese and U.S. investors pour big dollars into the growing technology.
What are the new names and gear being developed for this jump into the future? Some you already know: Dolby, DTS and Sennheiser are in with Atmos, Headphone X and Ambeo VR products, but there are others who are taking the process above and beyond. Gaudio, Visisonics, Dysonics and Aurelia Soundworks all had impressive demos at the show, and they are not messing about.
Gaudio has developed binaural rendering technology that has been adopted as an international standard, MPEG-H 3D Audio Binaural Rendering. Gaudio’s tech team won over ISO/IEC MPEG after competing against global tech giants. It’s all about making the experience more immersive by exploiting the Head-Related Transfer Function (HRTF), the direction-dependent differences in loudness, timing and frequency response that occur as sound travels around your head and enters the ear through the complex shape of the external ear, or pinna.
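In code terms, binaural rendering boils down to filtering a sound through a measured left-ear/right-ear response pair. Here’s a minimal Python/NumPy sketch of the idea, using made-up placeholder HRIRs (real ones come from measured databases, and a standards-grade renderer like MPEG-H’s is far more sophisticated):

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono source in space by convolving it with a pair of
    head-related impulse responses (the time-domain form of the HRTF)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Placeholder HRIRs, not measured data: the right ear hears the source
# ~0.25 ms later and quieter, mimicking a source off to the listener's left.
fs = 48000
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[12] = 0.6

click = np.zeros(fs // 10); click[0] = 1.0   # a 100 ms buffer holding one impulse
stereo = render_binaural(click, hrir_l, hrir_r)
print(stereo.shape)   # (2, 4863): two ears, input length + HRIR length - 1
```

The hard part, and what the standards competition was really about, is doing this per source, per direction, at low latency, with HRTFs that still convince a listener whose ears don’t match the head they were measured on.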
I learned more about HRTF and how it works in VR from my old friend Michel Henein at the Visisonics booth. He explained the company’s technology for immersive audio as the difference between object-based and scene-based panning. Without some serious DSP, you can’t easily create believable real-time audio as you, the viewer, move through a complex scene: say, an environment where one wall is carpeted, another is glass, the ceiling is 40 feet above you and you’re standing in water. That requires scene-based panning, where complex acoustics are rendered quickly as you move around. Visisonics’ move into VR for games, film and more came from the development of its Realspace Audio Panoramic Camera, which has five HD cameras and 64 microphones embedded around an 8-inch sphere. The device was originally developed as a tool for industry: Tesla uses it to pinpoint air leaks inside a car as it sits in a wind tunnel, the Naval Research Lab uses it for submarine research, and it can isolate a conversation in a noisy, crowded environment (Ethan Hunt has two of these, I’m sure).
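To make the object/scene distinction concrete: object-based rendering simply derives channel gains from each source’s position. Here is a toy constant-power stereo panner in Python (my illustration of the general concept, not Visisonics’ algorithm; their scene-based renderer also models the room’s acoustics, which no per-object gain law can capture):

```python
import numpy as np

def object_pan(mono, azimuth_deg):
    """Object-based rendering at its simplest: the source carries a
    position, and the renderer turns it into channel gains. This uses a
    constant-power stereo pan law; a VR renderer would use HRTFs and
    full 3-D directions instead of two speakers."""
    # Map -90 (hard left) .. +90 (hard right) onto a 0..90-degree angle
    theta = np.radians((azimuth_deg + 90) / 2.0)
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono])

sig = np.ones(4)
hard_left = object_pan(sig, -90)   # all signal in the left channel
center = object_pan(sig, 0)        # ~0.707 in each channel, power preserved
```

Scene-based rendering, by contrast, encodes the entire sound field (Ambisonics being the common example) and re-renders it, reflections and all, as the listener moves through it.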
I saw the array working in the booth, its output rendered on a flat screen using Visisonics’ Realspace Acoustical Analysis Tool, where I could see the booth and surrounding environment, including me, mapped in 3-D. Whenever Michel or I spoke, a splash of light appeared localized around our mouths, and the same happened for every sound source around us in 360 degrees. All this works because the audio and video are synched by an internal FPGA processor, then collected and sent to a laptop over a USB 3 cable.
Next door at the Aurelia Soundworks booth, it was less about the tech and more about the company’s production process and projects. It sells an A-to-Z pipeline: sonic capture, optimization of assets, post-production sound design and soundscape composition, then Periphonic 3D audio mixing in a custom 360-degree Sound Dome, delivering stable, consistent results for almost any audio or surround format. Aurelia’s clients include NBC, for whom it produced the Blindspot season teaser for YouTube; Paramount, where it did sound design and a 360 mix for Zoolander No. 2 on the Jaunt VR platform; and ABC News, for Inside Syria, among many others.
Dysonics showed its RondoMic, a high-fidelity 360-degree mic array that captures any environment in a native binaural format. Like Aurelia, Dysonics offers an end-to-end service for capture, processing and delivery of VR audio. It also offers RondoMotion, a headphone motion tracker that works with any head-worn stereo transducer, letting users interact with sound relative to their head movement.
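Under the hood, head-tracked audio of this kind comes down to counter-rotating the sound field by the listener’s head angle on every tracker update. A sketch of a first-order Ambisonics yaw rotation in Python (an illustration of the general technique, not Dysonics’ implementation):

```python
import numpy as np

def rotate_yaw(w, x, y, z, yaw_rad):
    """Rotate a first-order Ambisonics (B-format) sound field about the
    vertical axis. Feeding in the opposite of the tracked head yaw keeps
    sources fixed in the room as the listener turns."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    # W (omni) and Z (height) are unaffected by a yaw rotation
    return w, c * x - s * y, s * x + c * y, z

# A source dead ahead: X carries the signal, Y (left-right) is silent.
w, x, y, z = 1.0, 1.0, 0.0, 0.0
# The listener turns 90 degrees right, so the scene counter-rotates +90
# and the source now arrives from the listener's left (Y channel).
w2, x2, y2, z2 = rotate_yaw(w, x, y, z, np.pi / 2)
```

Run fast enough against a low-latency tracker, that one rotation before the binaural decode is what separates convincing “the band is over there” audio from sound that is merely glued to your head.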
New gear, new processes and new delivery formats mean new jobs and directions in audio. Unlike 3-D TV and 5.1 audio, which never really caught fire for consumers, VR and enhanced audio for AR have real possibilities because they’re driven by a killer app: immersive, affordable, interactive video. You may think it’s not ready… but you may not have to wait as long as you think!