MixTips: Phil Tan
Atlanta-Based, Worldwide Reach
7/01/2013 5:00 AM Eastern
Don’t be fooled by the quiet, near-studious demeanor. Beneath the surface, hit-making mix engineer Phil Tan has an assassin’s knack for timing, a humorist’s sense of wit and a ninja’s ear for subtle power. He has worked on records that have sold more than 250 million units worldwide, and he has three big-time Grammy Awards: Best Contemporary R&B Album, 2005, Mariah Carey’s The Emancipation of Mimi; Best Rap Album, 2006, Ludacris’ Release Therapy; and Best Dance Recording, 2010, Rihanna’s Only Girl (In the World).
A 1990 graduate of Full Sail, he moved to Atlanta and interned at Jon Marett’s Soundscape Studios, where he met many of the people who helped drive his career and remain collaborators today, most notably Jermaine Dupri, right at the start of his explosion. Others included LA Reid, Outkast, Rico Wade and Organized Noize, Dallas Austin, and .38 Special’s Jeff Carlisi, with whom he recently started Hightone Talent, an incubator for artistic talent. Their first signing is a 17-year-old musician, Hallie Jackson.
He works mainly alone, with tracks delivered to his private studio in all types of formats. He mixes in Pro Tools, both TDM and HDX, with an Avid D-Control, monitoring through Dynaudio M-1s and JBL LSR6332s (both powered by Brystons), and RCF Mytho 8s. Mic preamps: John Hardy M-1, Millennia HV-3B, Universal Audio LA-610. Compressors: Tube-Tech CL-1B, Manley Variable-MU. Stereo bus EQ: Inward Connections DEQ-1. Go-to plug-ins: UAD-2, SoundToys, Softube, McDSP, Waves.
So, Usher and Ludacris?
Both amazing, very different.
Very rarely can you treat vocals the same exact way on every song, even for the same artist. You have to make decisions based on the key of the song, the tempo, the arrangement and probably most importantly, the emotional tone.
Usher’s more of a crooner, so I tend to think smooth—or smoove as they say here in da South—but he’s very dynamic, so I’d use a bit more compression so the quieter parts don’t get lost. Luda has one of the biggest voices ever—he has no problem filling up a track. Sometimes, if there are other artists featured on a track with him, one of the more difficult tasks is to make sure he doesn’t overpower them. He wants the listener to feel like he’s right in front of their face, spit and all, so I wouldn’t compress him as much. There’s a lot of automation on both of these guys, both from a level/volume standpoint and in plug-in parameters like threshold and release, depending on how hard or soft they’re delivering the lines and what needs emphasizing.
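To see why riding a compressor’s threshold between lines matters, here is a minimal sketch of the static gain math of a hard-knee compressor. This is illustrative only, with hypothetical numbers; it is not any plug-in Tan actually uses, and real compressors add attack/release smoothing on top of this.

```python
# Illustrative sketch: static gain reduction of a hard-knee compressor.
# Hypothetical values, not any specific plug-in mentioned in the article.

def gain_reduction_db(input_db: float, threshold_db: float, ratio: float) -> float:
    """dB of gain reduction applied to a signal above the threshold."""
    over = input_db - threshold_db
    if over <= 0:
        return 0.0
    # Above threshold, output rises only 1 dB per `ratio` dB of input.
    return over - over / ratio

# A softly delivered line vs. a hard-delivered one, with the threshold
# automated between them (levels in dBFS are made up for illustration):
soft_line = gain_reduction_db(-18.0, threshold_db=-20.0, ratio=4.0)  # 1.5 dB tamed
hard_line = gain_reduction_db(-6.0, threshold_db=-12.0, ratio=4.0)   # 4.5 dB tamed
```

Automating the threshold per section, as described above, lets the quiet phrase keep its dynamics while the shouted phrase gets reined in harder.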
Mariah and Rihanna?
Much of the credit for the mixes I’ve done for these two incredibly accomplished artists must go to the artists themselves, who deliver outstanding performances time and again, and to their recording engineers—Dana Jon Chappelle on the earlier Mariah albums and Brian Garten today, and Marcos Tovar for Rihanna. My job is simply to make them shine. Brightly. And not get in the way.
My thought process—or lack thereof, because I try to focus more on how things feel, not necessarily on technical correctness—is really the same as I would approach any other vocal performance, male or female. What is the song trying to communicate? That message has to come across. If it’s fun and playful, let’s make sure that comes across, and that the supporting parts in the track that contribute to that energy are properly featured. If it’s quiet and personal, let’s try to make the listener feel like the artist is singing/speaking directly to them—usually less processing in this case. If it’s grand and glorious, let’s make it sound massive, like you’re listening to a performance in a concert hall.
Beefing up drums. Do you commonly use real and sampled instruments? A hybrid?
Depending on what the drum sound needs to do in the track, I may use EQ, compression, distortion or combinations of the three to beef things up. SoundToys’ Decapitator, Crane Song’s Phoenix, Slate Digital’s Virtual Tape Machine are some of the distortion plug-ins I use regularly. Just depends on what flavor is called for.
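The general idea behind the saturation tools mentioned above can be sketched as waveshaping: drive the signal into a curve that rounds off peaks and adds harmonics. The tanh curve below is a textbook soft clipper, not the actual algorithm of any plug-in named here.

```python
import math

# Minimal waveshaping sketch: tanh soft clipping, the textbook form of
# the saturation/distortion idea. NOT the algorithm of Decapitator,
# Phoenix, or Virtual Tape Machine; just an illustration.

def soft_clip(sample: float, drive: float = 2.0) -> float:
    """Push the sample into a tanh curve, normalized so +/-1.0 maps to +/-1.0."""
    return math.tanh(drive * sample) / math.tanh(drive)

# A full-scale hit stays at the ceiling, while mid-level material is
# lifted toward it, thickening the perceived body of a drum:
peak = soft_clip(1.0)   # stays at 1.0
body = soft_clip(0.5)   # comes out louder than 0.5
```

The `drive` parameter plays the role of the “flavor” knob: more drive means a harder knee and more added harmonics.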
The main difference for me with real vs. sampled instruments is editing and automation. With live instruments, if they’re a bit loose, pocketing them may be necessary so they feel tighter. If it’s played live, then chances are the hits aren’t all at the same level, so some automation might be necessary. Sometimes you gotta be careful not to overdo all that, though, so it doesn’t sound too perfect and stiff.
Compression. When, where and why?
The first reason to compress is for level control purposes. Second is for added tone or character—the Empirical Labs Fatso is one example for this purpose; I use the UAD-2 version. Third is for effect, like side-chaining. Fourth is for gluing, like on instrument subgroups or the mix bus—here I typically use the SSL bus compressor and Shadow Hills Mastering Compressor, again the UAD-2 versions. Parallel compression can be helpful, too, especially if you need something to stick out just a bit more.
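The parallel-compression idea at the end can be sketched in a few lines: blend a heavily compressed copy under the dry signal, so quiet detail is lifted while loud peaks are barely touched. This is simplified per-sample math under made-up settings, not any specific hardware or plug-in.

```python
# Sketch of parallel (New York) compression: dry signal plus a heavily
# compressed copy at a lower level. Simplified instantaneous math with
# hypothetical settings, not a model of any gear named in the article.

def compress(sample: float, threshold: float = 0.2, ratio: float = 8.0) -> float:
    """Crude compressor applied to the absolute sample value."""
    mag = abs(sample)
    if mag <= threshold:
        return sample
    out = threshold + (mag - threshold) / ratio
    return out if sample >= 0 else -out

def parallel_mix(dry: list[float], wet_level: float = 0.5) -> list[float]:
    """Sum the dry signal with the compressed copy at a reduced level."""
    return [d + wet_level * compress(d) for d in dry]

quiet = parallel_mix([0.1])[0]  # quiet detail gains 50% relative level
loud = parallel_mix([1.0])[0]   # the peak gains only 15%
```

That asymmetry is exactly the “stick out just a bit more” effect: the blend raises the floor without flattening the peaks.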
Any thoughts on quality? Either in recording or in consumer delivery?
I get asked all the time, “What can I do to get good mixes?” My answer is usually, “Get good recordings of your parts.” If you’re mainly working on programmed beats, take the time to pick good sounds that complement each other, both musically and sonically. If you play an instrument, take the time to learn mic techniques—choices and placement—and experiment until you get the right sound.