There is no argument for dumbing down a recording... while recording

Some of my colleagues are of the opinion that recording and processing audio at 44.1 kHz/16-bit yields more than enough resolution. “You can’t hear anything above 20 kHz anyway, so what’s the point of using higher sample rates?” I’ve heard that more than once. I don’t make myself out to be someone with “golden ears” who can hear higher frequencies than a Labrador pup, but I certainly can hear two gnats having sex at 100 yards. And for my money, if I have to record those gnats as part of a special presentation on the National Geographic channel, I want as much resolution as possible—so 44.1 kHz ain’t gonna cut it. I want more.

There’s no denying the boundaries of human hearing: “20 Hz to 20 kHz, and that’s before you hit 18 or 20 years of age when you start to lose the high-frequency response.” True that may be, but we’ve read plenty of studies revealing that sounds above the audible band affect what’s within the audible band, whether it be imparting brightness or a sense of air and room ambience that’s difficult to measure.

Fifteen or 20 years ago there might have been a few valid technical reasons for recording, mixing or mastering at 44.1/16. For one, we had the CD standard, which was limited to those numbers. DVD-A and Super Audio CD promised more but died a quick death, at least in part because the competing formats splintered the market.

Then we had the issue of CPU power. Computers simply couldn’t keep up with simultaneously recording 30 tracks at 88.2 kHz or higher sample rates.

Those days are fading. The CD is long gone as the preferred audio format (perhaps vinyl is once again!), and downloads can be had in a variety of sample rates and bit depths. Computers are faster and storage is cheaper, so tracking a session at 96 kHz/24-bit is a reality.
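To put “storage is cheaper” in perspective, here’s a back-of-the-envelope sketch of what uncompressed PCM actually costs on disk. The 30-track, five-minute session is a hypothetical example, not a figure from this column:

```python
def track_bytes_per_second(sample_rate_hz, bit_depth, channels=1):
    """Uncompressed PCM data rate for a single track."""
    return sample_rate_hz * (bit_depth // 8) * channels

def session_gigabytes(tracks, seconds, sample_rate_hz, bit_depth):
    """Total size of an uncompressed multitrack session, in GB (decimal)."""
    total_bytes = tracks * seconds * track_bytes_per_second(sample_rate_hz, bit_depth)
    return total_bytes / 1e9

# Hypothetical session: 30 mono tracks, a five-minute (300 s) song.
print(round(session_gigabytes(30, 300, 44100, 16), 2))  # 0.79 GB at 44.1 kHz/16-bit
print(round(session_gigabytes(30, 300, 96000, 24), 2))  # 2.59 GB at 96 kHz/24-bit
```

So the jump from 44.1/16 to 96/24 roughly triples the data, which mattered when drives were small and expensive and is a rounding error today.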

So what’s the holdup? I’ve heard this: “Well, people are going to listen to my work as an MP3 on their phone, so what’s the point of recording at high-res?” What?! Depending on the phone, the file may be played in mono, but we’re not mixing in mono! Long before I worked on my first record, engineers far more capable than I were killing themselves to make great stereo recordings that were broadcast on AM radio. I don’t recall them saying, “What’s the point?”

The point is that we are documenting music history. Some of it is fluff and won’t be heard again after running its course on a Top 40 station, but if we could predict what music would disappear and what would stick, we’d be at the roulette table in Vegas and not behind a mixing desk. When you’re in the creative process, you’re usually too close to be objective about the longevity of a recording, so why not treat every project like it means something?

There are still some legitimate concerns about working at higher sample rates, but at this point most plug-ins will run at 88.2 and 96 kHz, if not 176.4 and 192. Hopefully your computer can keep up with the pack. And if it can’t, there are solutions from UAD, Waves and other manufacturers designed to spare your CPU a heart attack by moving the processing off the host machine. Maybe multitracking at 192 kHz isn’t that far off.

Even though much of our work may indeed end up being played in MP3 format on a phone or over YouTube (yuk), that’s no excuse not to push the art. I’m sorry, but I don’t record or mix just to satisfy the person who’s listening on a phone while doing their laundry. I do take that into consideration. I’ll listen to a mix and make adjustments after a test conversion to MP3, just like any engineer would listen on an Auratone to make sure their mix will sound balanced on a car stereo.
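If you want to run that kind of MP3 reality check yourself, one common route is a quick conversion with ffmpeg (assuming a build that includes the libmp3lame encoder; the file names here are placeholders):

```shell
# Render a 128 kbps MP3 reference from the stereo master for a translation check.
ffmpeg -i mix_master.wav -codec:a libmp3lame -b:a 128k mix_check_128k.mp3
```

Listening at a deliberately modest bit rate like 128k tends to expose harshness and top-end artifacts faster than a 320k file would.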

In fact, I’d say that because a lot of our work ends up being heard on inferior formats, that’s precisely the reason to squeeze every dB of resolution while you’re in the process. If it’s going to be converted to a low-res format, that’s all the more reason to make the original recording as good as it possibly can be. You never know if it might end up on vinyl.