
Can You Be Replaced by a Software Algorithm?

For decades, mastering was considered an arcane, difficult art capable of being mastered (sorry about the pun) only by those with Jedi-level skill sets. Much of this was due to vinyl, which involved a complex set of tradeoffs: level, length, distortion, bass and the infamous RIAA curve. And, of course, there was the value of a professional, trained set of ears that could evaluate music and oversee the process of translating audio so it could play over multiple systems.

Enter digital audio. After vinyl’s constraints disappeared, all you needed to master was quality plugins, a computer-based editor—and great ears. Then came the internet, and people could send files off via email to be mastered.

Now we have LANDR, where for $39 a month you can submit mixes and have them mastered by an algorithm. According to the site, “LANDR incubated and refined algorithms developed over eight years of university research, testing and tweaking based on feedback from trained audio experts. Our team is composed of music industry veterans…who know exactly what the mastering process needs to deliver…Our system is built around an adaptive engine that ‘listens’ and reacts to music, using micro-genre detection to make subtle frame-by-frame adjustments selectively using tools like multi-band compression, EQ, stereo enhancement, limiting and aural excitation based on the unique properties of the song.”

Then there’s AAMS, a software program that “provides suggestions for Equalizer, Multi-Band Compression and Loudness settings with internal DSP Processing to make all such audio corrections within the AAMS Program and creates a final mastered audio file…Essentially, the program takes a specific audio file and then compares the mixing settings to over 100 different styles within its very own database.”

Now before you get outraged, consider that many mixes can indeed sound better with a little bit of limiting and a balanced frequency response. Those processes can be done easily; add a few user controls to optimize those options, and the process is cheap, fast and delivers results that improve on the original mix.

But an algorithm can’t decide to give a 2 dB boost to the snare hit that comes just before the chorus, bring up the intro by a dB before the vocals enter, take out two bars of an overindulgent solo, add a hint of reverb or customize the fadeout. The algorithm can’t ask for you to leave a few seconds at the beginning so you can take a “noiseprint” for noise reduction, or ask the artist to provide a different mix with vocals up 1 dB because adding dynamics changed the mix’s balance. And if you’re putting together an album, algorithms can’t choose the right order, or decide where to crossfade between cuts.

I was once given a track to master, and when I listened to it, I realized it could be treated as a more ambient track or as more of a dance cut. So I ended up doing both. Which would the algorithm have chosen? (The artist chose both.)

My concern isn’t that “mastering by algorithm” has no value, but that mastering will cease to be seen as the final stage of the creative process and instead be treated as a purely technical exercise. Another concern involves algorithms in general, because they base the future on the past. For example, online streaming services love to “suggest” new music based on what you listened to before. Yet with EDM on YouTube, views by 35- to 49-year-olds grew by 80 percent last year. If they listened only to the music they grew up with—Janet Jackson, Elton John, Usher, Mariah Carey—I don’t see an algorithm saying “Hey, you might want to listen to Armin Van Buuren or Ilan Bluestone.”

People often consider the 1960s a period of unprecedented musical growth and innovation; I believe a lot of that was due to “decategorization”—you could see a concert that opened with a folk singer, followed by a jazz group, and headlined by psychedelic rockers. FM radio playlists were wide open. Or consider how Elvis Presley combined R&B and country to create something compelling…or the “happy accidents” that became the “hooks” we remember from various hits.

We create music, not audio. Art is indeed enamored of chance, and unless software algorithms can become artists as well as technicians, we’ll need humans involved in the process of creating great music.

Author/musician Craig Anderton has given seminars on music and technology in 38 states, 10 countries, and three languages. Listen to his music at