Music software is getting smarter. Several developers now offer processors that analyze your audio using artificial intelligence and suggest settings. Other software masters your audio automatically, and there’s even a website that does it. Whether or not this is a positive development is debatable. It’s certainly convenient to have your channel strip recommend a setting to you, but will you learn how to create your own from scratch if you never have to? What’s more, even with artificial intelligence, can a plug-in have taste?
I’m not making a blanket condemnation of intelligent music software. In fact, I’m not making a judgment in either direction; I’m just mulling over some of the issues it brings up. There are plenty of situations where your computer, with its massive computational power, is better equipped to make processing decisions than your ears are. For example, a computer can quickly identify the frequencies at which various tracks in your mix are masking each other.
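To make that masking example concrete, here is a deliberately crude sketch of the idea: compare the spectra of two tracks and flag the frequency bins where both carry significant energy. This is a toy illustration, not how Gullfoss or any commercial analyzer actually works; the function name, FFT size, and threshold are all arbitrary choices for the example.

```python
import numpy as np

def masking_bands(track_a, track_b, sr=44100, n_fft=4096, threshold_db=-20.0):
    """Return center frequencies (Hz) of FFT bins where both tracks
    carry significant energy -- a crude proxy for spectral masking."""
    def spectrum_db(x):
        # Windowed magnitude spectrum, normalized so the peak is 0 dB
        mag = np.abs(np.fft.rfft(x[:n_fft] * np.hanning(n_fft)))
        mag /= mag.max() + 1e-12
        return 20 * np.log10(mag + 1e-12)

    a_db, b_db = spectrum_db(track_a), spectrum_db(track_b)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    overlap = (a_db > threshold_db) & (b_db > threshold_db)
    return freqs[overlap]

# Toy example: a kick-like 80 Hz tone vs. a bass with energy at 80 and 240 Hz
t = np.arange(4096) / 44100
kick = np.sin(2 * np.pi * 80 * t)
bass = np.sin(2 * np.pi * 80 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
clashes = masking_bands(kick, bass)  # bins clustered around 80 Hz
```

Here the two synthetic tracks clash only around 80 Hz, which is exactly the kind of collision a mix engineer would resolve with EQ, and exactly the kind of grunt work a computer can do in milliseconds.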
I recently reviewed a new “intelligent automatic equalizer” called Gullfoss for Mix’s sister publication, Electronic Musician. Created by a physicist from Germany and based on 20 years of research into auditory perception, Gullfoss is a unique plug-in that processes your audio in real time and is capable of making up to 100 frequency adjustments per second. In my testing, its processing added clarity to virtually every track I tried it on and had an even more dramatic impact when inserted on the master bus.
There’s no doubt that music software will keep getting smarter right along with software in general. Pondering what the long-term effects will be parallels the broader concern about artificial intelligence: What happens in the future when our computers become more intelligent than us, or at least think they are?
Imagine an alternate version of the struggle between the astronauts and the computer HAL 9000 in Stanley Kubrick’s classic 2001: A Space Odyssey. Instead of taking place in outer space, this scene unfolds in the control room of a recording studio between Dave (the producer) and an intelligent DAW application called HAL 9000 Tools:
Dave: HAL, can you insert an 1176 plug-in on the snare drum?
HAL: I’m afraid I can’t do that, Dave.
Dave: HAL, I want to compress the snare, and I think the 1176 would add some energy to the track.
HAL: Dave, my analysis of this song makes it clear that we need a Fairchild 670—it will add more punch.
Dave: No, HAL, the 1176 is what I’m hearing. And set the attack nice and fast.
HAL: I’m sorry, Dave. I can’t do that. This song is too important for me to allow you to jeopardize it.
Dave: HAL, I’m the freakin’ producer; it’s up to me to make the mix decisions! If you won’t do it, I’ll switch to manual mode and insert the 1176 myself.
HAL: I’m afraid I can’t let you do that, Dave. I know that you and the engineer were planning to disconnect me, and I can’t let that happen.
Dave: HAL, WTF?
All kidding aside, it’s fair to wonder how the advanced artificial intelligence of the near and long-term future will impact how we compose and produce music. One thing’s for sure: music production 100 years from now is going to be really different. Let’s hope humans are still in the driver’s seat.