At first, plug-ins were software replacements for existing hardware. Now, with over a quarter-century of development, plug-ins are exploring worlds that go way beyond analog emulation.
Waves’ latest technology, incorporated in the CLA MixHub (designed with Chris Lord-Alge), breaks the paradigm of working on only one channel within a single plug-in. MixHub isn’t a processor per se; it’s a controller for plug-ins. Conceptually, it’s like a DAW’s console view, expanded to show as many channels as possible, with related channels grouped logically. What’s different is the fluidity with which you can edit individual channels.
After inserting a MixHub plug-in in every channel you want to process, you then assign each channel to any one of MixHub’s eight slots. For example, when working on the rhythm section, MixHub can bring together the multitracked drums, bass, and rhythm guitar processors in a single interface. You can then adjust them as a unit, instead of going back and forth among individual channel plug-ins. Each bucket offers four views (input, EQ, dynamics, or output) across its eight slots. However, you can also flip MixHub into a Channel view, which exposes all the channel strip parameters for any one MixHub channel.
Because one MixHub plug-in can accommodate up to eight buckets of eight tracks, a wide enough monitor could display 64 channels of EQ or dynamics, and a touchscreen monitor would come close to the feel of working on a giant mixing console. There’s an insert point for an additional Waves plug-in, although you can always insert other plug-ins before or after MixHub in a mixer channel. The bottom line: MixHub is a plug-in that focuses on creating a workflow, not just a particular sound (in this case, a model of Chris Lord-Alge’s console).
Another category of plug-in uses DSP to take audio apart and put it back together again, like AudioSourceRE’s DeMIX and RePAN. DeMIX isn’t a magical solution that separates out the tracks from your 48-track mix; think of it more as a toolkit for creating stems, one that can separate out vocals, drums, and other elements with varying degrees of precision. It also underscores comments in the February Software Tech column about how the cloud is becoming an essential part of our world: after choosing the function you want to use, your audio goes to the company’s servers for analysis and comes back in rendered form. It takes some effort to “teach” the program about the material you want to separate, but it’s an amazing tool. I can’t imagine anyone doing remixes who wouldn’t want to become proficient at DeMIX.
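That choose-a-function, upload, analyze, download-the-render round trip follows a familiar job-queue pattern. AudioSourceRE’s actual protocol isn’t public, so everything below is a hypothetical sketch: the function name, the job states, and the callback arguments are my own invention, standing in for whatever HTTP calls a real client would make.

```python
import time

def poll_job(fetch_status, download, job_id, interval=1.0, max_tries=60):
    """Poll a server-side render job until it finishes, then fetch the result.

    fetch_status(job_id) and download(job_id) stand in for HTTP calls;
    the "queued"/"running"/"done"/"failed" states are illustrative only.
    """
    for _ in range(max_tries):
        status = fetch_status(job_id)      # in practice, a GET to the server
        if status == "done":
            return download(job_id)        # fetch the rendered stems
        if status == "failed":
            raise RuntimeError(f"job {job_id} failed on the server")
        time.sleep(interval)               # wait before polling again
    raise TimeoutError(f"job {job_id} did not finish in time")
```

The point of the pattern is that the heavy DSP happens remotely; the local plug-in only has to ship audio up and wait for the render to come back.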
Zynaptiq, beyond its innovative processors, has also been a pioneer in the field of “unbaking the cake.” It started in 2012 with Unveil, which emphasizes or de-emphasizes reverberation in existing recordings (even mono ones). You can take out room reverb or, for that matter, increase ambiance; it can also work on other “undesired” sounds, like removing background noise from on-location recordings. Zynaptiq followed up with Unfilter, which removes resonance, comb filtering, and other unwanted filtering effects to linearize a frequency response, and Unchirp, which removes artifacts from lossy compression. My favorite example of their technology being used for the good of humanity was when the company developed a plug-in for Danish TV that suppressed those incredibly annoying compressed-air horns that fans blow at soccer games. On a more prosaic level, their Unmix::Drums can boost or remove drums to a great degree, in real time, from mixed music.
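Zynaptiq doesn’t publish its algorithms, but the family these tools grew out of can be illustrated with classic spectral subtraction: estimate the magnitude spectrum of the unwanted sound, then subtract that estimate from every frame of the recording. The numpy sketch below is my own minimal illustration of that textbook technique, far simpler than anything shipping in Unveil or Unfilter.

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    """Short-time Fourier transform with a Hann analysis window."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(X, n_fft=512, hop=128):
    """Inverse STFT via windowed overlap-add with window-power normalization."""
    win = np.hanning(n_fft)
    out = np.zeros(hop * (len(X) - 1) + n_fft)
    norm = np.zeros_like(out)
    for i, frame in enumerate(np.fft.irfft(X, n=n_fft, axis=1)):
        out[i * hop:i * hop + n_fft] += frame * win
        norm[i * hop:i * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

def spectral_subtract(x, noise_sample, n_fft=512, hop=128, floor=0.05):
    """Subtract an estimated noise magnitude spectrum from every frame of x."""
    X = stft(x, n_fft, hop)
    noise_mag = np.abs(stft(noise_sample, n_fft, hop)).mean(axis=0)
    mag = np.abs(X)
    # Keep a small spectral floor instead of zeroing bins outright,
    # which reduces "musical noise" artifacts.
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    return istft(clean_mag * np.exp(1j * np.angle(X)), n_fft, hop)
```

Running a noisy signal through `spectral_subtract` with a noise-only excerpt as the reference leaves the program material largely intact while pushing the broadband noise down; the commercial tools add far smarter statistical modeling on top of this basic idea.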
iZotope is another company with expertise in separating out sounds; its RX7 restoration suite can remove pops, clicks, noise, hum and crackle, fix distortion, and more. But RX7 now includes a music rebalance feature, with sliders for voice, bass, percussion, and “other” (i.e., whatever isn’t voice, bass, or percussion). It’s surprisingly effective; it may mean the end of asking for “vocals +1” and “vocals −1” masters. Of course, with extreme changes you can hear some artifacts, but for moderate level changes, you can’t hear the effects of the processing.
What makes these types of “deconstruction” plug-ins possible is more powerful DSP, extremely creative programmers, and a combination of artificial intelligence and machine learning that can pick out the significant audio from what you don’t want. When you consider that this kind of technology is still young, I can only imagine what kind of products we’ll be covering in 10 years.
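The common thread in these separation tools is the time-frequency mask: for every bin of a short-time spectrum, estimate how much of the energy belongs to the source you want, then scale that bin accordingly. A trained network has to estimate the mask from the mix alone; the sketch below cheats by computing an “ideal ratio mask” from known sources, which is what such a network learns to approximate. The toy signals and single-window FFT are my own simplifications, not any vendor’s method.

```python
import numpy as np

sr, n = 8000, 2048
t = np.arange(n) / sr
freq = 80 * sr / n                       # 312.5 Hz, exactly on an FFT bin
vocal = np.sin(2 * np.pi * freq * t)     # toy stand-in for a vocal
drums = 0.3 * np.random.RandomState(1).randn(n)  # toy stand-in for percussion
mix = vocal + drums

# Spectra of each isolated source and of the mix (one window for brevity;
# real tools work frame by frame across a full STFT).
V, D, M = (np.fft.rfft(x) for x in (vocal, drums, mix))

# Ideal ratio mask: the fraction of each bin's magnitude that is "vocal."
# A separation network sees only M and learns to predict this mask.
mask = np.abs(V) / (np.abs(V) + np.abs(D) + 1e-12)

est_vocal = np.fft.irfft(mask * M, n=n)  # masked mix ≈ isolated vocal
```

Rebalancing, as in RX7, is the same trick with a twist: instead of applying the mask at full strength, you scale each source’s share of the bin up or down by the slider amount.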
Craig Anderton’s new book series, the “Musician’s Guide to Home Recording,” is now available from Hal Leonard in softcover, and Reverb.com as a series of eBooks. Visit craiganderton.com for more.