
Craig Anderton’s Open Channel: AI and the Fingerprint Factor

Craig shares how to verbally smack down anyone who suggests that AI scraping the web for examples is the same as musicians drawing on their influences.

“The key to success is sincerity. If you can fake that, you’ve got it made.”
—George Burns (though I’m guessing that the phrase pre-dated him)
Craig Anderton.

A previous column, “Strip-Mining the Emotion from Music,” covered how the misuse of technological tools can remove emotion from music, but I didn’t see the full scope of the problem: The erosion of emotion. If I hear the following argument one more time, I will reluctantly conclude that while there may be intelligent life on Earth, it may not have a soul:

“Musicians who diss AI for scraping the internet are dishonest. All humans draw on influences—the music they’ve heard their whole lives. AI draws on influences, too. There’s no difference, so just shut up while AI song generators get clicks for me.”

As a public service, this column collects the data you need to deal with anyone who thinks that argument holds any more water than tissue paper in a downpour.

INFLUENCE V. IMITATION

Technically speaking, both human composers and AI composition models are pattern processors. True. Both draw on existing material to create something that didn’t exist before. Also true.

Then the analogy comes to a screeching halt. Humans make choices about which influences to internalize, reject, modify or even subvert. Then those influences are filtered through memory, emotion and context.

AI doesn’t choose in any comparable or meaningful way. So far, AI’s “intelligence” is the integration of statistics. It manipulates patterns based on training, not emotional relevance or personal experience.

Nor does it have any memory other than what we give it. Research increasingly shows music is inextricably linked with memory. Whether it’s remembering the music playing when you had your first kiss, or dementia patients briefly recognizing their past, music is part of the human OS.

Furthermore, it’s crucial to differentiate between influence and imitation. Humans transform their influences, like Jack Bruce citing Bach as an influence on playing bass with Cream. Did his bass playing imitate Bach? No. Was he influenced by what he learned about melody and counterpoint from Bach? Yes. If you prompted AI to create a new Jack Bruce bass part, it could incorporate only Bach influences pre-baked into his playing. It couldn’t incorporate other aspects of Bach’s influence that hadn’t surfaced yet.

Imitation is a one-way street: It gives, you take. Influence is a dialogue with the creative process. Upon recognizing an influence (consciously or unconsciously), creativity takes over and decides how to process it. AI is simply doing input-output mapping. It can learn to recognize tension and release in music, map it and model it—but it can’t reinterpret it creatively.

THE FINGERPRINT FACTOR

Fingerprints are unique to every individual. A human’s music carries that person’s fingerprint, which incorporates quirks, pain, memories, joy, flaws, biases and more. AI outputs are shaped only by existing data and the prompts you feed it. It has no identity. Even when trained on a specific artist, it mimics the what of the artist’s style without understanding the why.

Miles Davis’ music and persona were shaped by his training as a boxer. Can AI filter its music through what it was like to grow up in East St. Louis and train with pros in a local gym? When Miles recorded his outstanding Jack Johnson album, he wasn’t imitating the feeling of being a boxer. He was influenced by having been a boxer.

Sure, the surface-level comparison—“both AI and humans use influences, so shut up”—sounds valid. But humans live their influences dynamically. AI scrapes its influences statically.

And let’s loop back to my article about the difference between sound quality and emotional quality. A recent study (by Kimaya Lecamwasam, MIT Media Lab, Cambridge, Mass., and Tishya Ray Chaudhuri, Myndstream, London, England) found that listeners rated human-composed music as more emotionally effective, even when they preferred the sound of AI-generated tracks. That suggests emotional authenticity does matter, even if it’s hard to quantify, and felt rather than understood intellectually. (Sidebar: it also means listeners really don’t care what mic preamp you use. Sorry!)

Some think that “prompt engineering” can convey emotion—creators feed descriptions like “drunk lover with a broken heart on a rainy day” into an AI generator. AI matches these cues with training data, then dutifully generates music that aligns with similar patterns in its database. But it can only align statistically, not emotionally. AI can never filter being a drunk lover with a broken heart through music. I’ve always said my music is nonfiction.

AI can’t say that.


To be fair, some research systems use listener data (like pupil dilation or heart rate) to adjust music dynamically and trigger emotional responses. I could see this being potentially helpful in medical applications. But again, those emotional responses are based on what statistically elicits an emotional response. People may react to a statistic. That doesn’t mean they’ll fall in love with it.

It’s arguably true that thanks to AI, “creators” who think there’s a shortcut to making compelling music can take a shot at getting their revenge on the musician who stole their sweetheart. And they’re welcome to think they’ll take over the world as musicians go the way of the dinosaur.

But as of this writing, it’s not The Velvet Sundown with the best-selling and most-streamed recording globally. It’s Blackpink. Maybe it’s because K-pop has a devotion to emotion…and the Fingerprint Factor.
