
Craig Anderton’s Open Channel: Does AI Stand for “Attorneys Incoming”?


Craig Anderton

Last September, I wrote an Open Channel about AI. Is it too soon to revisit the topic? Well, AI is reshaping the world at an exponential rate. Soon, AI-related articles will need to be time-stamped in date:hours:minutes.

When social media was just a glimmer in the Internet’s eye, no one knew it was going to rip apart the fabric of society. No one with an early cell phone saw the straight line between that device and Kodak going bankrupt. AI is already raising legal questions for which no one has an answer.

Intellectual property disputes over AI-generated content are heating up. For example, according to Reuters, Getty Images has initiated legal proceedings against Stability AI. Getty alleges the copying of millions of its images. In a statement, Getty Images said that “[Stability AI] chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.” Stability AI fired back that “anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law.”

(To check out a current suit, try Andersen v. Stability AI Ltd, U.S. District Court for the Northern District of California, No. 3:23-cv-00201.)

Today, it’s about images. Tomorrow, add music.

An engine that pasted together bits and pieces of existing recordings would produce recognizable copies. But an engine that analyzed chord progressions and melody lines would be far more likely to obscure infringement, even deliberate infringement.

Suppose Disney starts using AI to generate movie soundtracks. Then, suppose that AI generates the title song for a movie with a prominent melody line that’s identical to “Yesterday.” Disney could argue that it was unintentional, but there’s the precedent of George Harrison’s infringement case involving “My Sweet Lord.” He was found liable for “subconscious plagiarism.” No intent was proven, yet he had to pay damages.

So, in the fictional Disney case, who gets sued? Probably Disney, for not noticing the melody was identical. But could Disney then turn around and sue the company that created the AI engine by claiming negligence and a disregard of copyright law that put Disney in jeopardy? Or sue the “composer” who entered the search terms that came up with the song, then passed it along to Disney?

Or suppose an AI engine writes a hit song. Who gets the royalties? The song would not have existed without the AI engine. But it also would not have existed without someone specifying the parameters under which the AI engine created the song. And what about the music the AI engine “borrowed” to create the song? Are the original rights holders owed anything?


How will copyright laws change? Adding a little © isn’t going to cut it. Maybe artists can include a key in any digital work that indicates to cooperative AI engines that the material is off-limits for scraping. But any attempt to do that will be challenged by those engines’ creators and ignored by pirates. Or, maybe artists will encrypt their digital media, and consumers will buy a key to unlock the encryption. But what prevents a company from paying for the key and then adding the content to its AI engine?

If the Internet alone didn’t mean the end of copyright as we know it, “AI + Internet” will create unlimited controversies and legal maneuvering around copyright law. Will “fair use” just throw up its arms and say anything that any artist releases into a digital space is fair game for anyone to use in any way they want? Will any mechanism provide financial compensation to the musicians and artists from whom AI engines draw their material for commercial purposes? I have no answers. Neither does anyone else. Stay tuned.

The world is turning upside down. Job security used to mean doing creative work that could never be displaced by automation. Oops. In the future, maybe job security and the big bucks will be in plumbing and road services, because AI can’t fix a broken pipe or change a flat at 3 a.m. What’s old is new again, and perhaps that relates to something of interest to us all—product reviews and tech articles.

I’m already seeing web content about music technology that looks suspiciously as if ChatGPT should be credited as a co-author. But as we all know, AI can give incorrect answers, whether as text or in an image of a guitarist with eight fingers. What’s worse, those incorrect answers then get folded indiscriminately into an existing body of knowledge and inch closer to being accepted as correct. I know for sure this will happen, because Nostradamus was my grandfather and I’m a Nobel Laureate. Welcome to the George Santos-ization of information.

Will this mean that publications with editors and a staff that vets articles will regain their former positions of authority? Maybe consumers will start to look at any random “content” on the Web as suspect, and yearn for something like—well, Mixonline.com—where they know the material was vetted by humanoid bipeds who have a track record of accurate reporting.

I’m seriously thinking of adding the following disclaimer to all my articles: “This article was written, researched and proofed for accuracy by one or more human beings.”

Welcome to 2023. Cue sound effect of genie exiting bottle.
