
The shotgun wedding of artificial intelligence and recorded music came earlier than you might realize. In the 1950s, researchers at Bell Labs in New Jersey were already coaxing primitive data-driven bleeps and bloops out of early computers. These technical investigations were not meant for human enjoyment, and for decades, they stayed largely sequestered within university labs and on the fringes of experimental music. 

That all changed in the mid-2000s with the arrival of deep learning, which allowed computers to learn by example rather than instruction. Suddenly machines could pick up patterns and perform complex tasks, like composing music, without being explicitly programmed for them. Paired with vast archives of scraped — or as critics bluntly put it, stolen — digital audio, these systems have now escaped from the lab and gone directly to music production, streaming platforms and copyright cases. 

For guitarist and activist Marc Ribot, the stakes are clear. “They’re chopping up our work and repackaging it as the work of this wonderful robot,” says Ribot, speaking by phone from New York. We spoke in mid-January, the day before he and other musicians descended on the Manhattan headquarters of Warner, Universal and Sony for an “Emergency Demonstration to Stop Major Label AI Licensing.” The event was a warning shot against AI deals currently being negotiated without artists’ involvement. 

Ribot has had a highly praised solo career as well as collaborations with Tom Waits, Robert Plant, The Black Keys and many others. He has become a vocal critic of how AI systems ingest and reassemble human creativity en masse. In our conversation, he notes that once a single guitar line, drum pattern or vocal fragment is isolated, it becomes raw material for AI, something that can be endlessly reshaped and recombined into new recordings without the musician who created it ever knowing.

“What’s potentially tragic on a cultural level and harmful on a social level is that all those tens of thousands of musicians who are out there imitating their hero,” Ribot says. “And all the mistakes they made — well, the name for all those mistakes is … our culture.” 

While Ribot describes the cost to culture, Dr. David Arditi’s scholarship addresses the economic logic that makes these systems tick. A sociologist at the University of Texas at Arlington and the director of the school’s Center for Theory, he has spent his career examining how digital platforms have reorganized labor and power in the music business.

“The benefit to industry is they can cut labor out,” Arditi says. “So to me, these are exploitation machines.” 

Figures sounding the loudest alarms about AI’s threat to culture and labor include Geoffrey Hinton, the “godfather of deep learning,” who helped pave the way for generative systems and large-scale models. Since 2023, Hinton has issued urgent warnings about the technologies he helped build. His about-face gives this particular moment a chilling urgency because — well, if the guy who helped make this stuff is worried sick about it, what does this mean for everyone else? 

John Strohm, a Nashville-based rights attorney, former president of Rounder Records and onetime guitarist (and occasional drummer) for The Lemonheads, describes this era as “a crisis about creation.” He sees it as an escalating battle between corporations and those doing music business on an independent, DIY level.

Companies training AI models, Strohm says, have largely converged on the same justification: that ingesting copyrighted material without permission is perfectly OK because it constitutes fair use. But as many IP lawyers point out, fair use was meant to protect socially valuable activities such as criticism, commentary, news reporting and teaching — not to provide legal cover for hoovering up the world’s creative musical output and weaponizing it against the very people who made it.

Several high-profile lawsuits are beginning to define the legal boundaries around generative AI. One of the most consequential was filed by Universal Music Group, ABKCO and Concord, alleging that Anthropic unlawfully copied and reproduced copyrighted song lyrics while training and operating its Claude models. The suit argues that Anthropic ingested vast quantities of protected music publishing data without permission, then reproduced that material verbatim when prompted — a potential copyright violation at both the training and output stages. 

Class-action suits brought by book authors make the same essential argument: that AI companies systematically ingested copyrighted literature to build their models — without permission, compensation or transparency — and that such mass copying is not automatically shielded by fair-use doctrine. None of these cases involves sound recordings yet, but legal experts say their outcomes will heavily influence whether AI companies may continue to ingest large catalogs of recorded music — and whether musicians will have any meaningful recourse when their creative labor becomes part of a training dataset. Separate lawsuits filed by major labels against AI music services like Suno and Udio do target sound recordings directly, but the book and lyrics cases remain the clearest indicators of how courts are drawing the legal boundaries around text- and publishing-side training. (Disclosure: I was unaware that Anthropic had ingested my book The Narcotic Farm — published by Abrams and republished in 2021 by a university press — until a fellow author alerted me. I have since joined the class-action suit.)

Strohm, along with many musicians and writers, points out that copyright law is more restrictive than the pro-AI argument suggests. To begin with, fair use rests on four factors, one of which is whether the use harms the market for the original work. Strohm says that such harm is already visible within streaming, where services such as Spotify divide a fixed pool of revenue among an ever-growing number of tracks, so that each additional upload diminishes the value of every other stream, already a paltry sum.

As he explains it, when AI-generated music is produced and uploaded at industrial scale, it doesn’t merely compete with human artists aesthetically — it dilutes the royalty pool. Spotify recently announced that it removed more than 75 million “spammy” tracks last year, a crackdown the company tied to the arrival of generative AI tools and widespread abuse of its upload system.

Fear and resentment toward AI practices among working Nashville musicians is palpable, and it’s only intensifying. Jerry Roe, a veteran Nashville drummer, has just stepped out of a session for a major-label project when the Scene reaches him by phone. He traces the frustration to what he sees as the industry’s loving embrace of whatever new cost-saving technology comes along, even if it hurts the very people who built this city’s sound.

“Tech’s whole deal is to just move fast, break things and disrupt shit without thinking about what the consequences are,” Roe says. “They’ve taken over all our means of distribution and homogenized and monopolized all of the cash flow. It’s the worst industry that’s ever existed.” 

Karen Hao is a Hong Kong-based journalist who has spent years chronicling the culture that built modern artificial intelligence. She points out that these systems were not handed down by singular geniuses so much as assembled by a small circle of men who found themselves making civilization-scale decisions without democratic oversight. An MIT graduate who was a senior editor at MIT Technology Review and the author of the 2025 book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, Hao went to school with some of the industry’s current top dogs. She has reported on the internal dynamics of OpenAI, Google and Meta, documenting how the extraordinary power of the technology became concentrated in the hands of largely unremarkable men. In Empire of AI, she narrows the view to a single, potent idea: “Over the years, I’ve found only one metaphor that encapsulates the nature of what these AI power players are: empires. … In the simplest terms, empires amassed extraordinary riches across space and time, through imposing a colonial world order, at great expense to everyone else.”

Victoria Banks is living the reality of someone whose craft is now being automated by what Hao calls an empire. A veteran Nashville songwriter with hits for Sara Evans and cuts on Mickey Guyton’s Grammy-nominated Remember Her Name, Banks, who also teaches songwriting at Belmont University, is feeling the ground shift beneath the profession that has sustained her since the 1990s. Of particular concern to her — as it is to many songwriters — is Suno, a generative-AI music platform that allows users to input a simple text prompt and create a fully produced track with lyrics, vocals and instrumentation. 

“On one hand, it’s extremely inspiring and exciting,” Banks says, explaining that for some artists it can be deeply effective in fleshing out song ideas. However, using the system has potential effects that stretch far beyond providing a prompt and receiving music in return. “It scares me to death putting anything in there. Every time you’re putting a song into Suno, you’re feeding [its ability] to learn how to do what you do.” 

Her anxiety centers on the existential threat this technology poses to the nonperforming songwriter — the writers whose craft lives almost entirely on the page and in the melody.

“That’s my world,” she explains. “What a nonperforming songwriter can do with a pen is comparable to what Picasso could do with a brush. It’s lifetimes of honing words, lifetimes of honing melodies. That craft is exactly what’s feeding these models. They’re learning how to do it instead of us — and that’s what makes me sad.” 

Scot Sherrod, a Nashville music publisher and longtime Music Row executive who has a professional relationship with Banks, widens the frame. “Where will the individualism come from?” Sherrod asks. “I feel like we are creatively cannibalizing ourselves right now.” 

Also a musician, Sherrod entered the business in the mid-1990s and has guided writers through every major disruption of the modern music economy, from home studios to the collapse of physical sales to the rise of streaming. Generative AI, he believes, dwarfs them all.

“Nobody even understands how disruptive this thing is going to be,” he says. “Not even the CEO of Sony or Universal. They don’t know.” 

If Sherrod sketches a future in which music risks collapsing into a feedback loop of its own inputs, Charles Alexander offers at least one potential brake on that slide. A Nashville-based technologist and digital strategist, as well as an adjunct professor teaching about music and AI at Middle Tennessee State University, Alexander also comes to the debate as a songwriter. His company ViNIL is developing verification tools to help authenticate and protect music and voices in an AI-saturated media ecosystem.

“Our perspective is, ‘This technology is here,’” Alexander says. “And how we are choosing to address it is to preemptively … authorize and authenticate the content.” He argues for fingerprinting audio, a method that detects what is already present in the waveform rather than embedding new data into it. The distinction is technical, not legal. But in a near future when AI-generated tracks can be uploaded by the tens of millions, it could spell the difference between attribution and oblivion.

ViNIL’s push to safeguard attribution in an increasingly synthetic ecosystem addresses one part of the problem. But the questions Ribot raises live on a different layer entirely. To him, the value of music has never been its polish, but the accumulation of tiny human imperfections — mistakes, hesitations and idiosyncrasies passed down through generations of players. Strip those away and you don’t just change the sound. You change the lineage. 

Looming over all of this is the fact that the law has not caught up to the reality it’s meant to govern. Courts have yet to rule definitively on whether training generative AI systems on copyrighted music without permission is lawful, leaving creators in limbo as major companies press ahead under aggressive interpretations of fair use. Lawsuits challenging those assumptions are moving through the courts. But they move at a human pace — while the technology runs along at machine speed.

In this gap, consequential decisions about how our culture is made, owned and monetized are being settled by companies that brazenly act first and ask questions later. The result is a moment of uncertainty in which musicians, songwriters and technologists are trying to figure out whether the future of their craft will be something built by humans or stitched together by machines trained on everything those humans ever made.

“Culture serves a function,” says Ribot. “People will miss it when it’s gone.”
