
The UK government recently proposed a significant change: allowing artists’ copyrighted material to be used for AI training. The move sparked immediate concern from major figures in the music industry, including Sir Paul McCartney, who pointed out a troubling imbalance – artists won’t earn from the creative work used to train these systems, yet technology companies stand to profit handsomely from the resulting AI capabilities. This debate isn’t merely about compensation; it strikes at the heart of what we value in music.
As these AI systems absorb the collective works of generations of musicians, we face profound questions about creativity, originality, and the future of human expression in music. What’s ultimately at stake is the very nature of our connection to music itself – whether we’ll continue to value the human story behind each note or shift toward consuming perfectly engineered soundscapes divorced from human experience.

From sceptic to witness: A musician’s journey.
I’ve been an amateur musician since my mid-teens, writing my own songs and experiencing firsthand the vulnerability that comes with creation. There was, and still is, a great fear in sharing music with other people. You put your heart and soul into it and then offer it up on a plate for someone to dissect, critique, and potentially kill something inside you.
In my early days, I was drawn to rock, heavy rock, and metal – genres that pride themselves on raw human energy and technical skill. When synthesisers and drum machines began appearing more prominently in metal and hard rock during the late 80s and early 90s, I was scornful. While these technologies had been established in pop and electronic music for years, their integration into traditionally “raw” guitar-driven genres felt like a betrayal.
These digital production tools seemed like shortcuts that diminished the craft within the context of metal. I didn’t think it was “real music” or that it required much skill.
I was wrong, of course. Over time, I came to appreciate how these technologies opened new possibilities rather than replacing musicianship. Artists like Nine Inch Nails’ Trent Reznor showed how technology could be wielded as expressively as any guitar. Electronic music pioneers demonstrated that programming beats required its own form of virtuosity. The technology became another tool in the musical arsenal, not a replacement for human creativity.
But generative AI presents a fundamentally different challenge.
The blurring line between influence and replication.
Today’s GenAI models do a remarkably convincing job of creating original songs – compositions that could easily be mistaken for the work of human bands. Unlike the synthesisers I once dismissed, these systems aren’t merely tools wielded by human creators; they’re generating the creative output themselves.
What makes this particularly complicated is that hardly any music is truly original these days. We’re all influenced by what’s gone before us. Most popular songs are built on the same three or four chords, as Ed Sheeran once demonstrated in a live interview by seamlessly transitioning between multiple hit songs over an identical chord progression.
This raises a provocative question: If human music is already a product of what we’ve heard and absorbed, is GenAI doing anything fundamentally different? Is its creativity inherently less valuable than ours?
The difference, perhaps, lies not in the process of combining influences, but in the lived experience behind those combinations. When a human musician draws on their influences, they’re filtering them through their own emotional landscape, their heartbreaks and triumphs, their cultural context and personal history. AI systems, however sophisticated, lack this dimension of experience.
More fundamentally, human creativity often emerges from constraint, struggle, and intention. The guitarist who develops a unique sound because of physical limitations, the songwriter who processes grief through music, the band that crafts an album as a deliberate statement against cultural norms – these all create meaning that transcends the notes themselves.
In my early twenties, I used to jam with a friend and my brother in the village hall. We were simply playing for enjoyment – a mixture of covers and originals.
Back then we recorded our sessions on cassette tapes. In one particular session, embracing our rock ‘n’ roll aspirations, we decided to bring along a couple of six-packs of lager.
During that session, we felt remarkable. We believed we’d never played so tightly before, never executed our solos with a precision that might make Eddie Van Halen momentarily jealous, and never, to be frank, brought such musical energy to our quiet village.
However, upon sober reflection and playback of the recordings, reality intruded rather harshly. The trap door on the stage of our musical aspirations opened and dropped us into darkness. The performance was dreadful – out of tune, out of time, and consequently, out of hope for any stardom. Yet amidst those ashes emerged small flickers of possibility. There were moments where lowered inhibitions led to different musical perspectives, where technical mistakes created interesting tensions in the sound. These were elements we could analyse, learn from, and deliberately incorporate into our future playing.
This raises an important question about AI-generated music: without this capacity to stumble upon innovation through imperfection, might AI deprive musicians of discovering unexpected beauty through human error? And considering that many great songs are written from a broken heart, could AI emulate this through a broken connection to Siri?
AI may imitate the patterns of human creativity, but it creates without purpose – without resistance to overcome or a need to express something real. What looks like creativity is actually sophisticated pattern recognition without the driving force of authentic intention.
The mathematics behind the melody.
This reminds me of a Star Trek Voyager episode where the holographic AI doctor sang before an alien race who became utterly besotted with his performances. They loved the operatic songs he performed and were inspired to write their own operas for him to sing.
When he tried to perform their compositions, the results sounded awful – to human ears, anyway. The twist was that this alien race loved the mathematics behind the music.
They wrote their operas based on numbers, based on the matrix behind the music rather than emotional expression.
This science fiction scenario suddenly feels less far-fetched. With AI-generated music, we may indeed see AI “artists” climbing the charts. Will we become like those aliens, gradually shifting our appreciation toward the perfect mathematical structures of music rather than the raw, imperfect expression of human emotion?
A new relationship with music.
The rapid advancement of AI music generation isn’t just changing how music is made – it’s changing our relationship with music itself. When any style, any sound, any emotional tone can be generated with a text prompt, what happens to the rarity and specialness of musical innovation?
This shift has already begun with algorithmic curation. Most listeners now experience music primarily through algorithmic recommendations and playlists, with streaming platforms nudging us toward certain sounds based on our listening patterns. The algorithm becomes both gatekeeper and tastemaker, reshaping our musical diet in subtle ways. AI-generated music represents the next frontier of this algorithmic relationship – not just choosing what we hear, but creating it specifically for our consumption habits.
Music risks becoming less a shared cultural experience and more a personalised product, optimised for engagement rather than expression.
There’s something precious about knowing that Black Sabbath’s heavy guitar tones emerged from Tony Iommi’s industrial accident and the makeshift thimbles he created to continue playing. Or understanding that Metallica’s aggressive sound developed through years of personal hardship and unwavering artistic vision. These human stories give the music layers of meaning beyond the notes themselves.
Will AI-generated music, devoid of these human backstories, still move us in the same way? Or will music become more like a consumer product, customised to our exact specifications but missing the beautiful accidents and limitations that often define human creativity?
Finding harmony between human and machine.
Perhaps the future isn’t as dystopian as it sometimes appears. Throughout history, new technologies have initially been met with resistance before being incorporated into the creative process. Photography didn’t kill painting – it pushed painters toward expressionism and abstraction. Drum machines didn’t replace drummers – they became another rhythmic colour in many bands’ palettes.
AI music tools may similarly find their place alongside human creativity rather than replacing it. They might become collaborators that help musicians overcome creative blocks or explore new territory. They could democratise music creation, allowing people with ideas but limited technical skills to express themselves – though expression alone is not the same as lived artistic voice.
What seems certain is that we’re entering a new chapter in music’s ongoing evolution. As these AI systems continue to develop, trained on the collective musical heritage of humanity, we’ll need to reconsider our definitions of creativity, originality, and artistic value.
I’m grateful to have experienced the raw power of bands like Black Sabbath, AC/DC, and Metallica before this transformation.
Their music stands as a testament to what happens when human hands, hearts, and histories combine to create something greater than the sum of their influences.
Whether future generations will value the human element in music as we do remains to be seen. But as long as people continue to have uniquely human experiences – love, loss, joy, and pain – there will be a place for music that speaks to those experiences in an authentically human voice.
The challenge ahead is ensuring that, amidst the perfect algorithms and flawless AI compositions, we preserve space for the beautiful imperfections and emotional authenticity that have always made music not just something we hear, but something we feel. Because in the end, what is music without the story behind it?
When all the technological debates settle, we may find ourselves returning to music’s most fundamental purpose. Perhaps the final measure of music’s value won’t be how good it sounds, but how much it reminds us that we’re still alive.

Nathan Green – Founder
Dedicated to inspiring passion and purpose through innovative software solutions, empowering businesses and individuals to overcome challenges and reach their fullest potential.
Connect with Nathan on LinkedIn