AI Won’t Replace Creativity. It Just Exposes Who Didn’t Have Any
I've noticed that AI has become one of those topics that splits people in half. Some say it's the future and that if you don't use it, you'll fall behind. Others refuse to touch it. Some subreddits ban it completely. It's strange how divided people get over a tool. You'd think we were debating religion instead of software.
There was a phase where everything suddenly became "AI-first": every company jumped on the label, and every post on LinkedIn warned that you were done for if you didn't use it. Because nothing says "we have a plan" like frantically rebranding your product as AI-powered. Like every hype wave, it's starting to settle. People are finally figuring out what actually fits into their workflow and what doesn't.
For me, AI has shown up in a different place: music.
The conversation around AI in music often misses the point. People worry it will replace creativity, but that's not how it works in practice.
AI tools like this don't think. They don't have intent. They respond to what you give them. If you have a clear idea, AI can help you explore it faster. If you don't, it just generates noise.
Using AI to master audio
I’ve been experimenting with a tool that lets you upload your own tracks and have AI produce them. It doesn’t rewrite the song. It just polishes what’s already there: a tighter mix, a smoother tone, more balance. It feels like having a co-producer who gets what you’re trying to do.
I’ve played piano since I was eight, and writing music has always been a passion of mine. The piece I’m sharing here started on a whim, like all my tunes. I’d just bought a new MIDI keyboard and started playing, following wherever my thoughts took me. What came out was a soft, melancholic piano piece that felt simple but honest.
You can describe what you want it to sound like: cinematic, darker, dreamlike, whatever mood you’re chasing. The fact that a few words can steer the whole atmosphere still amazes me.
Here’s an example of one of my tracks:
Flying - Original track
Flying - AI-produced version
The melody, structure, and feeling are still mine. It just sounds closer to what I could hear in my head but couldn’t produce on my own.
The tool
It's called Suno. If you want to try it yourself, here's my invite link, which includes a few free credits to start with.
While Suno is mostly known for generating full songs from scratch, I’ve used its other feature: taking my own recordings and giving them a professional polish.
You're more than welcome to take a listen to some of my tunes, created over the years from my own recordings of piano, guitar, and vocals: suno.com/@nova.
I don’t see this as replacing musicians. It’s more like a mastering assistant that doesn’t get tired. You still have to write, perform, and feel the music; the AI just helps bring it closer to how you imagined it.
How it works
Suno can take a short prompt or an uploaded track and turn it into a full piece of music: vocals, instruments, everything (Wikipedia). It launched in late 2023 and quickly became one of the biggest text-to-music tools, even showing up inside Microsoft Copilot (The Verge). The company doesn’t share exactly what data the model was trained on, but says it’s built with safeguards against plagiarism and copyright issues (Wikipedia).
It’s still not perfect. Sometimes you’ll hear small artefacts or distortion that remind you it’s still a machine interpreting emotion. But it’s honestly mind-blowing how close it gets, taking a simple piano track and turning it into something cinematic in seconds.
The grey area
No one really knows what's inside these models. Which is convenient for the companies building them.
Suno hasn't disclosed what data they used to train it (Wikipedia), but filings suggest it was built on "pretty much every decent-quality music file available online" (Music Business Worldwide). That kicked off a lawsuit from major record labels, who claim Suno and another company, Udio, trained their systems on copyrighted music without permission (AP News). Suno’s response is that the output is still original and covered under fair use (TechCrunch).
It’s messy, and no one knows how it’ll play out. I’m just keeping an eye on it, because these are the kinds of things that will decide how far tools like this can really go.
The reaction
What's interesting is that no one really knows it's AI unless you tell them. People hear it and think it's just a professionally mastered version. They only find out later when I explain how it was made, and that says a lot about where this tech already is.
Funny how people's opinions about AI music change when they can't actually tell the difference. Suddenly it's less about the tool and more about whether they liked what they heard.
What it means for me
Using AI in music has made me focus more on the idea than the tool. I’ve stopped chasing perfect takes and started caring more about how the song feels. It’s made me realise that creativity doesn’t have to sit in one box: you can blend human and machine input and still make something real.
It’s also made me think the same way about building products. It’s not just about the code or the tech stack. It’s about the feeling you create when someone uses what you’ve made, the same way a song makes someone feel something even if they don’t know why.