
The Music Business Buddy
A podcast that aims to educate and inspire music creators in their quest to achieve their goals by gaining a greater understanding of the business of music. A new episode is released each Wednesday, offering clarity and insight into a range of subjects across the music industry. The series includes soundbites and interviews with guests from all over the world, together with commentary on a range of topics. The podcast is hosted by award-winning music industry professional Jonny Amos.
Jonny Amos is the author of The Music Business for Music Creators (Routledge/Focal Press, 2024). He is also a music producer with credits on a range of major and independent labels, a songwriter with chart success in Europe and Asia, a senior lecturer at BIMM University UK, a music industry consultant and an artist manager.
www.jonnyamos.com
The Music Business Buddy
Episode 60: An Assessment of the Current State of AI
Artificial intelligence has revolutionized how we discover, create, and consume music—but where exactly does it stand in 2025? After interviewing three leading AI music technology founders, I'm pulling back the curtain on the current state of music AI and its ethical evolution.
The landscape has shifted dramatically. Today's most innovative companies are building AI tools with fundamentally different values: enhancing human creativity rather than replacing it, compensating artists fairly, and respecting intellectual property rights.
From DAACI's musician-trained tools that function as creative co-pilots to VoiceSwap's groundbreaking marketplace where vocalists monetize their AI voice models on their own terms, we're witnessing the emergence of a more ethical ecosystem. RoEx Audio demonstrates how AI can handle the tedious 90% of mixing work while preserving the creative 10% that makes music uniquely human. These developments reveal AI's most valuable role in music creation: not as a replacement for human artistry, but as a time-saving assistant that handles repetitive technical tasks.
The distinction between AI voice models and real human performances is becoming a critical consideration for artists, with forward-thinking companies beginning to establish clearer frameworks for rights and compensation. By processing information through databases containing millions of audio fingerprints, these platforms ensure no copyrighted material is used without permission—addressing one of the industry's most significant concerns.
Whether you're excited about these tools or approaching them with caution, understanding their true capabilities and limitations is essential for navigating today's music business landscape. What ethical considerations matter most to you as we continue this technological journey? How might these tools transform your creative process without compromising your artistic integrity?
Websites
www.jonnyamos.com
https://themusicbusinessbuddy.buzzsprout.com
Instagram
https://www.instagram.com/themusicbusinessbuddypodcast/
https://www.instagram.com/jonny_amos/
Email
jonnyamos@me.com
The Music Business Buddy. Hello everybody and a very, very warm welcome to you. You're listening to the Music Business Buddy with me, Jonny Amos, podcasting out of Birmingham in England. I'm the author of the book The Music Business for Music Creators, available in hardback, paperback and ebook format. I'm a music creator, a writer and producer with a variety of credits. I'm a consultant, an artist manager and a senior lecturer in both music business and music creation. Wherever you are and whatever you do, consider yourself welcome to this podcast and to the community around it. I'm here to try and educate and inspire music creators from all over the world in their quest to achieve their goals by gaining a greater understanding of the business of music.
Speaker 1:Okay, so in this week's episode, I'm going to take a look at the current state of AI in music. It's very difficult to summarise this in any conclusive way, because it's an ongoing thing; it changes all the time. But over the last 10 episodes, three of those episodes I've spent interviewing creative-tech and AI startup specialists that build tools for music creators. I'm going to look at some of the clips of the things that they said, build some inferences together with some other information, and give you an overview of exactly where AI is at in 2025. Are you ready? Here we go. Okay, so we have to start, of course, by thinking about music discovery and recommendation, right?
Speaker 1:Personalised recommendations from streaming services that use AI algorithms to analyse listener habits and make recommended music choices tailored to individual preferences. You know, Spotify probably do that better than anybody else, probably followed by YouTube, but in any case, we forget about it as being AI. Right, but that is very much AI. There's also a very, very in-depth song analysis that AI can bring. Things that can analyse musical characteristics like tempo, key, genre, speechiness and danceability provide very, very valuable insights for both artists and listeners, and also music users. Music discovery kind of follows on from that, right? AI can really, really help discover new music by identifying trends, analysing popularity, similar artists, things like that.
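To make the idea of automated song analysis a bit more concrete, here is a minimal sketch of extracting two crude audio descriptors from raw samples. The function name and the features chosen are purely illustrative; real services compute far richer features (tempo, key, danceability, speechiness) with much more sophisticated signal processing.

```python
def analyze_features(samples):
    """Extract two crude descriptors from raw audio samples (floats in [-1, 1]).

    Toy stand-ins for the kinds of features streaming services extract:
    energy acts as a loudness proxy, zero-crossing rate as a brightness proxy.
    """
    n = len(samples)
    energy = sum(s * s for s in samples) / n
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return {"energy": energy, "zero_crossing_rate": crossings / n}
```

A recommender could then compare tracks by the distance between such feature vectors, which is, in very broad strokes, how similarity-based discovery works.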
Speaker 1:AI is also used on a scouting level in order to find acts that are beginning to gain traction online. There's a lot of algorithms out there that are subscriber-based from record companies, from A&Rs, from managers, etc. that they use to track talent. Very often, if you look at some of the startups that create that kind of technology, they're quickly swallowed up by major conglomerates, so that only that company can access that information. It makes perfect sense; this is a business, after all. But the point is this: AI is used on a reactive and predictive level in order to try and make informed judgments as to when or if an artist will break, because this then informs investors, record companies etc. as to whether they want to invest in an artist.
Speaker 1:AI is, of course, used on a creative level when it comes to composition, arrangement, production and sound replication. AI can recreate the sounds of instruments and voices, allowing for the creation of unique soundscapes or the replication of specific sounds or voices or instruments. AI can also generate backing tracks that adapt in real time to a musician's performance, creating dynamic live experiences, etc. It also helps when it comes to inspiration and idea generation. Now, this is not without difficulties, of course, right? There are huge issues around copyright and ownership. AI-generated music raises questions about copyright, ownership and the originality of AI-created works, especially when AI is trained on existing music.
Speaker 1:I've mentioned this before, guys, but I'll mention it again. Just imagine analysing 20 Coldplay songs and using that as input to create the output, to create your own Coldplay-style song. It could then be argued, well, Coldplay didn't write it. Well, if it hadn't been for those other 20 songs, that song wouldn't exist. And therein lies the problem that has dominated the agenda in the music industry over the last two years. Now, combine that with the kind of, if you will, public worry about AI replacing humans, and together you've got a perfect storm of ultimate fear of what is coming next.
Speaker 1:Now, it is that combination that can sometimes create an ill-informed judgment as to what is actually happening. So I thought, well, I need to become a little bit more informed about this myself. Of course I do. Everybody does, because all this stuff is constantly changing around us. So I interviewed three leading experts in the AI world that are building tools for music creators, and I wanted to look at how ethically they were doing that. What were their systems being trained upon? So let me just go over a few key highlights from some of the interviews that I've done over the last few weeks to give you a little bit more of an insight, so that you can form your own view. Okay, so when I interviewed Anne-Marie Gaylard from DAACI, I asked her what was the overarching purpose and goal of what DAACI is trying to build, and this is what she said.
Speaker 2:Just basically making tools that enhance and amplify the creative process but never replace artists.
Speaker 1:I really, really like Anne-Marie's answer there. It's the kind of answer we all want to hear as music creators and those that care about the process of making music. But in order to dig into that a little bit further, we have to understand what that actually looks like in terms of what they're building and what it caters for. So I asked her a little bit more about the process of making music, and this is what she said.
Speaker 2:I think we always have to think about, you know, what makes music, how do we make music? And I think we as humans, we write from a very human place, from our soul, from our life experiences, you know, when we've been through happy times, when we've been sad. I don't believe any tech can ever or should ever replace that. But I think AI, done in the right way, in the ethical way, can just help us get to places a bit faster.
Speaker 1:OK, getting places a little faster. That's one of the common themes here in some of the interviews that I've done, and that's really, really good, isn't it? If something can save us time, then it's got to be worthwhile. In fact, that's a common pattern amongst many AI tools, isn't it? But let's just pause here for a minute and think about a timeline. Right, we can actually put a fairly close timeline together here on the things that have happened so far in AI and music in this decade. In fact, we can pinpoint it too. Let's go to December 2023.
Speaker 1:That is when Suno became accessible on a mainstream level, and I think a lot of people kind of made up their mind about AI music based upon Suno, because it was incorporated into Copilot with Microsoft. So it became something that was at the fingertips of anybody, and someone could build a track very, very quickly. But if you really analyse it, you can kind of go, oh, actually I recognise that little bit there, because it's taken from this song over here, and there was no permission behind it. It's very, very different, in fact light years away, from what DAACI have built, because theirs is not trained on data of copyrighted material. It's trained by musicians, so it can be reactive in the same way as another musician around you can.
Speaker 2:And sometimes you just need a bit of inspiration, and I think these tools can kind of just help generate, get the brain going a bit and, you know, generate some new ideas. And certainly that's what we're doing with our tools at DAACI, and there are other tools out there that are doing similar things. So I think, yeah, done in the right way, these are powerful tools that are there just to kind of co-pilot with us.
Speaker 1:Okay, I like that, right? Something that can co-pilot with us. Video makers have tools that can help them to edit something faster or to change the shape, the background, the texture of something. So in many ways it's kind of not really too dissimilar from that. Making music, of course, can feel, and often is, a very, very personal process, and so therefore to inject something robotic into it can feel immoral. It reminds me of that old Jeff Goldblum quote from Jurassic Park, the movie, all those years ago, where it said, you know, just because we could doesn't mean that we should.
Speaker 1:It kind of made me think about that when I was interviewing some of these people, because you kind of go, yeah, we can build something that can do this and this and this over here, but should we? Well, that's another moral question, isn't it? And that's a question for you and you only to answer. By the way, the purpose of this right now is to just shed a little bit more light on it, and I know that people are scared about this, and so does Declan McGlynn, chief creative officer of VoiceSwap. This is what he said about this.
Speaker 3:Right now, there's a lot of fear and a lot of trepidation about AI, much of which is justified.
Speaker 1:It is justified, of course. Yeah, Declan and I both agreed on that, as I'm sure you would too. I think, though, one of the things that makes us petrified is some of the things that happened, you know, a year, two years ago. I know, in the grand old scheme of things, that's not that long ago, but in the AI world that was like a lifetime ago, because so much has changed. One of the key things is the clampdown on big companies training their models on old copyrighted material. I boldly predict that by the end of this decade that will start to clean itself up, and actually what we're about to look at next is a good example of that, which is the idea of remunerating vocalists, and also musicians, moving forward, on their AI model.
Speaker 1:Now, bear with me on this one, guys. One of the things that has come up recently is this: when an artist, a performing artist, signs the rights of their recorded music to a record company, to a distributor, to whoever, moving forward, should they be signing away the use of their AI voice model to go with it? I don't have the answer to that, and I'm not sure anyone does right now, but it's certainly a factor to consider. Now, traditionally, we know that intellectual property in the music industry is very often something that is collected on a retrospective level. It's been like that for decades. It's fine. However, in the AI world, it would be good to be able to set a barrier in which, if somebody wants to use somebody else's AI voice model, they have to pay for it there and then. I was hoping that we would see that at some point, and thankfully it's something that VoiceSwap actually do. Let me show you a little bit more from Declan on this.
Speaker 3:Well, this is the point of inference, essentially, and we kind of thought that was the right thing to do. Anyway, like you said, we want to make sure that, even if the song's never released, even if it's just for demo purposes or soundwriting purposes or ideation purposes, the artist is still getting compensated because, at the end of the day, you're using their voice and that's their IP.
Speaker 1:It is their IP. It's a very, very good point, by the way. How refreshing is it that somebody so senior in such an impressive tech company acknowledges that? I mean, Declan amongst many people. This is one of the common denominators amongst the guests that I interviewed on this subject: how much they care about this. It really, really comes through. It's a very, very different culture than, let's say, for example, a large company like Suno doing things very, very differently. They all kind of fall under the bracket of AI, but the reality is they are light years apart in their values. Okay, so I mentioned the kind of futuristic angle of AI voice model versus real voice. Let's go into that a little bit deeper, because Declan has some fascinating insights on that. But everything is underpinned by law, isn't it, right? And we're in a state now where the standards around this stuff are a little bit woolly. Let's see what Declan says about that.
Speaker 3:Standards are still being set in the age of AI and rights and the creative industries and creator tools. We wanted to set a standard that says, yes, there is a value in converting the voice.
Speaker 1:Okay, now let's just think about that moment where the voice is converted. Let's just assume that there are some technical smart alecks out there that have got ahead of the game, right, that go, okay, let me use this over here and sample this and then pipe that into there, and then I'll pass that off as an AI voice model. What would a company like VoiceSwap do with that? This is what Declan said.
Speaker 3:All the acapellas and all the training data get uploaded for model training. It's screened by BMAT to make sure there's no copyrighted content inside of it, because they have a database of 180 million fingerprints of audio.
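As a rough illustration of how screening against a fingerprint database might work, here is a minimal sketch. The hashing scheme and threshold are illustrative assumptions, not how BMAT actually fingerprints audio; real systems use robust spectral fingerprints that survive pitch shifts, encoding and noise.

```python
import hashlib

def fingerprint(samples, window=32):
    """Hash fixed-size windows of quantised samples into a set of fingerprints.

    Toy scheme: real audio fingerprints are derived from spectral landmarks,
    not raw byte windows, so they tolerate re-encoding and light edits.
    """
    fps = set()
    for i in range(0, len(samples) - window, window):
        chunk = bytes(int(s) & 0xFF for s in samples[i:i + window])
        fps.add(hashlib.sha1(chunk).hexdigest()[:16])
    return fps

def screen_upload(upload_fps, copyrighted_index, threshold=0.2):
    """Reject an upload if too many of its fingerprints match known copyrighted audio.

    Returns (accepted, matched_track_id_or_None).
    """
    for track_id, known_fps in copyrighted_index.items():
        overlap = len(upload_fps & known_fps) / max(len(upload_fps), 1)
        if overlap >= threshold:
            return False, track_id
    return True, None
```

The design point is the one Declan describes: the check happens at upload time, before any model is trained, so copyrighted material never enters the training set.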
Speaker 1:Okay, pretty impressive and also very reassuring, which is good. But what role does this have in the marketplace? Because the lines get blurry between the real person and the AI voice model of that person. By the way, guys, that is a terminology that I think we're going to have to start getting used to going forward, however odd it may sound: the idea of a voice model, an AI voice model, versus the actual real person, right? Because they should be two different price points, and this is something that VoiceSwap had already thought about. Have a listen to this.
Speaker 3:It means that we can go ahead and build a marketplace where anyone can come and monetize their voice and they can set their own terms.
Speaker 3:They can set their own price point, they can set their own licensing point if they want to have licensing at all, and eventually you can imagine having a section on VoiceFab where it's the marketplace. You go to the marketplace, you type in, you know, jazz vocalist, female, Spanish, whatever, and you get, you know, a hundred responses of those voices from around the world. You can work with them passively. You can either use their AI voice model, or you can upload your track without the stems, just upload the full thing. We'll use stem separation technology to split it, we'll swap out the acapella with the model you've chosen, and then you'll hear your idea in their voice before you commit to working with them. And then you can either pay to use their AI voice, or you can contact them and say, okay, cool, this sounds good, let's get that recorded for real in the studio.
Speaker 1:Okay, now that's fascinating, isn't it? I hope you're still with me on this. There's so much to think about here, I know, but these things are changing so quickly, so it's really good we stay on top of this, right? Let's just roll back a little bit. A few weeks ago I did an episode about the remote marketplace and where that's at. What Declan just referred to there is really the next incarnation of it, and it might not take very long to get there, considering that Suno, to many people, now feels kind of like old news in a way. And that was only, what, less than two years ago? So it's 2025 now. By 2027 this will have evolved even further, and that was a little snapshot indication as to where it might be headed.
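The workflow Declan describes, separate the stems, swap the vocal for a chosen AI voice model, and compensate the artist at the point of inference, can be sketched in a few lines. Everything here is hypothetical: the class and function names are mine, the "stems" are just labels, and real stem separation and voice conversion are heavyweight ML models.

```python
from dataclasses import dataclass

@dataclass
class VoiceModel:
    artist: str
    price_per_render: float  # terms set by the artist, per the marketplace idea above

def separate_stems(track):
    """Stand-in for real stem-separation tech: split a mix into vocal + instrumental."""
    return {"vocal": track["vocal"], "instrumental": track["instrumental"]}

def render_with_voice_model(track, model, ledger):
    """Swap the vocal for the AI voice model and credit the artist at the point
    of inference, even if the result is only ever used as a demo."""
    stems = separate_stems(track)
    ledger[model.artist] = ledger.get(model.artist, 0.0) + model.price_per_render
    stems["vocal"] = f"{stems['vocal']} (rendered as {model.artist}'s AI voice)"
    return stems
```

The key design choice, mirroring what VoiceSwap describe, is that the ledger entry happens inside the render call itself, so there is no unpaid path to using the artist's voice.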
Speaker 1:Okay, let's move over to the realm of music production. If you think about music production, there are so many different processes to it, aren't there? Sound design, sound selection, arrangement, all sorts of stuff. Let's just hone in on the mixing side of it, because I interviewed Dave Ronan, CEO of RoEx Audio, about a particular model that he and his team have built. He comes from a very, very impressive AI background, and he's built a kind of reactive AI tool, something which can mix your song for you when you send it the stems. This is what he said about it in a nutshell.
Speaker 4:For five quid, basically, we can take the problems that we found, we can fix them to the best of our ability, and then we can also remix and master your track as well. So if you have stuff from, like, I don't know... For me, it was perfect. I had stuff from 20 years ago that I made when I didn't really have a clue what I was doing, and I was able to essentially remix and remaster them, and they sound a lot better now. So yeah, that's essentially Mix Check Studio.
Speaker 1:Okay, so that's Mix Check Studio, right. There's also a free version of it, by the way. I'm not advertising this product; they don't pay me to do this. I'm trying to give you an honest assessment as to where things are at. The free version will tell you what's wrong with your mix, and then the pro version will fix it for you.
Speaker 1:Now, the worry with that, perhaps, is, well, you know, are we taking work away from mix engineers? And that's something that I put to Dave, and actually it's very interesting, because he talked about the idea of leaving that final 10% of a mix as the creative part, the bit where you really put your stamp on it, and that got me thinking. I've mixed a lot of records, right, and actually the first part, the time-consuming part of mixing a record, is the really inane, rather boring stuff: levelling things out, getting gain staging right, all that kind of stuff. It's the stuff that no one ever hears, or no one really ever wants to hear about, because it's often largely boring to anyone that's not interested in it. But the point is this: the AI does all that stuff, not the cool individual stuff. So I asked him about that, and this is what he said.
Speaker 4:And you have a mix there ready. So you just basically need to tweak stuff if you want, or else you add your creative stuff on top.
Speaker 1:Okay, that was a penny-drop moment for me. This is a twofold-purpose tool, right? This is for those people that say, do you know what, I can't afford a mix engineer and I can't get my songs to that next level; I'm going to use this tool to help me get there. Boom, that's one type of person. But here's the real head-scratcher, everybody: there's another market here, and that is people that actually do mix. This is not there to replace them. It's for them to save time in their work, so they don't have to do that mix prep. The final 10% of the mix is often the bit where you really put that creative, unique stamp on it, and the AI doesn't do that, and they acknowledge that it doesn't do that.
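To picture the "tedious 90%" the AI handles, here is a toy sketch of automatic gain staging: scale every stem to a common loudness target, and leave the creative moves (EQ, effects, fader rides) to the human. This is my simplification, not RoEx's actual algorithm, which analyses and fixes far more than levels.

```python
import math

def rms(samples):
    """Root-mean-square level of a list of float samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def auto_gain_stage(stems, target_rms=0.1):
    """The tedious balancing pass: apply a gain to each stem so they all sit at
    the same RMS level. The creative 10% stays with the engineer."""
    balanced = {}
    for name, samples in stems.items():
        level = rms(samples)
        gain = target_rms / level if level > 0 else 0.0
        balanced[name] = [s * gain for s in samples]
    return balanced
```

After a pass like this, a quiet vocal and a loud kick land at the same starting level, which is exactly the prep work Dave describes the tool doing before you "add your creative stuff on top".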
Speaker 1:Okay, another area of music production, following on from that, is integration into DAWs. If anybody's not familiar with the term DAW (digital audio workstation), it's the software that's kind of like the virtual console and editing screen where you record music and where you mix it. Now, what RoEx have built works independently from that; it works outside it. And what Dave actually recognises is that once it's built inside of the DAW, it will give the company a huge advantage, because it will make for a much more streamlined workflow. Or, in other words, in plain English: if you can do this inside of your DAW and you don't have to go outside of it, you'll be more inclined to use it. So this is what they have done about that.
Speaker 4:We've done that now for Bitwig, PreSonus Studio One and Cubase as well, and we're expanding it to as many DAWs as possible, really. And the beauty of that is, at that point you can just balance it out if you're happy with it, or you can tweak it, so it's very assistive at that point. You know what I mean? It's getting you to the 90% point, and then you can do what you want on top of that, basically.
Speaker 1:So one of the things that we can see so far from some of the tools that are being built for music creators is that they are time-saving tools, and if we just leap outside of music for a minute, that's one of the common patterns amongst AI tools, right? Doing stuff that speeds up a process. Guys, I'll give you an example of that. So for this podcast, when you see the show notes and the social media blogs and the posts and all the words that go into that, that's not me writing it. That's AI. It's a thing called Cohost AI. Again, this is not an advert, but I'm using AI to summarise all the things that have been said in the podcast, and it can then take the main points and put them into copy that can be used. That's AI. It doesn't feel immoral to me to want to use that tool, quite simply because it saves me about two hours each week. So this is comparable to that, and that's one of the patterns that we're seeing here. Another example of this in the music production world would be something like iZotope's RX tools. Now, these are tools that have been used for many, many years in post-production suites. So, for example, when you watch a television programme or a film and you listen to the dialogue and you go, oh, they've piped this into the background or they've taken that out of the background. In theory we shouldn't be noticing those things, but the reason they sound so good is often because of the editing work that goes into dialogue, into voice. And it's the same with music. Why am I telling you that? Well, we can be very restorative now with certain things. You could record something on your phone whilst you're outside in the wind, and you could throw it into an RX tool or a piece of software and it'll fix it.
It'll take out the background noise, and it'll do that by compressing this or doing that, whatever, all those little parameters that make up all the sort of geekiness, if you like, of those things. They don't have to be done painstakingly for three hours by a human, however noble that is. I'm not entirely sure how much fun that is. Maybe it is for some people, but I'm a producer; I can tell you it's no fun, right? If I've got a tool that can do that in under a minute, and it doesn't really cost very much, I'm going to do it, right? That's another example for you. Okay, so there's a little overview, guys. I wanted to share that with you.
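The restoration idea above can be sketched with the simplest possible version: estimate the noise floor from a stretch assumed to contain only background noise, then mute everything below it. This is a toy noise gate; tools like RX work spectrally, per frequency band, and far more gently, but the principle of "measure the noise, then remove what matches it" is the same.

```python
def estimate_noise_floor(samples, head=100):
    """Assume the first `head` samples are background noise only; measure its peak."""
    return max(abs(s) for s in samples[:head])

def noise_gate(samples, floor, margin=1.5):
    """Mute anything below the measured noise floor times a safety margin."""
    threshold = floor * margin
    return [s if abs(s) > threshold else 0.0 for s in samples]
```

The `head` and `margin` values here are arbitrary illustrative defaults; in a real tool the noise profile is learned from a user-selected region, not a fixed sample count.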
Speaker 1:Not at any point am I ever trying to force any kind of soft narrative on the way that you think. My goodness me, we live in a free world; you can make your own mind up. But what I would say is this: take an informed viewpoint, because some of the things that grab the headlines on the subject of AI and music creator tools are things from two years ago, from a year ago, and if we're not careful, we can make up our mind on a subject and just go, oh, that's my understanding of that, and actually we maybe didn't get it right. What's actually being done now amongst some of the companies out there that are building for music is very, very ethical, very fair and very remunerative, which means it's actually creating a new aspect of the ecosystem that is music creation. Of the people I've spoken to on this podcast so far on this subject, nobody, by their own admission or by their actions, is doing anything to replace humans, but merely to assist humans, and that's something that has come across to me.
Speaker 1:Now, of course, I'm painting a positive picture here because that's the kind of person I am. I am well aware of the darker, more sinister aspects of AI, but I do believe that they are being softly eradicated by new standards, and what they're being replaced with is a far more ethical approach to all of this stuff. Now, I've focused largely there on music creator tools. I hope with all my heart that that is the thing that's most useful to you, but of course, as I mentioned earlier in today's episode, there are other aspects of where AI is used in music on a business level, which is mirrored by how it's used in other industries. Anyway, there's a little snapshot. I'll maybe do another one of these in a year, or maybe less than that, on what happens next, but that's where things perhaps sit right now. Watch this space. It's going to change. It always does, especially in this world. Okay, until next time, everybody. Have a great day, and may the force be with you.