Music has changed a great deal through the ages, from the classical music of icons like Mozart and Beethoven to today’s modern electronic music that makes the ground vibrate at new-age parties and festivals alike.

If there’s one thing we can rely on, it’s the fact that everything is always changing. We may not know how, why or when, but just as quickly as new ideas come to life and styles gain traction, trends die out and people move on.

The newest addition to the modern world of music, however, is AI-generated music. Of course, artificial intelligence has already stormed into just about every other sphere of life – from mechanical automation and self-driving cars to content generation and the writing of human-like articles, poetry and just about anything else you could possibly think of.

The introduction of AI into the world of music, however, brings with it several concerns and difficulties that are common in anything that involves creativity. Indeed, writers and artists, among others, have also dealt with issues concerning creative license and most recently, discussions over copyright.

 

A Quick Overview of How AI Music Is Generated

 

So, for most people, the idea of having anything original created by a computer (well, artificial intelligence) seems almost incomprehensible, but when it comes to music, it’s just that much more difficult to understand.

Basically, AI-powered systems are provided with vast datasets of music from which to learn. The systems study the music and all the different patterns it contains, and based on what they’re provided with, they’re then able to generate new, more complex patterns of their own.

Although the music they produce is generated as a result of learning from existing music, the final product is still supposed to be unique. So, is it?
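To make the pattern-learning idea concrete, here is a deliberately simplified sketch. Real systems use large neural networks, but the core principle can be illustrated with a first-order Markov chain: count which notes follow which in existing melodies, then sample a “new” sequence from those learned transitions. The note sequences below are made up for illustration and don’t represent any particular system or song.

```python
import random

# Toy "training data": short melodies written as note names.
training_melodies = [
    ["C", "D", "E", "C", "E", "D", "C"],
    ["E", "F", "G", "E", "G", "F", "E"],
    ["C", "E", "G", "E", "C", "D", "E"],
]

# Learn transitions: for each note, record every note observed to follow it.
transitions = {}
for melody in training_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(nxt)

def generate(start="C", length=8, seed=None):
    """Generate a new note sequence by sampling the learned transitions."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: this note was never followed by anything
            break
        melody.append(rng.choice(options))
    return melody

print(generate(seed=42))
```

The output never copies a training melody verbatim, yet every single transition in it was lifted directly from the training data – which is, in miniature, exactly the tension the rest of this article is about.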

 

Why Don’t Musicians Like AI-Generated Music?

 

Ultimately, the idea is to create music that is completely unique, but there are issues with this – first and foremost, is this even possible?

Recently, there have been cases where the very idea of “unique” AI-generated music has been called into question. Cases in which the AI hasn’t gone far enough to transform what it’s learnt from existing music into a brand-new track – rather, it’s actually done a bit of replication, whether that’s copying a specific melody or even more than that.

But beyond extreme cases like that, in which specific parts of songs have been replicated and are actually recognisable, a great deal of the musical community is upset by something far simpler. That is, the very fact that their music is being used to train AI systems to create something that involves no human creativity and, essentially, takes opportunities away from them.

Some AI companies doing this kind of work assert that they only use music that is properly licensed, but other companies aren’t even doing that. They’re simply using whatever they can get their hands on, claiming that anything that is publicly accessible falls under “fair use”.

So, ultimately, musicians have brought forward two aspects to this debate. First, they’re upset that their music is being used to train AI models that will ultimately create new music that will make money and diminish their own market share in the industry. And second, they have a problem with certain instances in which AI-generated music actually sounds like existing songs.

 

 

What Are the Legal Implications? 

 

As is the case with most applications of AI, everybody is playing catch-up, so to speak, in terms of figuring out how to deal with the new issues that are cropping up as a result of the technology. Ever since AI first started to become a real part of the modern world, lawmakers have been scrambling to keep up with it, and that’s exactly the case in the music industry.

The hard truth for musicians, as it stands, is that they’re not really protected from the actions of the AI music industry – not until licensing policies are changed or copyright laws are amended. Other than basic goodwill, there’s nothing stopping companies and individuals from creating AI-generated music using the existing work of hardworking, human musicians. Current licensing agreements don’t yet include terms regarding AI data sets.

The other thing to consider, as crazy as it may seem, is that there actually isn’t a way to detect whether specific music has been used to generate AI music. Unless they were to gain access to the dataset that was used to train the AI system, musicians aren’t able to make definitive accusations regarding the use of their music.

In fact, even in cases in which parts of these songs are distinctively recognisable – whether it’s a melody or a lyric – musicians still won’t necessarily be able to conclusively establish that their music was used to train the AI systems.

The companies running the AI systems would, most likely, claim that the technology simply did what it was supposed to do – it worked with musical patterns and generated a sequence that, on average, would be most likely to occur and would be most pleasing to listeners. Essentially, the argument is that the AI just happened to have the same idea, so to speak, as the musician.

And whether or not this is true is almost irrelevant at this point, because without a proper detection system, there’s no way to tell either way.

 

A Hard Truth

 

The bad news for musicians is that at this point, there is no real way to get away from this problem. With most music available on the internet these days, there’s not much stopping individuals and companies wanting to create AI-generated music from using whatever they find online, creating “new” tracks and ultimately making money from it.

That won’t change until copyright laws are amended to account for AI, or until some kind of advanced detection tool is created that can accurately identify AI-generated music. And, while technology is constantly changing and improving, this seems like a pretty difficult thing to do.

For instance, AI systems have been generating written content for quite some time, and while there are oodles of different “AI detectors” available that can supposedly differentiate between human and AI-generated text, the fact of the matter is that they’re just not accurate.

These detectors use specific parameters, as they should, in order to flag content that’s been manufactured by AI, but all human-generated content is different and AI-generated content is constantly improving. So, there really is no way to accurately and definitively say whether any given text has been written by a human or by AI.

And the same goes for music (at least for now).

The hard truth is that it’s simply not possible to properly detect AI-generated music or identify whether or not another artist’s songs have been used as part of the system’s dataset.

This means that until this changes, musicians are likely to continue to be unhappy about the potential use of their music in AI data sets. In fact, it’s possible that this will result in some big changes in the way in which the music industry operates.

Some experts predict that AI music will completely take over the industry while others believe it’s possible that musicians will simply stop posting their music on the internet. Perhaps the live music industry and physical record sales will see a resurgence, and the effect of AI on the music industry will be somewhat of a step back in time.

For now, the future of the music industry is unwritten, so we’ll just have to wait and see how it all plays out.





© 2024 The News Times UK. Designed and Owned by The News Times UK.