This is just one action in a coming conflict. It will be interesting to see how this shakes out. Does the record industry win, and digital likenesses become outlawed, even taboo? Or do voice, appearance, and the like just become another set of rights that musicians have to negotiate during a record deal?

  • artificial_unintelligence@programming.dev · 30 points · 1 year ago

    This will definitely set some precedent for how AI music is treated. I’m on the side of the monkey with the camera: anything made by these large models is public domain. I’m sure these record companies would be ecstatic if they could license an artist’s voice without having them sing anything new.

    • Catsrules@lemmy.ml · 8 points · 1 year ago

      Hopefully that is how it goes down. That precedent has already been set for images, at least for text-to-image generations.

      Unfortunately the music industry has a lot of money to throw at lawyers, and I could see an argument that this is a little different if you’re directly using someone’s likeness, like a voice.

  • HexDecimal@programming.dev · 16 points · 1 year ago

    Corporate middlemen on AI model generated content: “When we do it, it’s okay! But when you do it, it’s stealing!”

    This genie can’t be put back in the bottle, and what they wished for has become a monkey’s paw for the media monopolies who thought they could replace all their artists with an unpaid robot. They’ll try to update the laws to stop this, but it’s already too late.

  • Aaron@beehaw.org · 14 points · 1 year ago

    I wonder if these battles will shake loose the circuit split on de minimis exceptions to music samples (see https://lawreview.richmond.edu/2022/06/10/a-music-industry-circuit-split-the-de-minimis-exception-in-digital-sampling/).

    Currently, it is absolutely not “cut and dried” whether the use of any given sample should be permitted. Most musicians are erring on the side of “clear everything,” but does an AI-generated “simulacrum” qualify as “sampling”?

    What’s on trial here is basically “what characteristic(s) of an artist’s work do they own?” If you write a song, you can “own” whatever is written down (melody, lyrics, etc.). If you perform a song, you can own the performance (recordings thereof, etc.). Things start to get pretty vague when we start talking about “I own the sound of my voice.”

    I think it’s accepted that it’s legal for an impersonator to make a living doing TikToks pretending to be Tom Cruise. Tom Cruise can’t really sue them saying “he sounds like me.” But is it different if a computer does it? It may very well be.

    It’s going to be a pretty rough few years in copyright litigation. Buckle up.

    • TheTrueLinuxDev@beehaw.org · 1 point · 1 year ago

      What’s more, if they over-litigate, the economy of the country doing the over-litigating will fall behind as the rest of the world overtakes the USA. There are no ifs or buts in this scenario. For instance, poor people in third-world countries would absolutely leverage these technologies to boost their ability to make an income.

  • Fubarberry@lemmy.fmhy.ml · 12 points · 1 year ago

    A lot of the AI stuff is a Pandora’s box situation. The box is already open, there’s no closing it back. AI art, AI music, and AI movies will become increasingly high quality and widespread.

    The biggest thing we still have a chance to influence is whether it’s something individuals have access to, or whether it becomes another field dominated by the same tech giants that already own everything. An example: people are against Stable Diffusion because it’s trained by individuals on internet images, but are okay with a company like Adobe doing the same thing because Adobe snuck a line into its ToS saying it can train AI on anything uploaded to Creative Cloud.

    • RandoCalrandian@kbin.social · 7 points · edited · 1 year ago

      whether it’s something that individuals have access to

      No we don’t. That’s the box being opened.

      Here’s a leaked google internal memo telling them as such: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

      tl;dr: The open-source community has accomplished more in the month since Meta’s AI weights were released than everything we have, and it shows no signs of slowing down. We have no secret sauce and no way to prevent anyone from setting up their own. The open-source community already has near-GPT equivalents running on old laptops, and it’s targeting models that run directly on the phone, making our expensive centralized AI solutions entirely obsolete.

      Edit:

      In addition, these corporations only have AI in the first place by stealing/scraping data from regular people and the open source community. Individuals should not feel obligated to honor any rule or directive that these technologies be owned and operated by only big players.

      • greenskye@beehaw.org · 3 points · 1 year ago

        The only advantage corporations could have had came from having the money to throw at extremely high quality training data. The fact that they cheaped out and just used whatever they could find on the internet (or paid a vendor, who just used AI to generate AI training data) has definitely contributed to the lack of any differentiating advantage.

  • ryan@the.coolest.zone · 5 points · 1 year ago

    I mean, the issue the RIAA is raising does not seem to be about AI training, but piracy:

    The RIAA has asked Discord to shut down a server called “AI Hub,” alleging that its 145,000 or so members share and distribute copyrighted music: Shakira’s “Whenever, Wherever,” for instance, or Mariah Carey’s “Always Be My Baby.” These songs, and several others by the likes of Ludacris, Stevie Wonder, and Ariana Grande, were named in the RIAA’s June 14 subpoena to Discord (pdf).

    The music files were being used as datasets to train AI voice generators, which could then churn out deepfake tracks in the styles of these singers.

    Later in the article:

    It wasn’t clear, from the RIAA’s letters, whether the body was complaining about the databases of original music or about the AI tracks being generated out of them.

    Like, I’m sure they’re spooked by AI-generated tracks and losing control of the industry… but this seems like a pretty clear-cut case of shutting down a Discord server engaged in music piracy.