The American Matthew Butterick has started a legal crusade against generative artificial intelligence (AI). In 2022, he filed the first lawsuit in the history of this field against Microsoft, one of the companies that develop these types of tools (GitHub Copilot). Today, he’s coordinating four class action lawsuits that bring together complaints filed by programmers, artists and writers.

If successful, he could force the companies responsible for applications such as ChatGPT or Midjourney to compensate thousands of creators. They may even have to retire their algorithms and retrain them with databases that don’t infringe on intellectual property rights.

  • realharo@lemm.ee

    I don’t see the US restricting AI development. No matter what is morally right or wrong, this is strategically important, and they won’t kneecap themselves in the global competition.

      • WebTheWitted@beehaw.org

        Great power competition / military-industrial complex. AI is a pretty vague term, but practically it could be used to describe drone swarming technology, cyber warfare, etc.

        • anachronist@midwest.social

          LLM-based chatbots and image generators are the types of “AI” that rely on stealing people’s intellectual property. I’m struggling to see how that applies to “drone swarming technology.” The only obvious use case is in the generation of propaganda.

          • S13Ni@lemmy.studio

            You could use LLM-like AI to go through vast amounts of combat data to make sense of it in the field, and to analyze data from mass surveillance. I doubt they need many more excuses.

            A case could be made that tech bros have overhyped the importance of AI to the military-industrial complex, but it nevertheless has plenty of nasty uses.

          • maynarkh@feddit.nl

            The only obvious use case is in the generation of propaganda.

            It is indeed. I would guess that’s the game, and is already happening.

    • frog 🐸@beehaw.org

      It’s worth remembering that the Luddites were not against technology. They were against technology that replaced workers, without compensating them for the loss, so the owners of the technology could profit.

      • luciole (he/him)@beehaw.org

        Moreover, the Luddites were opposed to the replacement of independent at-home workers with oppressed factory child labourers. Much like OpenAI aims to replace creative professionals with an army of precarious, poorly paid microworkers.

        • frog 🐸@beehaw.org

          Yep! And it’s not like a lot of creative professionals are paid all that well right now. The tech and finance industries do not value creatives.

            • frog 🐸@beehaw.org

              Obviously I can’t speak for all countries, but in mine, an artist and a programmer with the same years of experience working for the same company will not be getting the same salary, despite the fact that neither could do the other’s job. One of those salaries will be slightly above minimum wage (which is currently lower than the wage needed to cover the cost of living), and the other will be around double the national average wage. So there are in fact artists using food banks right now, and it’s not because the creatives aren’t working as hard as the tech professionals. One is simply valued higher than the other.

      • Even_Adder@lemmy.dbzer0.com

        Their problem was that they smashed too many looms and not enough capitalists. AI training isn’t just for big corporations. We shouldn’t applaud people who put up barriers that will make it prohibitively expensive for regular people to keep up. This will only help the rich and give corporations control over a public technology.

        • frog 🐸@beehaw.org

          It should be prohibitively expensive for anyone to steal from regular people, whether it’s big companies or other regular people. I’m not more enthusiastic about the idea of people stealing from artists to create open source AIs than I am when corporations do it. For an open source AI to be worth the name, it would have to use only open source training data - ie, stuff that is in the public domain or has specifically had an open source licence assigned to it. If the creator hasn’t said they’re okay with their content being used for AI training, then it’s not valid for use in an open source AI.

          • Even_Adder@lemmy.dbzer0.com

            I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, if you haven’t already. The EFF is a digital rights group that most recently won a historic case: border guards now need a warrant to search your phone.

            People are trying to conjure up new rights to take away another piece of the public’s right of access to information, to fashion themselves as a new owner class. Artists and everyone else should accept that others have the same rights as they do; they can’t take those opportunities away from other people just because it’s their turn now.

            There’s already a model trained on just Creative Commons licensed data, but you don’t see them promoting it. That’s because it was never about the data; it’s an attack on their status, and when asked about generators that didn’t use their art, they came out overwhelmingly against them, with the same condescending and reductive takes they’ve been using this whole time.

            I believe that generative art, warts and all, is a vital new form of art that is shaking things up, challenging preconceptions, and getting people angry - just like art should.

            • frog 🐸@beehaw.org

              I’m actually fine with generative AI that uses only public domain and creative commons content. I’m not threatened by AI as a creative, because AI can only iterate on its own training data. Only humans can create something genuinely new and original. My objection is solely on the basis of theft. If we agree that everybody has the basic right to control their own data and content, then that logically has to extend to artists: they must have the right to control their own work, and consenting to humans viewing it isn’t the same as consenting to having it fed into an AI.

              I suspect there would be a lot more artists open to considering the benefits of a generative AI using only public domain and creative commons works if they weren’t justifiably aggrieved at having their life’s work strip-mined. Expecting the victims of exploitation to be 100% rational about their exploiter (or other adjacent parties trying to argue why it’s fine when they do it) isn’t reasonable. At this point, artists simply don’t trust the generative AI industry, and there needs to be a significant and concerted effort to rectify existing wrongs to repair that trust. One organisation offering a model based on creative commons artworks, when the rest of the generative AI industry is still stealing everything that’s not nailed down, does not promote trust. Regulate, compensate, mend some fences, and build trust. Then go and talk to artists, and have the conversations that should have been had before the first AI models were built. The AI industry needs to prove it can be trusted, and then learn to ask for permission. Then, maybe, it can ask for forgiveness.

  • darkphotonstudio@beehaw.org

    I’m an artist and I can guarantee his lawsuits will accomplish jack squat for people like me. In fact, if successful, it will likely hurt artists trying to adapt to AI. Let’s be serious here, copyright doesn’t really protect artists, it’s a club for corporations to swing around to control our culture. AI isn’t the problem, capitalism is.

  • moon_matter@kbin.social

    If successful, he could force the companies responsible for applications such as ChatGPT or Midjourney to compensate thousands of creators. They may even have to retire their algorithms and retrain them with databases that don’t infringe on intellectual property rights.

    They will readily agree to this after having made their money, and use their ill-gotten gains to train a new model. The rest of us will have to go pound sand, since making a new model will have become prohibitively expensive. Good intentions, but it will only help them by pulling up the ladder behind them.

  • Rivalarrival@lemmy.today

    Not going to happen, buddy.

    The fundamental purpose of copyright is to promote the progress of science and the useful arts. The purpose is to expand our collective body of knowledge; to increase our collective intelligence.

    It is impossible to infringe on copyright by reading a book. Even if the book was illegally produced and distributed, the act of reading it is not a copyright violation. A natural mind cannot be denied access to published information through copyright law.

    That natural mind is restricted by not being allowed to produce or distribute a copy or a derivative work, but knowledge of the work and inspiration from that work are not restricted by copyright or patent law. Copyright exists specifically to promote providing knowledge to that mind.

    Blocking the development of an artificial mind fundamentally breaks the purpose for which copyright exists.

    • tardigrada@beehaw.orgOP

      This is a very simplified narrative, if I may say so. I’d argue there is no such thing as an artificial ‘mind.’ What you call a mind is a stochastic parrot. Whatever the bot yields, its whole output is copied work, because that’s the point of training a foundation model.

      Copyright laws in their current form can’t simply be applied here. I’m not a lawyer and can’t elaborate on how we should address the issue legally, but the models’ results are 100 percent copied. There can be no doubt. There’s no mind that has created anything original.

  • ky56@aussie.zone

    Dumbass. YouTube has single-handedly proven how broken the copyright system is, and this dick wants to make it worse. There needs to be a fairer rebalancing of how people are compensated and for how long.

    What exactly that looks like, I’m not sure, but I do know that upholding the current system is not the answer.