In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained on large amounts of data scraped from the web, much of it copyright-protected. When companies disclose these data sources, they leave themselves open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted materials they’ve been using.

  • Tarte@kbin.social · 1 year ago

    If I created a very slow AI that took 10 or 100 hours for each response, would that make it any better in your opinion? I do not think the computation speed of software is a good basis for legislation.

    If analyzing a piece of art and replicating parts of it without permission is illegal, then it should be illegal regardless of the tools used. But human artists learn the same way, so that would make every single piece of art illegal, which is not an option. If we make only the digital tools illegal, the question still remains where to draw the line: how much inefficiency is required for a tool to still be considered legal?

    Is Adobe Photoshop generative auto-fill legal?
    Is translating with deepl.com or the modern Google Translate equivalent legal?
    Are voice activated commands on your mobile phone legal (Cortana, Siri, Google)?

    All of these tools were trained in similar ways. All of them take away jobs (read: make work/life more efficient).

    It’s hard to draw a line and I don’t have any solution to offer.