• shiveyarbles@beehaw.org

    This iteration of “AI” is hilarious: the people who were telling us not to steal IP are now trying to convince us it’s OK.

  • Square Singer@feddit.de

    That’s kinda weird regarding copyright. With copyright, all usage permissions are opt-in. Any kind of usage that isn’t expressly allowed is prohibited.

    Except for fair use, which cannot be prohibited.

    So either AI training is fair use and thus cannot be prohibited, or it isn’t and then it’s already prohibited.

    Either way, expressly prohibiting it does nothing, legally speaking.

  • AutoTL;DR@lemmings.world (bot)

    🤖 I’m a bot that provides automatic summaries for articles:


    In early August, The New York Times updated its terms of service (TOS) to prohibit scraping its articles and images for AI training, reports Adweek.

    The move comes at a time when tech companies have continued to monetize AI language apps such as ChatGPT and Google Bard, which gained their capabilities through massive unauthorized scrapes of Internet data.

    Further down, in section 4.1, the terms say that without NYT’s prior written consent, no one may “use the Content for the development of any software program, including, but not limited to, training a machine learning or artificial intelligence (AI) system.”

    Using a process called unsupervised learning, the web data was fed into neural networks, allowing AI models to gain a conceptual sense of language by analyzing the relationships between words.

    The controversial nature of using scraped data to train AI models, which has not been fully resolved in US courts, has led to at least one lawsuit that accuses OpenAI of plagiarism due to the practice.

    Last week, the Associated Press and several other news organizations published an open letter saying that “a legal framework must be developed to protect the content that powers AI applications,” among other concerns.
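
    (As a purely illustrative aside, the “analyzing the relationships between words” step boils down to statistics over adjacent tokens. A deliberately tiny sketch, nothing like the actual GPT training pipeline:)

        # Toy illustration only: count which word tends to follow which,
        # then "predict" the most likely next word. Real models learn far
        # richer relationships, but the raw material is the same scraped text.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate the fish".split()

        following = defaultdict(Counter)
        for current_word, next_word in zip(corpus, corpus[1:]):
            following[current_word][next_word] += 1

        print(following["the"].most_common(1))  # -> [('cat', 2)]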

  • Dizzy Devil Ducky@lemm.ee

    A large problem with this kind of TOS change is: what happens if we ever end up with a sentient AI that can think on its own?

    How would you stop that sentient AI from scraping your site, when it can go directly to your article, copy it word for word, and send it to its own training algorithm, without blocking access for everyone?

    Paywalls can be bypassed, and AI has been found to be better than humans at solving the CAPTCHA puzzles meant to stop bots, so there isn’t a good solution I can think of that doesn’t endanger the whole internet.

    • roguetrick@kbin.social

      Listen buddy, if we get artificial general intelligence the last thing we gotta worry about is it reading the paper.

    • d3Xt3r@beehaw.org

      There’s no need to wait for a sentient AI for that. I mean, the currently publicized method for blocking these bots is robots.txt, which is only a very polite way of asking bots to duck off - they really have no reason to respect it if they don’t want to. OpenAI (or anyone else) could also use multiple public proxy servers for scraping, so websites wouldn’t be able to point fingers at them.

      Even if the bot makers avoid using proxies, they could still get the content indirectly by scraping other sites which repost it, such as archive.org, or even just normal sites that repost stuff. Heck, they could scrape, say, Lemmy indirectly: we’ve got the AutoTLDR bot here, and combined with comments and quotes from several people, any competent LLM could easily learn the content of the original article without ever touching it directly.
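
      To be concrete about how polite that request is: the suggested opt-out is literally just a couple of lines in robots.txt that any scraper is free to ignore (GPTBot and CCBot are the user agents OpenAI and Common Crawl document; anything beyond that here is illustrative):

          User-agent: GPTBot
          Disallow: /

          User-agent: CCBot
          Disallow: /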

      So unless the site has posted a 100% unique piece of information which hasn’t been published anywhere else, AND they’ve also implemented a strict “no reproduction in any form” rule that extends to prohibiting any discussion of the source material, it would be near impossible to stop the bot creators or blame them for bypassing the ToS. And we all know what happens when you go to great lengths to try and silence a subject on the internet…

  • Beej Jorgensen@lemmy.sdf.org

    This is going to be interesting. Let’s say I buy an article then copy the entire thing and send it to my friend the AI enthusiast. I’ve certainly violated copyright law.

    But if my friend then goes on to run the article through an algorithm, it’s not at all clear to me that there’s been a copyright violation by them.

    Or, indeed, how you could word a law that prohibits algorithmic consumption of the data without making it impossible to ever simply view the data.

    • Your friend has no license to the content at all, so they can’t legally use it in the first place. If I drive a car you stole, that doesn’t mean I’m let off the hook once I get caught just because you were the one committing the actual crime.

      Your friend knows you don’t work for the New York Times, so they should know you don’t have the authority to grant any license. From a copyright standpoint, you can’t even download an HTML copy of an article you paid for and share it with your friends, no matter what they do with it.

      In practice you’ll get caught and forced to pay up while your friend will probably be let off the hook, just like when you’re sharing your stash of pirated movies with the neighbourhood.

      Copyright forbids much more than people think it does. You can assume that any digital reproduction (which includes copying files to a flash drive or uploading them to a file-sharing website) of content you don’t own or have explicit permission to use (e.g. public-domain material) is illegal. There are exemptions (academic exemptions are the strongest ones in this case), but the often-cited “fair use” argument usually isn’t a solid defence unless you’re willing to prove in court that you comply with every single pillar of fair use.

      • Beej Jorgensen@lemmy.sdf.org

        I’ll simplify, then. Can I download an article that I’ve paid for and have permission to download, then have an algorithm operate on that data?

        • You can download the article and do with it what you wish. Copyright law mostly governs redistributing it.

          If you distribute that algorithm, things become hairy. There are various lawsuits going on about AI and AI models right now that will probably take years to culminate in anything we can refer to. From what I’ve read from other lawyers, this stuff could really go either way at the moment.

          Unless we get AI legislation beforehand, we’ll have to wait and see what the judges say about this.

      • Even_Adder@lemmy.dbzer0.com

        I think you’re oversimplifying this issue and ignoring the context and purpose of using their content. Original analysis of data is not illegal, and that’s all these models are: a collection of observations about how works relate to each other. As long as you can show that your storage was noncommercial and no more than necessary to achieve your fair-use objectives, you can get by.

        Fair use protects reverse engineering, indexing for search engines, and other forms of analysis that create new knowledge about works or bodies of works. Moreover, fair use is a flexible and context-specific doctrine, and you don’t have to prove in court that you comply with every single pillar of fair use. It depends on the situation and four factors: why you used the work, what kind of work it is, how much of it you used, and how your use affects the market for it. No single factor outweighs the others, and it is possible to have a fair use defense even if you don’t satisfy every one of them.

        You’re right about copyright forbidding much more than people think, but it also allows much more than people think. Fair use is also not a weak or unreliable defense, but a vital one that protects creativity, innovation, and freedom of expression. It’s not something that you have to prove in court, but something you assert as a right.

        • Fair use does cover creative endeavors, that is true, but on the internet everyone claims fair use for just about everything. The silliest examples are the people pasting fair use case text underneath entire videos and songs they’ve ripped and uploaded to YouTube, as if it’s some kind of spell to keep the RIAA away.

          Creative works like parodies, reviews, and quotes are allowed, but there are also strong limits depending on the context. Music sampling, for example, is not fair use even if you only use short bits, and is subject to licensing deals with music studios, even if you recorded the samples from the radio.

          What I was responding to here is the idea of running an automated program on information shared without permission. In that case, the fair use argument becomes very difficult to make, in my opinion. Search engines and other forms of analysis are definitely allowed, but those copies are obtained through legitimate means. Downloading articles from behind a paywall and sharing them isn’t the same as indexing publicly available web pages.

          • Even_Adder@lemmy.dbzer0.com

            What I was responding to here is the idea of running an automated program on information shared without permission. In that case, the fair use argument becomes very difficult to make, in my opinion. Search engines and other forms of analysis are definitely allowed, but those copies are obtained through legitimate means. Downloading articles from behind a paywall and sharing them isn’t the same as indexing publicly available web pages.

            I’m not saying you should get anything through illicit means; you could just view the web page yourself rather than having someone send it to you. For example, LAION provides links to internet data; they don’t download or copy content from sites. By visiting the pages yourself, you’d dodge all those problems.
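
            As a rough sketch of that “links only” workflow (purely illustrative; the file name and the url column here are my assumptions, not LAION’s actual schema):

                # Hypothetical example: the dataset ships only URLs (plus captions);
                # whoever wants the underlying pages fetches them for themselves.
                import csv
                import urllib.request

                def fetch_listed_pages(index_path):
                    """Yield (url, html) for every link in a simple CSV index."""
                    with open(index_path, newline="") as f:
                        for row in csv.DictReader(f):
                            with urllib.request.urlopen(row["url"]) as resp:
                                yield row["url"], resp.read()

                # for url, html in fetch_listed_pages("link_index.csv"):
                #     ...analyze each page locally, without redistributing it...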

  • quasar@aussie.zone

    This whole thing has made me wonder about search engines, given that they index articles and build knowledge graphs out of them.

    Though I assume stuff behind paywalls isn’t indexed.

    • lloram239@feddit.de

      There is also BingChat, which is just straight-up ChatGPT with integrated Bing search. Guess that does count as a “search engine”, not AI, since it provides links to its sources, which plain ChatGPT can’t.