I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work counts as copyright infringement, but I think what people need to consider is that all that’s going to do is keep AI out of the hands of regular people and place it squarely in the hands of the people and organizations wealthy and powerful enough to train it for their own use.

If this isn’t actually what you want, then what’s your game plan for placing copyright restrictions on AI training that will actually work? Have you considered how it’s likely to play out? Are you going to be able to stop Elon Musk, Mark Zuckerberg, and the NSA from training an AI on whatever they want and using it to push propaganda on the public? As far as I can tell, all that copyright restrictions will accomplish is to concentrate the power of AI (which we’re only beginning to explore) in the hands of the sorts of people who are least likely to want to do anything good with it.

I know I’m posting this in a hostile space, and I’m sure a lot of people here disagree with my opinion on how copyright should (and should not) apply to AI training, and that’s fine (the jury is literally still out on that). What I’m interested in is what your end game is. How do you expect things to actually work out if you get the laws that you want? I would personally argue that an outcome where Mark Zuckerberg gets AI and the rest of us don’t is the absolute worst possibility.

  • Ragnell@kbin.social

    @IncognitoErgoSum Honestly? Arguing against AI to anyone I can find, and supporting any legal action to regulate the industry. That includes arguing with my boss when he considers purchasing an AI service.

    If I find that something of mine has been used to train an AI, I’m willing to join a class action suit. The next work contract renegotiation I have will take into account the possibility of my writing being used for training, and the answer will be no. I’m supporting the SAG-AFTRA and WGA strikes because those contracts will set important precedents for how AI can be used, in the creative industries at least, and those precedents will likely spread to other industries.

    And I think that if enough people don’t buy into the hype, stay skeptical, and keep public opinion against it, then it’s less likely AI will be used in industries that need strict safety standards until we get a regulatory agency for it.

    • IncognitoErgoSum@kbin.social (OP)

      I get it, then.

      It’s more about the utilitarian goal of convincing people of something because it’s convenient for you if the public believes it, in order to protect yourself and your immediate peers from automation, as opposed to actually seeking the truth and sticking with established legal precedent.

      Legally, your class action lawsuit doesn’t really have a leg to stand on, but you might manage to win anyway if you can depend on the ignorance of the judge and jury about how AI actually works, and prejudice them against it. If you can get people to think of computer scientists and AI researchers as “tech bros” instead of scientists with PhDs, you might be able to get them to dismiss what they say as “hype” and “fairy tales”.

      • Ragnell@kbin.social

        I still say you’re wrong about how the AI actually works, man. You’re looking at it through rose-colored glasses, head filled with sci-fi catchphrases. But it’s just a math machine.

        • IncognitoErgoSum@kbin.social (OP)

          I’m looking at it with a computer science degree and experience with AI programming libraries.

          And yes, it’s a machine that simulates neurons using math. We simulate physics with math all the way down to the quantum foam. I don’t know what your point is. Whether it’s simulated neurons or real neurons, it learns concepts, and concepts cannot be copyrighted.
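
          To be concrete about what “simulates neurons using math” means, here’s a toy sketch of a single artificial neuron. This is the textbook abstraction, not any particular library’s internals:

          ```python
          import math

          def neuron(inputs, weights, bias):
              # Weighted sum of the inputs, plus a bias term.
              activation = sum(x * w for x, w in zip(inputs, weights)) + bias
              # Squash the result into (0, 1) with a sigmoid, loosely
              # analogous to a biological neuron's firing rate.
              return 1.0 / (1.0 + math.exp(-activation))

          # Networks like the ones behind ChatGPT and Stable Diffusion stack
          # enormous numbers of units like this and adjust the weights during
          # training; every step of it is plain arithmetic.
          print(neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))
          ```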

          I have a sneaking suspicion that, since you switched tactics from googling the wrong flowchart to accusing me of not caring about workers over a contract dispute that’s completely unrelated to the copyright issues I’m talking about, you at least suspect that I know what I’m talking about.

          Anyway, since you’re arguing from personal convenience rather than fact, I can’t really trust anything you say, because we’re on entirely different wavelengths. You’ve already pretty much indicated that even if I were to convince you I’m right, you’d still go on doing exactly what you’re doing, because you’re on a crusade to save a small group of your peers from automation, and damn the rest of us.

          Best of luck to you.

          • Ragnell@kbin.social

            Yeah, we’re on different wavelengths. But I do have over twenty years in cyber transport and electronics. I know the first four layers of the OSI model in and out, including that physical layer it seems just about all programmers forget about completely.

            It’s not learning. It’s not reading. It’s not COMPREHENDING. It is processing. It is not like a person.

            I admit, I’m firing from any direction I can get an angle at, because this idea that these programs are actual AGI and comparable to humanity is, well, dangerous. There are people with power and influence who want to put these things in areas that WILL get people hurt. There are people who are dying to put them to work doing every bit of writing from scripts to NOTAMs, and they are horrifically unreliable because they have no way of verifying the ACCURACY of what they write. They do not have the ability to make a judgement, which is a key component of human thinking. They can only produce the set result coming out of the logic gate: feed it A and B and the gate’s rule fixes what comes out, every time. It has no way to evaluate whether A or B is the actual answer.
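
            To put that in code terms, here’s a toy gate; everything about its output is fixed by the rule, and nothing in it can judge whether an input was correct:

            ```python
            # A toy AND gate: the output is completely determined by the inputs.
            def and_gate(a: bool, b: bool) -> bool:
                return a and b

            # The gate applies its rule every time; it has no notion of which
            # input, if either, was actually true of the world.
            print(and_gate(True, False))  # False, regardless of which input was "right"
            ```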

            You call it a small group of my peers, but everyone is in trouble because people with money are SEVERELY overestimating the capabilities of these programs. The danger is not that AI will take over the world, but that idiots will hand AI the world and AI will tank it because AI does not come with the capabilities needed to make actual decisions.

            So yeah, I bring up the WGA/SAG-AFTRA strike, because it happens to be the best-known example of the harm being done not by the AI, but by the people who have too much faith in the AI and are ready to replace messy humans of all stripes with it.

            And I argue with you because you have too much faith in the AI. I’m not impressed by your degree, to be perfectly honest, because in my years in the trade I’ve known too many people with that degree who think they know far more than they do and end up having to rely on people like me to keep them grounded in what can actually be accomplished.

            • IncognitoErgoSum@kbin.social (OP)

              What, specifically, do you think I’m wrong about?

              If it’s the future potential of AI, that’s just a guess. AGI could be 100 years away (or financially impossible) as easily as it could be 5; it’s still in the future, and nobody is really qualified to guess when, or whether, it’ll come to fruition.

              If you think I’m wrong about the present potential of AI, I’ve already seen individuals with no budget use it to express themselves in ways that would previously have required an entire team and lots of money, and that’s where I believe its real potential lies right now: opening up the possibility for regular people to express themselves in ways that were impossible for them before. If Disney starts replacing animators with AI, I’ll be right there with you boycotting them. AI should be for everyone, not just for large corporations that can already afford to express themselves however they want.

              If you think I’m wrong that AIs like ChatGPT and Stable Diffusion do their computing with simulated neurons, let me know and I’ll try to find some literature about it from the source. I’ve had a lot of AI haters confidently tell me that they don’t (including in this thread), and I don’t know if you’re in that camp or not.

              • Ragnell@kbin.social

                I don’t think we know enough about the human brain to actually replicate it in electronics.

                • IncognitoErgoSum@kbin.social (OP)

                  So what does that mean? Do you not believe that AIs like ChatGPT and Stable Diffusion have neural networks made up of simulated neurons? Or are you saying that we haven’t simulated an actual human brain? Because the former is factually incorrect, and I never claimed the latter. Please explain exactly what “hype” you believe I’m buying into, because I don’t think you have any clue what it is you think I’m wrong about. You just really don’t want me to be right.

                  • Ragnell@kbin.social

                    I think they simulate what some people think neurons are like. I mean, I guess you can model the binary neurons fine, but there are analog neurons too (and that’s something that has only just been proven). But there are so many value inputs in the human brain that we haven’t isolated, so much about it we haven’t mapped. We don’t even know how the electricity is encoded. So no, I don’t think what you’re calling a “neural network” is ACTUALLY simulating the human brain.

                    The hype you’re buying into is that AI will improve our lives just by existing. Thing is, any new tech is a weapon in the hands of the rich, whether it’s available to the common man or not. We need to focus on setting the rules for the rich and enforcing the rules we have. Copyright is also a weapon in the hands of the rich, yes, but it has aspects designed to protect the common man, and we need to enforce those to keep the rich in line while we have them. If someday we junk copyright, it needs to be junked as a whole. We can’t go chucking copyright for small-time authors while the courts are still letting Disney keep Mickey Mouse out of the public domain, which is what you’re suggesting when you say copyright should be ignored so that the common man can make their own AI.

                    I think I’ve softened quite a bit in response to your arguments, honestly. It’s unfair to say I just don’t want you to be right. My position remains that copyright is a fair place to set limits on AI training.