• higgs@lemmy.world · 1 year ago

      There will always be loopholes. The nice thing about AI is that it’s constantly learning and adapts to new situations very fast.

      • BURN@lemmy.world · 1 year ago

        That’s not inherently true. AI (at least in this sense, where it’s actually machine learning) does not learn on the fly. It learns from base data and applies those findings until it’s retrained.
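
        A toy sketch of that train-then-freeze loop (my own illustration, not anything from the thread): the “model” here is just a banned-word set learned once from labeled base data. At inference time it only applies what it already learned, so a novel evasion slips through until the next retrain.

        ```python
        def fit(samples):
            # "Training": learn a banned-word set from labeled base data.
            return {word for text, bad in samples for word in text.split() if bad}

        def predict(model, text):
            # Inference: the frozen model only applies what it already learned;
            # nothing it sees here changes the model.
            return any(word in model for word in text.split())

        base_data = [("buy spam now", True), ("hello friend", False)]
        model = fit(base_data)

        print(predict(model, "spam offer"))   # True: word seen during training
        print(predict(model, "sp4m offer"))   # False: novel evasion, missed until retrained
        ```

        A real moderation model is vastly more complex, but the workflow is the same shape: fit offline, deploy frozen, retrain on a schedule.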

        • higgs@lemmy.world · 1 year ago

          You’re correct, and that’s way more efficient than teaching dozens of people what to ban. People make mistakes; tech doesn’t (as long as it’s coded correctly).

          • BURN@lemmy.world · 1 year ago

            I reject that pretty majorly. Tech makes mistakes at a much higher rate than humans, even when built correctly. Tech just makes consistent mistakes instead.

            I don’t trust AI moderation of anything.

            • higgs@lemmy.world · 1 year ago

              Do you have an example of correctly built tech that makes constant mistakes?

              • BURN@lemmy.world · 1 year ago

                Pretty much any AI system.

                Photo AIs still have trouble distinguishing dogs from cats. Cancer-detection AIs analyzing x-rays turned out to be basing decisions on which doctor had signed them.

                Boeing built a properly working automated flight-control system for the 737 MAX, but didn’t train pilots on it correctly. Pilots expected behavior similar to older versions, and that led to plane crashes. The software was 100% right, but it made mistakes because the human input was different than expected.

    • Katana314@lemmy.world · 1 year ago

      Whenever a law is invented to apply protections, someone always points out that a criminal mastermind can circumvent that protection.

      That often doesn’t matter, because intelligent people have no motivation to breach the protection, and less intelligent people fall into the trap. Even with some circumvention, it can catch a large number of bad actors.

      It’s like saying “Fishing won’t work because fish will just learn to swim around nets”.