I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

    • Veraticus@lib.lgbt · 1 year ago

      How is that germane to this question? Do you agree humans can experience mental phenomena? Like, do you think I have any mental models at all?

      If so, then that is the difference between me and an LLM.

      • SatanicNotMessianic@lemmy.ml · 1 year ago

        I think you have a mental model, and that it is analogous to the model created in an LLM in that both are representable by a semantic graph (or equivalently an n-dimensional matrix) relating concepts that are realized via terms.

        You have never in your life encountered a dodo. You know what a dodo is (using the present tense because I’m talking about a concept). It is a bird, so it relates evolutionarily and ecologically to “bird.” It’s flightless, so it relates to “ostrich” and “emu.” It is extinct, so it relates to all of the species-extinction ideas you have. Humans perhaps contributed to the extinction, so it links to human-caused ecological change, which in turn links to human-caused climate change. Human-introduced invasive species caused ecological change on Mauritius, and that may have been a major factor in driving the dodo to extinction. People ate them, so maybe in your head it has a relation to wild turkeys. And so on. That’s how minds work. That’s how the human cognitive model of the world works. That’s how LLMs work.
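
        To put that web of associations in concrete terms (a toy sketch in Python; the nodes and edges here are made up for illustration, not anyone’s actual mental model), it’s just a graph of concepts pointing at related concepts:

            # Tiny semantic graph as an adjacency list: each concept maps to related concepts.
            semantic_graph = {
                "dodo": ["bird", "flightless", "extinct", "food"],
                "flightless": ["ostrich", "emu"],
                "extinct": ["human-caused ecological change"],
                "human-caused ecological change": ["human-caused climate change", "invasive species"],
                "food": ["wild turkey"],
            }

            # Walk outward from "dodo" to collect everything the word pulls in.
            def related(start, graph):
                seen, stack = set(), [start]
                while stack:
                    node = stack.pop()
                    if node not in seen:
                        seen.add(node)
                        stack.extend(graph.get(node, []))
                return seen

            print(related("dodo", semantic_graph))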

        Visualize an n-dimensional space in which these semantic topics are embedded. The interpretation of the dimensions doesn’t matter. Instead, we’re just worried about the distances between concepts. Dodo is closer to turkey than it is to snake. Dodo is closer to snake than it is to rock. Dodo is closer to rock than it is to the feeling of melancholy I get when listening to Tori Amos. We can grasp this intuitively. We can mathematize it by formally placing the various concepts in a metric space.
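
        Concretely (again a toy sketch, with hand-picked 3-dimensional vectors; a real embedding has hundreds of dimensions learned from data), “distance between concepts in a metric space” just means something like this:

            import math

            # Hand-picked toy "embeddings" -- purely illustrative, not learned from data.
            concepts = {
                "dodo":   (0.9, 0.8, 0.1),
                "turkey": (0.8, 0.7, 0.2),
                "snake":  (0.2, 0.6, 0.3),
                "rock":   (0.1, 0.0, 0.9),
            }

            def distance(a, b):
                """Euclidean distance between two concept vectors."""
                return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

            # Dodo is closer to turkey than to snake, and closer to snake than to rock.
            for other in ("turkey", "snake", "rock"):
                print(other, round(distance(concepts["dodo"], concepts[other]), 3))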

        There’s a lot more to unpack, from neural correlates of consciousness to cognitive linguistics and embodied learning using metaphorical reasoning, but that’s kind of the gist of it boiled down to an overly long post.

        • Veraticus@lib.lgbt · 1 year ago

          “That’s how LLMs work.”

          This is not how LLMs work. LLMs do not have complex thought webs correlating concepts like birds, flightlessness, extinction, food, and so on. That is how humans work.

          An LLM assembles a mathematical model of what word should follow any other word by analyzing terabytes of data. If in its training corpus the nearest word to “dodo” is “attractive,” the LLM will almost always tell you that dodos are attractive. This is not because those concepts are actually related to the LLM, because the LLM is attracted to dodos, or because LLMs have any thoughts at all. It is simply the output of a bunch of math based on word proximity.
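
          To caricature that (a toy Python sketch of a pure word-frequency model; an actual LLM is a neural network over learned token embeddings, not a literal lookup table like this):

              from collections import Counter, defaultdict

              # Tiny stand-in corpus; a real training set is terabytes of text.
              corpus = "the dodo is extinct . the dodo is flightless . the emu is flightless".split()

              # Count how often each word follows each other word (a bigram table).
              following = defaultdict(Counter)
              for prev, nxt in zip(corpus, corpus[1:]):
                  following[prev][nxt] += 1

              def predict(word):
                  """Return the word most often seen after `word` in the corpus."""
                  return following[word].most_common(1)[0][0]

              print(predict("dodo"))  # -> "is"
              print(predict("is"))    # -> "flightless" (seen twice, vs. "extinct" once)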

          Humans have cognition and mental models. LLMs have frequency and word weights. While you have correctly identified that both of these things can be portrayed as n-dimensional matrices, you can also use those tools to describe electrical currents or the movement of stars. But those things contain no more thought, and have no more mental phenomena occurring in them, than LLMs do.

            • Veraticus@lib.lgbt · 1 year ago

              No, they embed word weights in metric spaces. Human thought is more like semantic concepts in a metric space (though I don’t think that’s entirely unequivocal; human thought is not very well understood). Even if the space is similar, what’s in them is definitely not.