The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con

The new era of tech seems to be built on superstitious behaviour

  • Ragnell@kbin.social (OP) · edited · 1 year ago

    @matjoeman Well, we kinda do. At their most basic circuit level, computers use logic gates and perform their functions by doing mathematics. At the base, so that signals can travel even within microchips, everything must be binary coded. Even if octal, hex, or decimal sits on top of that on/off, there has to be a binary code at the core: two possibilities. We expand on that with more paths, more logic gates, more complexity, but the signal stays a square wave. Two voltages.
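
    To make the “two voltages” idea concrete, here’s a minimal sketch that treats the two levels as the booleans 0 and 1 and builds richer functions, up to a one-bit adder, out of nothing but a NAND gate. The gate choice is purely illustrative; any universal gate would do:

    ```python
    # Hypothetical illustration: "two voltages" modelled as the ints 0 and 1,
    # with every other function composed purely from NAND.

    def nand(a, b):
        return 0 if (a and b) else 1

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    def xor(a, b):
        return and_(or_(a, b), nand(a, b))

    def half_adder(a, b):
        # Adds two one-bit values, returning (sum bit, carry bit).
        return xor(a, b), and_(a, b)

    print(half_adder(1, 1))  # (0, 1): binary 10, i.e. 1 + 1 = 2
    ```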

    Even when a computer recreates a sound for a human ear, a sound that is a sine wave, that sound is still digitally encoded, meaning it’s a long string of bits. It’s two voltages, manipulated by logic gates, that end up producing the sine wave. But it’s still two voltages.
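
    Roughly how that encoding works, as a sketch under assumed parameters (the sample rate, bit depth, and 440 Hz tone below are arbitrary illustrative choices): sample the continuous sine wave at fixed intervals and quantize each sample to a fixed number of bits.

    ```python
    import math

    SAMPLE_RATE = 8000   # samples per second (illustrative choice)
    BIT_DEPTH = 8        # each sample stored in 8 bits (illustrative choice)
    FREQ = 440.0         # an example tone, concert A

    def sample(t):
        """Quantize sin(2*pi*f*t) into a BIT_DEPTH-bit unsigned integer."""
        value = math.sin(2 * math.pi * FREQ * t)    # continuous, -1..1
        levels = 2 ** BIT_DEPTH - 1
        return round((value + 1) / 2 * levels)      # discrete, 0..255

    for n in range(8):
        s = sample(n / SAMPLE_RATE)
        print(f"{s:3d} -> {s:08b}")  # every sample is just a run of on/off bits
    ```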

    Human brains, however, process those sine waves, those complex frequencies spanning many voltages, as a spectrum. They aren’t boiling it down to two voltages; they aren’t basing it all on two voltages.

    I’m not saying we’ll NEVER get AI, but I think we’d need a radically different way of transferring information WITHIN the microchip to achieve that level of complexity.

    • matjoeman@kbin.social · 1 year ago

      I don’t think there’s a fundamental reason why you couldn’t implement AI digitally. Maybe there’s some high-level reason why it would need analog processing, but I doubt it.

      ML models use floating-point numbers, which approximate continuous values. An analog computer like you’re describing could maybe speed up those calculations, but it wouldn’t change the fact that ML models just can’t be intelligent because of how they work (in my opinion).
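
      To illustrate that approximation: float32, the datatype most ML models use, is a finite grid of values standing in for the continuum. A small sketch using only Python’s standard library (math.ulp needs Python 3.9+):

      ```python
      import math
      import struct

      def float32_bits(x):
          """The 32 bits actually stored for this value in float32."""
          return format(struct.unpack('>I', struct.pack('>f', x))[0], '032b')

      # Round-trip 0.1 through float32: the nearest representable value
      # is not exactly 0.1, because the grid of floats is finite.
      x32 = struct.unpack('>f', struct.pack('>f', 0.1))[0]
      print(x32)                # 0.10000000149011612
      print(float32_bits(0.1))  # the discrete bit pattern behind it
      print(math.ulp(1.0))      # the gap between 1.0 and the next float64
      ```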

      • Ragnell@kbin.social (OP) · edited · 1 year ago

        @matjoeman Maybe, but I don’t think we’re anywhere near that level of complexity yet, and attempts to relate the way humans think to the way computers process information aren’t useful.