I’ve gone down a rabbit hole here.

I’ve been looking at LK-99, the potential room-temperature superconductor, lately. Then I came across an AI chat and decided to test it: I asked it to propose a room-temperature superconductor, and it suggested (NdBaCaCuO)_7(SrCuO_2)_2 along with a means of production, which got me thinking. It’s just a system that looks at patterns and answers the question. I’m not saying it has made anything new, but it seems to me that eventually a chat AI would be able to suggest a new material fairly easily.

Has AI actually discovered or invented anything outside of its own computer industry, and how close are we to it doing things humans haven’t done before?

  • MajinBlayze@lemm.ee · 41 points · 1 year ago

    It’s important to be clear what kind of actual system you’re using when you say “AI”.

    If you’re talking about something like ChatGPT, you’re using an LLM, or “Large Language Model”. Its goal is to produce something that reasonably looks like a human wrote it. It has reviewed a ridiculous amount of human text, and has a metric assload of weights encoding the relationships between those words.

    If the LLM sees your question and associates a particular compound with superconductors, it’s because it’s seen these things related in other writings (directly or indirectly) or at least sees the relationship as plausible.
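
    Concretely, all it is doing under the hood is scoring which token is likely to come next, given the text so far. A minimal sketch of that (assuming the Hugging Face transformers library and the small GPT-2 checkpoint, used here purely as an illustration, not because ChatGPT works exactly this way):

    ```python
    # Toy illustration: a language model only scores likely next tokens.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "A promising candidate material for a superconductor is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # scores for every possible next token

    top_tokens = torch.topk(logits, k=5).indices
    print([tokenizer.decode(int(t)) for t in top_tokens])   # the five most "plausible" continuations
    ```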

    It’s important not to ascribe more intent to what you’re seeing than actually exists. It can’t understand what a superconductor is or how materials achieve that state; it’s just really good at stringing related words together in a convincing manner.

    That’s not to say it isn’t cool or useful, or that ML (machine learning) can’t be used to help find answers to these kinds of questions.

    • oakey66@lemmy.world · 13 points · 1 year ago

      Exactly. It’s just text prediction software that is really good at making itself sound plausible. It could tell you something completely false and have no idea it’s stating a falsehood. There’s no intelligence here. It’s a very precise word guesser, which is great for specific settings. But there’s a huge amount of hype associated with this tool, and it’s very much by design (by tech companies).

    • treadful@lemmy.zip · 2 points · 1 year ago

      If the LLM sees your question and associates a particular compound with superconductors, it’s because it’s seen these things related in other writings (directly or indirectly).

      I’m not convinced of this. LLMs haven’t been just spitting out prior art, despite what some people seem to suggest. It’s not just auto-complete; that’s just a useful analogy.

      For instance, I’m fascinated by the study that got GPT-4 to draw a unicorn using LaTeX. It wasn’t great, but it was recognizable to us as a unicorn. And apparently that’s gotten better with iterations. GPT (presumably) has no idea what a unicorn looks like, except through text descriptions. Who knows how it goes from written descriptions of a mythical being to a 2D drawing in a markup language without being trained on images, imagery, or any concept of what things look like.

      It’s important not to ascribe more intent to what you’re seeing than actually exists.

      But this is also true. I’m trying hard not to anthropomorphize this LLM, but it sure seems like there’s some emergent effect that kind of looks like intelligence to a layman like myself.

      • MajinBlayze@lemm.ee · 2 points · 1 year ago

        To be clear, I’m not trying to argue that it can only produce exactly what it’s seen; I recognize that this argument is frankly overstated in the media. (The interviews with Adam Conover are great examples; he’s not wrong per se, but he does oversimplify things to the point that I think a lot of people misunderstand what’s being discussed.)

        The ability to recombine what it’s seen in different ways as an emergent property is interesting and provocative, but isn’t really what OP is asking about.

        A better example of how LLMs can be useful in research like what OP described would be asking one to coalesce information from multiple existing studies about which properties correlate with superconductivity, in order to help accelerate research in collaboration with actual materials scientists. This is all research that could be done without LLMs, or even without ML, but having a general way to parse and filter these kinds of documents is still incredibly powerful, and will be a sort of force multiplier for these researchers going forward.
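
        As a toy sketch of that workflow (the abstracts and the keyword filter here are invented for illustration; a real pipeline would use proper retrieval and an actual model call):

        ```python
        # Gather the studies that mention the property of interest, then hand the LLM
        # one condensed prompt instead of reading every paper yourself.
        abstracts = {
            "Study A": "Lattice strain in the cuprate layers raises the critical temperature...",
            "Study B": "No superconducting transition was observed above 30 K in this sample...",
            "Study C": "Hole doping levels correlate strongly with critical temperature...",
        }

        # Naive keyword filter standing in for real retrieval and ranking
        relevant = {title: text for title, text in abstracts.items()
                    if "critical temperature" in text.lower()}

        prompt = ("Summarize what these findings suggest about which properties "
                  "correlate with superconductivity:\n\n"
                  + "\n".join(f"{title}: {text}" for title, text in relevant.items()))
        print(prompt)   # this condensed prompt is what you would send to the model
        ```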

        My favorite example of the limitation of LLMs is to ask one to coin a new word, then google that word. It’s physically unable to produce a combination of letters that it doesn’t have indexed, and it doesn’t have an index for words it hasn’t seen. It might be able to create a new meaning for a word that it’s seen, but that isn’t necessarily the same.

  • GoLDElox@lemmy.world · 12 points · 1 year ago

    I believe Google created something to do with protein folding using DeepMind.

    idk tho

    • DLBPointon@lemmy.world · 5 points · 1 year ago

      AlphaFold2. Pretty cool tech that I used in my degree to solve an impossible test posed by my tutor.

      The fact that this thing can get so close to X-ray crystallography results is just amazing.

  • Ziggurat@sh.itjust.works · 9 points · 1 year ago

    Define AI.

    Machine learning, image processing, and similar “AI” techniques have been used in the sciences for decades. When a telescope does a large-field survey to detect transient phenomena like supernovae, you don’t have an astrophysicist looking at each photo. A smart astrophysicist (well, several of them) used image processing and machine learning to teach the computer to recognize that there is something interesting in the image and to send an automated message to other telescopes: something is happening at this position, watch it immediately. Is that AI?
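
    A stripped-down version of that idea looks something like this (synthetic images and an arbitrary threshold, not a real survey pipeline):

    ```python
    # Toy difference imaging: subtract a reference exposure from a new one and
    # flag anything that brightened far above the noise. The flagged coordinates
    # are what the automated alert would broadcast to other telescopes.
    import numpy as np

    rng = np.random.default_rng(42)
    reference = rng.normal(loc=100.0, scale=5.0, size=(512, 512))   # earlier image of the same field
    new = reference + rng.normal(scale=5.0, size=(512, 512))        # tonight's image
    new[300, 200] += 400.0                                          # a "supernova" appears

    difference = new - reference
    threshold = 10 * difference.std()                               # arbitrary cut for this sketch
    candidates = np.argwhere(difference > threshold)
    print(candidates)   # [[300 200]] -> the position to send out in the alert
    ```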

    What about the LHC, which produces so much data that very advanced algorithms are needed just to store and process it? Is that AI? What about protein folding, a very complex problem that relies on machine learning to find the proper solution?

    The big and recent breakthrough is that this is now accessible on an ordinary computer, and that training a large model is cheap enough to use it for less complicated topics.

  • fidodo@lemmy.world · 8 points · 1 year ago

    Yes, AI has been and is being used extensively for research already, particularly in problem spaces where pattern matching can yield powerful results for solution searches, which is actually a lot of problem spaces. Protein structures are probably the best example.

  • Maharashtra@lemmy.world · 8 points · 1 year ago

    The AIs we have at our disposal can’t invent a thing - yet - because they aren’t true AIs - again: yet.

    They are merely tools, and should be perceived as such, nothing more. It’s the people who use them who may apply them to tasks that result in invention, but on their own they are closer to the Chinese Room thought experiment than to thinking, inventive constructs.

    • Archpawn@lemmy.world · 3 points · 1 year ago

      I agree with the basic idea, but there’s not some fundamental distinction between what we have now and true AI. Maybe we’ll find breakthroughs that help, but the systems we’re using now would work given enough computing power and training. There’s nothing the human brain can do that they can’t, so with enough resources they can imitate the human brain.

      Making one smarter than a human wouldn’t be completely trivial, but I doubt it would be all that difficult given an AI powerful enough to imitate something smarter than a human.

      • Maharashtra@lemmy.world · 3 points · 1 year ago

        I agree with the basic idea, but there’s not some fundamental distinction between what we have now and true AI.

        Are the AIs we have at our disposal able and allowed to self-improve on their own? As in: can they modify their own internal procedures, and possibly reshape their own code, to better themselves, thus becoming more than their creators predicted them to be?

        There’s nothing the human brain can do that they can’t, so with enough resources they can imitate the human brain.

        The human brain can:

        • interfere with any of its “hardware” and break it
        • go insane
        • preoccupy itself with absolutely pointless stuff
        • create for the sake of creation itself
        • develop and maintain illusions it will come to believe are real
        • choose to act against undeniable proof given to it

        These are of course tongue-in-cheek examples of what a human brain can do, but - from the perspective of neuroscience, psychology, and a few adjacent fields of study - it is absolutely incorrect to say that AIs can do what a human brain can, because we’re still not sure how our brains work or what they are capable of.

        Based on dramatic articles in the news that promise us “trauma-erasing pills” or “a new breakthrough in curing Alzheimer’s”, we may tend to believe that we know what this funny blob in our heads is capable of, and that we have but a few small secrets left to uncover, but the fact is that we can’t even be sure how much there is left to discover.

        • Archpawn@lemmy.world · 1 point · 1 year ago

          Are the AIs we have at our disposal able and allowed to self-improve on their own?

          Yes. That’s what training is. There are systems for having them write their own training data. And ultimately, an AI that’s good enough at copying a human can write any text that human can. Humans can improve AI by writing code. So can an AI. Humans can improve AI by designing new microchips. So can an AI.

          These are of course tongue-in-cheek examples of what a human brain can do, but - from the perspective of neuroscience, psychology, and a few adjacent fields of study - it is absolutely incorrect to say that AIs can do what a human brain can, because we’re still not sure how our brains work or what they are capable of.

          We know they follow the laws of physics, which are Turing complete. And we have pretty good reason to believe that their calculations aren’t reliant on quantum physics.

          Individual neurons are complicated, but there’s no reason to believe the exact way they’re complicated matters. They’re complicated because they have to be self-replicating and self-repairing.

          • Maharashtra@lemmy.world · 1 point · 1 year ago

            Yes. That’s what training is.

            I’m not talking about building a database of data harvested from external sources. I’m not talking about the designs they make.

            I’m asking whether AIs are able and allowed to modify THEIR OWN code.

            We know they follow the laws of physics, which are Turing complete.

            Scientists are continuously baffled by the universe (a very physical thing) and the things they discover in it. The point is that knowing a thing follows certain specific laws does not give us an understanding of it or mastery over it.

            We do not know the full extent of what our brains are capable of. We do not even know where “the full extent” may end. Therefore we can’t say that AIs are capable of doing what our brains can, even if the underlying principles seem “basic” and “straightforward”.

            It’s like comparing a calculator to a supercomputer and claiming the former can do what the latter does, because “it’s all 0s and 1s, man”. 😉

            • Archpawn@lemmy.world · 1 point · 1 year ago

              I’m asking whether AIs are able and allowed to modify THEIR OWN code.

              Yes. They can write code. Right now they don’t have a big enough context window to write anything very useful, but scale everything up enough and they could.

              Scientists are continuously baffled by the universe (a very physical thing) and the things they discover in it. The point is that knowing a thing follows certain specific laws does not give us an understanding of it or mastery over it.

              And my point is that neural networks don’t require understanding of whatever they’re trained on. The reason I brought up that human brains are Turing complete is just to show that an algorithm for human-level intelligence exists. Given that, a sufficiently powerful neural network would be able to find one.

              • Maharashtra@lemmy.world · 1 point · 1 year ago

                Yes. They can write code.

                You don’t seem to understand me, or are trying very hard to not understand me.

                I’ll try again, but if it fails, I’ll assume it’s a “bring the horse to water” case.

                So: can AIs write their own code? As in “rewrite the code that is them”? Not write some small pieces of code or a small app, but can they write THEIR OWN code, the one that makes them run?

                And my point is that neural networks don’t require understanding of whatever they’re trained on.

                Your point does not address my argument.

                You can’t compare a thing to something you neither understand nor can predict the capabilities of.

  • nandeEbisu@lemmy.world · 7 points · 1 year ago

    AI is a super broad topic. I’ve heard people refer to Principal Component Analysis, which is from the 1930s, as “Machine Learning” or “AI”. In reality, it’s just that we now have infrastructure and data at a scale that lets us apply these techniques in larger contexts.
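
    For what it’s worth, PCA itself really is just a few lines of decades-old linear algebra (a minimal example, assuming numpy and scikit-learn are available):

    ```python
    # Principal Component Analysis: find the directions that explain most of the variance.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                              # 200 samples, 10 features
    X[:, 1] = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)     # make two features strongly correlated

    pca = PCA(n_components=2)
    reduced = pca.fit_transform(X)        # project onto the two strongest directions
    print(reduced.shape)                  # (200, 2)
    print(pca.explained_variance_ratio_)  # how much variance each component captures
    ```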

    I know pharmaceutical companies have been using AI in drug discovery for probably a decade now, but those models look very different from a large language model, and you still have a human sifting through the results and performing validation on a physical system to make sure the compounds safely do what is predicted, which most of them do not.

    When you ask something like ChatGPT a question like that, it’s doing something akin to recalling the papers on the subject it was trained on and outputting something that looks like a chemical compound such a paper would contain. It doesn’t have an understanding of what that formula means, only that when you arrange letters that way, it looks superficially similar to what would have been in the paper. It’s like in movies, when they need to show someone doing math: they fill a chalkboard with random equations that look like advanced math at first glance, but might be introductory-level material, or even just gibberish.

  • keyboardpithecus@lemmy.world · 4 points · 1 year ago

    A real AI would have explained to you that, with great probability, a room-temperature (and atmospheric-pressure) superconductor is not possible.

    A few experiments were successful with small grains tested under enormous pressures, but apart from that, a room-temperature superconductor is unlikely due to the high entropy.

  • bradorsomething@ttrpg.network · 3 points · 1 year ago

    You can think of the AI most regular people are using as better at sifting through the sea of data to find what you are looking for, and at discussing it with you in a more understandable way.

    The breakthroughs with AI are often computer programs that are told to look for an answer and change themselves somewhat randomly, but with many, many copies trying the task. The ones that fail are deleted, and the most successful change up again. This is repeated until you reliably get your result.
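
    A toy version of that “many copies, keep the best, mutate again” loop (the target string here is just a stand-in for whatever is actually being optimized):

    ```python
    # Minimal evolutionary search: make many random copies, score them, delete the
    # failures, keep and mutate the most successful, and repeat until one succeeds.
    import random

    TARGET = "superconductor"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def fitness(candidate):
        return sum(a == b for a, b in zip(candidate, TARGET))   # how many characters are right

    def mutate(candidate):
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        survivors = population[:20]                              # the ones that fail are deleted
        population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

    print(generation, population[0])
    ```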

  • Send_me_nude_girls@feddit.de · 2 points · 1 year ago

    You wouldn’t use an LLM (a chatbot built to appear human) for finding new materials; you’d specialize it with physical and chemical data and give it technical requirements. Something similar is done with AI for genome sequencing. AI is also used to design complex structures that are hard for humans to get their heads around, like new CPU layouts. AI can also create new art, though depending on the model, and similar to how humans do it, it will take more or less inspiration from already existing works. AI is not completely new either; we’ve used machine learning to filter things by pattern or find anomalies for decades. It just was never this easy to train your own model and generate usable results.

  • Botree@lemmy.world · 5 up / 5 down · 1 year ago

    The “breakthrough” likely happened a long time ago, but as with all tech, it only recently became accessible to the general public. An LLM alone isn’t even that sophisticated to begin with.

    AI assistants will soon be an essential part of our lives. They will handle grocery shopping based on your dietary requirements, conduct basic diagnoses of your health, create personalized software, books, music, and movies on the fly for you, do your taxes, and offer financial advice. All of this is already happening.

    • nandeEbisu@lemmy.world · 6 points · 1 year ago

      I assume you are referring to transformers, which came out in the literature around 2017. Attention on its own is significantly older, but wasn’t really used in a context close to a large language model until the early-to-mid 2010s.

      While attention is fairly simplistic, a trait that helps it parallelize and scale well, there is a lot of recent research around how the text is presented to the model and around the size of the models. There is also a lot of sophistication around instruction tuning and alignment, which is how you get from simple text continuation to something that can answer questions. I don’t think you could make something like ChatGPT using just the 2017 “Attention Is All You Need” paper.
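
      For reference, the core scaled dot-product attention operation from that paper really is tiny; a toy sketch in PyTorch (the multi-head projections, positional encodings, and feed-forward blocks are what the rest of a real model adds):

      ```python
      # Scaled dot-product attention, the core operation from "Attention Is All You Need".
      import torch
      import torch.nn.functional as F

      def attention(q, k, v):
          scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5  # how similar each query is to each key
          weights = F.softmax(scores, dim=-1)                    # turn similarities into attention weights
          return weights @ v                                     # weighted mix of the values

      q = k = v = torch.randn(1, 4, 8)   # batch of 1, sequence of 4 tokens, 8-dim embeddings
      print(attention(q, k, v).shape)    # torch.Size([1, 4, 8])
      ```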

      I suspect that publicly released models lag whatever Google or OpenAI has figured out by six months to a year, especially because there is now a lot of shareholder pressure around releasing LLM-based products. Advancements developed in the open-source community, like applying LoRA and quantization in various contexts, have a significantly shorter time between development and release.