• 1 Post
  • 49 Comments
Joined 1 year ago
Cake day: June 10th, 2023



  • Ownership in general isn’t some fundamental inalienable right. It’s just that if you let people own things, you give them more incentive to make things. I think intellectual property rights are far too extensive, but if we didn’t have them at all, how would we pay for R&D? How would we pay for big-budget games and movies? Maybe you’re happy contributing to openly licensed projects, but a lot of people have to pay rent and raise a family, and can’t take the time to contribute to projects like that, even if they want to, unless they already have the money to support themselves.














  • I’m asking whether AIs are able and allowed to modify THEIR OWN code.

    Yes. They can write code. Right now they don’t have a big enough context window to write anything very useful, but scale everything up enough and they could.

    Scientists are continuously baffled by the universe - a very physical thing - and by the things they discover there. The point is that knowing a thing follows certain specific laws does not give us understanding of it or mastery over it.

    And my point is that neural networks don’t require understanding of whatever they’re trained on. The reason I brought up that human brains are Turing complete is just to show that an algorithm for human-level intelligence exists. Given that, a sufficiently powerful neural network would be able to find one.
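
    To make the “an algorithm exists, so a big enough network can express it” step concrete, here’s a toy sketch of my own (not from any paper, and deliberately ignoring how hard training is): a single artificial neuron with hand-picked weights computes NAND, and since NAND is functionally complete, stacks of units like this can in principle represent any boolean circuit.

    ```python
    # Toy illustration: one threshold neuron computing NAND.
    # Weights and bias are hand-picked here, not learned.
    def neuron(x1, x2, w1=-2.0, w2=-2.0, bias=3.0):
        """Fires (returns 1) when the weighted sum is positive."""
        return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

    # Prints the NAND truth table: 1, 1, 1, 0.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neuron(a, b))
    ```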



  • Are AIs we have at our disposal able and allowed to self-improve on their own?

    Yes. That’s what training is. There are systems for having them write their own training data (there’s a rough sketch of that kind of loop at the end of this comment). And ultimately, an AI that’s good enough at copying a human can write any text that human can. Humans can improve AI by writing code. So can an AI. Humans can improve AI by designing new microchips. So can an AI.

    These are of course tongue-in-cheek examples of what a human brain can do, but - from the perspective of neuroscience, psychology and a few adjacent fields of study - it is absolutely incorrect to say that AIs can do what a human brain can, because we’re still not sure how our brains work, and what they are capable of.

    We know they follow the laws of physics, which are Turing complete. And we have pretty good reason to believe that their computations aren’t reliant on quantum effects.

    Individual neurons are complicated, but there’s no reason to believe the exact way they’re complicated matters. They’re complicated because they have to be self-replicating and self-repairing.
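
    And to be concrete about the “write their own training data” point above, here’s a rough structural sketch of that loop. Every name in it (DummyModel, generate, score, finetune) is a made-up placeholder, not a real library API, and real pipelines add heavy filtering and human review - it’s only meant to show the shape of the idea.

    ```python
    # Structural sketch of a self-training loop: the model writes candidate
    # answers, rates them, and is finetuned on the ones it kept. Repeating
    # the loop is the sense in which the model "improves itself".
    class DummyModel:
        def generate(self, prompt):
            return f"answer to: {prompt}"   # stand-in for real model output

        def score(self, prompt, answer):
            return 0.9                      # stand-in for a judge/reward model

        def finetune(self, pairs):
            print(f"finetuning on {len(pairs)} self-generated examples")
            return self                     # stand-in for the updated model

    def self_improvement_round(model, prompts, threshold=0.8):
        kept = []
        for prompt in prompts:
            answer = model.generate(prompt)               # model writes an answer
            if model.score(prompt, answer) >= threshold:  # model rates its own answer
                kept.append((prompt, answer))             # keep only high scorers
        return model.finetune(kept)                       # train on its own output

    model = self_improvement_round(DummyModel(), ["improve your own code"])
    ```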