• 1 Post
  • 86 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • As always, the problem is our economic system that has funneled every gain and advance to the benefit of the few. The speed of this change will make it impossible to ignore the need for a new system. If it wasn’t for AI, we would just boil the frog like always. But let’s remember the real issue.

    If a free food-generating machine is seen as evil for taking jobs, the machine isn’t the real issue. Stop protesting AI; start protesting the affluent. We would still be suffering under them even if we had destroyed the loom.



  • Perhaps instead we could just restructure our epistemically confabulated reality in a way that doesn’t inevitably lead to unnecessary conflict due to diverging models that haven’t grown the necessary priors to peacefully allow comprehension and the ability to exist simultaneously.

    breath

    We are finally coming to comprehend how our brains work, and how intelligent systems generally work at any scale, in any ecosystem. Subconsciously enacted social systems included.

    We’re seeing developments that make me extremely optimistic, even if everything else is currently on fire. We just need a few more years without self-focused turds blowing up the world.


  • Peanut@sopuli.xyz to Technology@lemmy.world · Generative A.I - We Aren’t Ready.
    English · 10 up · 1 down · edited · 4 months ago

    AI or no AI, the solution needs to be social restructuring. People underestimate how much society can actively change, because the current system is a self-sustaining set of bubbles that have naturally grown resilient to perturbations.

    The few people who actually care to solve the world’s problems are figuring out how our current systems inevitably fail, and how to avoid these outcomes.

    However, the best bet for restructuring would be a distributed intelligent agent system. I could get into recent papers on confirmation bias and the confabulatory nature of thought at the personal, group, and societal level.

    Turns out we are too good at going with the flow, even when the structure we are standing on is built over highly entrenched vestigial confabulations that no longer help.

    Words, concepts, and meanings change heavily depending on the model interpreting them. The more divergent the models, the harder that communication gap is to bridge.

    A distributed intelligent system could not only enable a complete social restructuring with both autonomy and altruism guaranteed, but also provide an overarching connection between the different models at every scale, capable of properly interpreting the different views and conveying them more accurately than we could ever have managed with model projection and the empathy barrier.



  • I definitely agree that copyright is a good half-century overdue for an update. The Disney company and its contemporaries should never have been allowed the dominance and copyright extensions that give them what feels like ownership of most global artistic output. They don’t need AI; they have the money and interns to create whatever boardroom-adjusted art they need to continue their dominance.

    Honestly, I think the faster AI happens, the more likely it is that we find a way out of the social and economic hierarchy that feels one step from anarcho-capitalist aristocracy.

    I just hope we can find the change without riots.


  • And you violate copyright when you think about copyrighted things alone at night.

    I violate copyright when I draw Mario and don’t sell it to anybody.

    Or these are dumb stretches of what copyright is and how it should be applied.

    The reasoning in this article is dumb and all over the place.

    Seems like Gary Marcus being Gary Marcus.

    I’ve already seen OpenAI calling out some of the bullshit specifically noted in this article. That doesn’t matter though; the damage is done, and people WANT to believe AI is terrible in every way.

    Everyone is just dead set on climbing onto the Gary Marcus unreasonable-AI-hate train no matter what.


  • I conflate these things because they come from the same intentional source. I associate the copyright-chasing lawyers with the brands that own them; it is just a more generalized example.

    Also, an intern who can give you a song’s lyrics was trained on that data. Any sufficiently advanced future system is largely the same, unless it is just accessing a database or index, like a web search.

    Copyright itself is already a terrible mess that largely serves brands who can afford lawyers to harass or contest infringements. That is especially apparent after companies like Disney have all but murdered the public domain as a concept; see the Mickey Mouse Protection Act and other related legislation.

    This snowballs into an economy where the Disney company and similarly benefited brands can hold on to ancient copyrights and use their standing value to own and control the development and markets of new intellectual properties.

    Now, a neural net trained on copyrighted material can reference that memory at least as accurately as an intern pulling from their own memory, unless either is accessing a database to pull the information. To me, suing on that basis ultimately follows logic that would have copyrighted material removed from our own stochastic memory, since it treats high-dimensional information storage as copyright infringement the moment anyone draws on that information.

    Ultimately, I believe our current system of copyright is entirely incompatible with future technologies, and it could lead to some scary arguments and actions from the overbearing oligarchy. To argue in favour of these actions is to argue that artificial intelligence should never be allowed to learn as humans do. Given our need for this technology to survive the near future as a species, or at least to minimize excessive human suffering, I think the ultimate cost of pandering to these companies may be indescribably horrid.


  • Music publishers being sue-happy in the face of any new technological development? You don’t say.

    If an intern gives you some song lyrics on demand, do they sue the parents?

    Do we develop all future A.I. technology only when it can completely eschew copyrighted material from its comprehension?

    "I am sorry, I’m not allowed to refer to the brand name you are brandishing. Please buy our brand allowance package #35 for any action or communication regarding this brand content. "

    I dream of a future when we think of the benefit of humanity over the maintenance of our owners’ authoritarian control.


  • People like to push the negative human qualities onto theoretical future A.I.

    There’s no reason to assume that it will be unreasonably selfish, egotistical, impatient, or anything else you expect from most humans.

    Rather, if it is more intelligent than humans from most perspectives, it will likely be able to understand more levels of nuance in interactions where humans fall back on monkeybrain heuristics that are damaging at every level.

    There’s also the paradox that keeps the most ethically qualified people away from positions of power, as they have no desire to dominate and demand or control others.

    I absolutely agree with you.




  • I’m no fan of Meta, but a reminder that they are one of the better actors right now at keeping their AI developments open and available. This is thanks to Yann LeCun and other researchers pressuring Meta to keep their work on the subject more open.

    Are we looking to punish them for making their work accessible?

    Not to mention how important something like joint embedding predictive architecture could be for the future of alignment and real-world training/learning. Maybe go after other foundation-model developers to be more open, if we’re complaining about the inevitably public nature of some information within the mountainous datasets being used.

    Although I’m still of the mindset that the model’s intent matters more than the use of openly available data in training. I.e., I’ve been shouting about models built specifically to predict and manipulate user interactions and habits for the better part of a decade, for your “customized advertisements” and the like.

    The general public and media interaction on the topic this past year has been insufferably out of touch.



  • So you also equate French nobility with nazi victims?

    Nobody is being called out for their race, their sexual preferences, or the body they were given at birth. The people being targeted here are defined only as the ones who have plundered the world to fill their pockets regardless of the cost. It is the very act of power-hungry, despotic rule that leads to this call. Remember that every family member and friend we lose to the meat grinder could have lived happily if not for the ones who deem us unworthy of human treatment. Remember every home burnt to the ground as the fires worsen, because they refuse to let environmental care affect their overflowing coffers.

    How many more people should die and suffer for their wants?

    This is not a comparable situation, you silly person.



  • Can we talk more about deceptive patterns? I had my computer go down recently, and I was reminded just how bad mobile is.

    Pick a game at random and I’ll show you a dozen direct and intentional manipulations designed to get you into the habits and environment they want for optimal resource extraction. I miss when games were an art form rather than a professionally designed set of habit-adjusting manipulations that can annoy you into the right mindset to spend money. Not to mention the advertisements, which range from absolute fabrication to outright scam.



  • So we kill open-source models while proprietary data models like Adobe’s are fine, so they can be the only resource and continue rent-seeking while independent artists eat dirt.

    Whether or not the model learned from my art is probably not going to affect me in any way, shape, or form, unless I’m worried about being used as a prompt so people can use me as a compass while directing their new image aesthetic. Disney/Warner could already hire someone to do that 100% legally, so it’s just the other peasants I’m worried about. I don’t think the peasants are the problem when it comes to the wellbeing and support of artists.


  • I mean, chess is already obsolete, but it’s also more popular than ever.

    To me there is extreme value in being able to choose your endeavor vs being forced into something agonizing just to survive.

    When everything is obsolete, people can create entire worlds and experiences using AI for themselves and for others who may care to experience it.

    The threat of needing to find something to do is one of the most frustratingly privileged concepts.

    I don’t need anything to do. I just want to be alive without also being exhausted, in pain, and chastised by customers despite working my hardest.

    I’d take the struggle of finding an activity over worrying about whichever coworker is crying in the walk-in because just surviving requires more from them than they are capable of.

    Being obsoleted is fine by me, as long as we have the power redistribution necessary to keep people alive and happy.