This implies that servers should focus on a topic, like news, tech, or extreme sports, rather than trying to be the hub for all topics the way Reddit and others are?
Read Dragon's Egg. It's a little dated but a great story.
Naw, terrorists put jalapeños on their Hawaiian pizza. This is just a normal pizza lover.
I second this. It's horrible and sad that so many people are evil, or more to the point, can be led to an evil position which they'll embrace and defend with all their intellect.
One of the scariest things about this shift is the realization that this is how countries do horrible things. A mob of excited people can make group decisions, and will follow horrible leaders because, well, because people can be driven by emotions, group-think, and localized social norms.
If the current Republican Party maintains its cohesion and membership, I think what has been a generational lapse into authoritarian insanity could become a permanent shift in the American psyche. That's even more terrifying than seeing so many people vote for Trump.
I think FreeLikeGNU has a point here… the happier America as described has generally only been a reality for a subset of the population. Can we really suggest that is/was the ‘character [of] America’ as a whole?
The whole "MAGA" thing feels related to this point. It's like a large group of Americans feel the oppression, fear, and lack of optimism and, in their anger and frustration, have embraced a view that what made America great was the division and exploitation rather than the optimism.
I’d argue causality — that they were purposefully led to that view by exploitative fuckwad Republican leadership that cared about Party more than the country and who used the fear, and exploited the crisis, to gain and maintain power and now don’t want to give it up. But we don’t really need to understand why or who led that change to also step back and be sad that the change happened.
I think the legal teams are out of control and have a vendetta against the tech because ChatGPT passed the bar exam. I'm kinda sick of their blanket bans paired with "but learn to use it" BS.
This is a stupid policy statement. The tech will be used by teams the same way autocorrect or grammar checkers are used, simply because it'll be baked into all their editing tools within a couple of years. Similarly, they're going to keep using licensed image libraries and will transition to AI-generated images as those licensed libraries transition to the tech.
Not just energy density but also how quickly we can refill it.
And to make matters worse, I'm not spending free moments learning more and more computer science, but am instead reading internet stuff about materials science. I can enjoy my wide-ranging brain, but I agree that sometimes I wish I was more focused.
“stock will become worthless”
I’m thinking the opposite might happen.
If big companies succeed in capturing the knowledge workers' market share and transferring all those salaries into their own profits, then it will be reflected in the stock prices of those big companies. People, mostly currently rich people, who own that stock will benefit.
Same as it ever was for other forms of automation or job outsourcing. Why would this be any different?
I will lose sleep over the nut jobs that fantasize about shooting up their local town police force to "rescue Trump".
These are the same types of people that caused the Civil War. I don't think the great sort is at that point yet, but it worries me that they are willing to destroy the country to (try and) maintain their position of power over others.
I don’t see how we defuse and resolve those conflicts. They’ve been going on for a long time.
How does this play out? I suppose the three options are pure AI content, mixed, or pure human. At a guess the unions/guilds will do their best to nix mixed. So let’s assume the extremes.
It'll be interesting to see if consumers are willing to watch mostly AI-generated stuff and/or pay extra to see live humans act out stuff written by humans.
Did you say, “Size doesn’t matter”?
(FYI - I hear this excuse all the time at a large company. Somehow our complexity and scale is always an excuse people reach toward. And, as you say, our job in infosec is to shut that whining down.)
Good video.
In summary we should leverage the strengths of LLMs (language stuff, complex thinking) and leverage the strengths of knowledge graphs for facts.
I think the engineering hurdle will be in getting the LLMs to use knowledge graphs effectively when needed and not when pure language is a better option. His suggestion of “it’s complicated” could be a good signal for that.
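To make that routing idea concrete, here's a minimal sketch (all names and the toy graph are mine, not from the video): factual queries are answered from a knowledge graph when a grounded fact exists, and everything else falls through to the LLM, whose "it's complicated" reply doubles as a signal that the graph is missing coverage.

```python
# Toy knowledge graph: (subject, relation) -> object. Illustrative only.
KNOWLEDGE_GRAPH = {
    ("Paris", "capital_of"): "France",
    ("water", "boiling_point_c"): "100",
}

def llm_answer(question: str) -> str:
    """Stand-in for a real LLM call; here it just signals uncertainty."""
    return "it's complicated"

def answer(question: str, subject: str, relation: str) -> str:
    """Prefer a grounded fact from the graph; otherwise fall back to the LLM."""
    fact = KNOWLEDGE_GRAPH.get((subject, relation))
    if fact is not None:
        return fact
    # No grounded fact available: let the LLM handle it. An
    # "it's complicated" reply marks a gap the graph could fill.
    return llm_answer(question)

print(answer("What country is Paris the capital of?", "Paris", "capital_of"))
print(answer("Is free will real?", "free will", "is_real"))
```

The real engineering work would be in deciding the subject/relation extraction and the fallback policy, which this stub waves away.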
if the hate fits… wear it
I’m looking forward to research on that.
I have an impression that "people are even nastier than before" has been a result of Trump-era politics, which reveled in nastiness, and which itself appeared to be a pushback from nasty people about Obama being president. Basically, it's been a growing divide and was made a lot worse when such a prominent political group doubled down on divisiveness as a tribal identity.
I think it predated covid, which certainly made things worse, but I don’t really know what the cause was.
Have you considered that the full text is:
“A well regulated Militia, being necessary to the child killing and fascist racist pig State, the right of the people to keep and bear Arms, shall not be infringed.”
The conservative deep state keeps that definition of "free" on the DL.
/j since humor is hard online
does this ban block you from (continuing) to change your existing comments?
If a person writes a fanfic Harry Potter 8, it isn't a problem until they try to sell it or distribute it widely. I think where the legal issues get sticky here is in who caused a particular AI-generated Harry Potter 8 to be written.
If the AI model attempts to block this behavior, with contract stipulations and guardrails, and if it isn't advertised as "a Harry Potter generator" but instead as a general-purpose tool, then reasonably the legal liability might be on the user that decides to do this, not on the tool that makes such behavior possible.
Hypothetically, what if an AI was trained up that never read Harry Potter, but it's pretty darn capable, and I feed the entire Harry Potter novel(s) into it as context in my prompt and then ask it to generate an eighth story — is the tool at fault, or am I?
LLMs are just regurgitating shit AND that’s most of what we do all day too.
(Speaking from a job as an innovator in a high tech field. Most of us are just doing engineering w/ concepts invented elsewhere. )