15 mph is plenty fast enough to belong in the bike lane. You’re good bro.
I guess I just don’t really know what feature-rich means in this context, but being proprietary, not fully cross-platform, and banned on most private trackers seem like huge downsides for power users compared to customization, built-in search, and integrated chat.
I get that this chart probably wasn’t made with people like me in mind, though.
Yeah, seems weird that a simple “it downloads torrents” client gets a D. It gets the job done, is easy to figure out, and doesn’t fuck about with features I would never touch. Maybe that’s not enough for a power user, but for me it’s exactly what I want.
(But then why is Tixati in B? It seems to have mostly downsides.)
Wdym? Pregnancy is the original lootbox, never know what kind of kids you’re gonna get.
Outside of the cost of hardware, it’s just power. Running these sorts of computations is getting more efficient, but the sheer amount of computation means it’s going to take a lot of electricity to run.
they know it’s impossible to do
There is some research into ML data deletion, and it’s been shown to be possible, but maybe not at larger scales, and maybe not in a way that’s actually feasible compared to retraining.
While you are overall correct, there is still a sort of “black box” effect going on. While we understand the mechanics of how the network architecture works, the actual information encoded by training is, as you said, not stored in a way that is easily accessible or editable by a human.
I am not sure if this is what OP meant by it, but it kinda fits and I wanted to add a bit of clarification. Relatedly, the easiest way to uncook (or unscramble) an egg is to feed it to a chicken, which amounts to basically retraining a model.
Always has been. The laws are there to incentivize good behavior, but when the cost of complying is larger than the projected cost of not complying, they will ignore it and deal with the consequences. We regular folk generally can’t afford not to comply (except for all the low-stakes laws you break on a day-to-day basis), but when you have money to burn and a lot is at stake, the decision becomes more complicated.
The tech part of that is that we don’t really even know if removing data from these sorts of models is possible in the first place. The only way to remove it is to throw away the old one and make a new one without the offending data (aka retraining the model). This is similar to how you can’t get a person to forget something without some really drastic measures, and even then, how do you know they forgot it? That information may still inform their decisions; they might just not be aware of it, or feign ignorance. The only real way to be sure is to scrap the person. Given how insanely costly it can be to retrain a model, the laws start looking like “necessary operating costs” instead of absolute rules.
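To make the “only guaranteed forgetting is retraining” point concrete, here is a toy sketch where the “model” is just the mean of its training data. All names and numbers are illustrative, not any real unlearning API; the point is only that the clean removal is a full rebuild without the offending record.

```python
# Toy stand-in for a model: "training" just computes the mean of the data.
def train(data):
    return sum(data) / len(data)

data = [2.0, 4.0, 9.0]
model = train(data)  # 9.0's influence is baked into this number

# You can't subtract 9.0's influence out of the finished "model" directly;
# the guaranteed way to delete it is to retrain without that record.
scrubbed = [x for x in data if x != 9.0]
model_after_deletion = train(scrubbed)

print(model, model_after_deletion)  # 5.0 3.0
```

Even in this trivial case the old model value silently encodes the deleted point; in a real network the same influence is smeared across billions of weights, which is why retraining is the only sure-fire deletion.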
The real AI, now renamed AGI, is still very far
The idea and name of AGI are not new, and AI has not been used to refer to AGI since perhaps the very earliest days of AI research, when no one knew how hard it actually was. I would argue that we are back in those times, though, since despite learning so much over the years we have no idea how hard AGI is going to be. As of right now, the only correct answer to “how far away is AGI?” is “I don’t know.”
Five years ago, the idea that the Turing test would be so effortlessly shattered was considered a complete impossibility. AI researchers knew it was a bad test for AGI, but actually creating an AI agent that could pass it without tricks still seemed surely at least 10-20 years out. Now, my home computer can run a model that talks like a human.
Being able to talk like a human used to be what the layperson would consider AI; now it’s not even AI, it’s just crunching numbers. This has been happening throughout the entire history of the field. You aren’t going to change this person’s mind. This bullshit of discounting the advancements in AI has been there from the start; it’s so ubiquitous that it has a name: the AI effect.
The feature is often not very well advertised; a pair of Bluetooth noise-cancelling headphones I am looking at doesn’t seem to list it prominently despite it being, imo, a pretty important feature. Searching by letter might not give you an accurate idea of what does and does not support multipoint.
Bought a pair from Zenni some 3 years ago for literally pennies ($15 for the frames, $10 for lenses). I have since carelessly snapped them (but keep unnaturally elongating their lifespan with super glue). Gonna buy my next pair from Zenni. I swear by them now for how cheap and durable they are; I rarely had a pair of glasses survive 2 years before, and these were so much cheaper.
They also have regular people levels of quality, but I’m poor so it’s nice they have shit for people like me too.
To be honest, I too headed straight for the comments without reading the article. But I didn’t comment till I read it. It’s also not technically a crab, despite being called one.
You are in a “bitch about reddit” community complaining about people bitching about reddit. Bruh, this is why this place was created, to bitch about reddit.
Your second point is valid. Also, this feature is to prevent spam from newly created accounts, so why is this even worth complaining about? New accounts shouldn’t be trusted as much as well-established accounts, and it’s generally not that difficult to get enough karma just by commenting. For humans it does not pose that big a barrier to entry; for bots it poses some.
I can’t log in to this. Is this intentional or is something broken?
We don’t understand it because no one designed it. We designed how to train a NN, and we designed some parts of the structure, but not the individual parts inside. The largest LLMs have upwards of 70 billion parameters, each an individual number that training can tweak. There are just too many of them to understand what any individual one does, and since we just let an optimization algorithm do its optimizing, we can’t really even know what groups of them do.
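A quick back-of-the-envelope shows how fast the parameter count outruns human comprehension, even for a tiny network. The layer sizes below are made up for illustration; each fully connected layer contributes (inputs + 1 bias) × outputs individual tweakable numbers.

```python
# Parameter count of a toy fully connected network:
# each layer has (n_in + 1 bias) * n_out weights the optimizer tweaks.
def mlp_param_count(layer_sizes):
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A small image-classifier-sized net, nowhere near LLM scale:
print(mlp_param_count([784, 512, 256, 10]))  # 535818
```

Even this toy has over half a million parameters; scale that to 70 billion and inspecting individual weights is hopeless, which is why the “black box” framing sticks.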
We can get around this: we can study it like we do the brain. Instead of looking at what an individual part does, group them together and figure out how the groups influence things (AI explainability), or even get a different NN to look at it and generate an explanation (post hoc rationale generation). But that’s not really the same as actually understanding what it is doing under the hood. What it is doing under the hood is more or less fundamentally unknowable; there is just too much information, and it’s not organized well enough for us to understand. Maybe one day we will be able to abstract what is going on in there and organize it in an understandable manner, but not yet.
One thing to note is that making an industry more efficient (like translating, which GPT is really good at: much better than Google Translate, though not necessarily better than existing professional tools) comes with a decrease in the number of jobs. Tech doesn’t have to eliminate the human portion, but if it makes even one human twice as efficient at their job, that’s half the humans you need doing that job for the same amount of work output.
That being said this is not a great infographic for this topic.
Who else would post such stuff? Regular users, other mods? If a sub has no mods the admins have to step in. So this distinction you are making serves no purpose.
This bot account is actually making recent posts; why the fuck is there a pic of this months-old one?
Well, the record high temperatures are what cause the forest fires, so we do have to take that into account. And the radiant heat the fire gives off dissipates with the inverse square law, so that limits its contribution. Really, it seems the only major contributing factors to the increased heat, other than the effects of the already high ambient temperature and thus the decreased apparent humidity, are the excitation of the air molecules as they are transformed from elemental oxygen and plant matter into hydrogen hydroxide and carbon dioxide, along with other molecules due to incomplete combustion and contaminants. Overall I think a safe bet would be 2.
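For the inverse-square point, a point-source approximation makes the falloff concrete: intensity is I(r) = P / (4πr²), so doubling your distance from the fire cuts the radiant heat you receive to a quarter. The power figure here is made up for illustration.

```python
import math

# Radiant intensity of an idealized point source at distance r:
# I(r) = P / (4 * pi * r^2)
def intensity(power_watts, r_meters):
    return power_watts / (4 * math.pi * r_meters ** 2)

# Doubling the distance quarters the received intensity:
ratio = intensity(1000.0, 5.0) / intensity(1000.0, 10.0)
print(ratio)  # ~4.0
```

A real fire front is an extended source, so the falloff near it is gentler than 1/r², but the point-source case shows why radiant heat stops mattering quickly with distance.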
Idk about anyone else, but it’s a bit long. Up to q10 I took it seriously and actually looked for AI-gen artifacts (and got all of them up to 10 correct), and then I just sorta winged it, guessed, and got like 50% of them right. OP, if you are going to use this data anywhere, I would first recommend getting all of your sources together, as some of those did not have a good source, but also maybe watch out for people doing what I did: getting tired of the task and just wanting to see how well they did on the part they tried. I got like 15/20.
For anyone wanting to get good at seeing the tells, focus on discontinuities across edges: the number or intensity of wrinkles across the edge of eyeglasses, or the positioning of a railing behind a subject (especially if there is a corner hidden from view: you can imagine where it is, the image gen cannot). Another tell is looking for a noisy mess where you expect noisy but organized detail: cross-hatching trips it up, especially in boundary cases where two hatches meet, where two trees or other organic-looking things come together, or in other lines that have a very specific way of resolving when they meet. Finally, look for real-life objects that are slightly out of proportion; these things are trained on drawn images, photos, and everything else, and thus cross those influences a lot more than a human artist would. The eyes on the Lego figures gave it away, though that one also exhibits the discontinuity across edges with the woman’s scarf.