Lemmy.world is temporarily disabling open signups and moving to an application-required signup process, due to ongoing issues with malicious bot accounts.
We know this is a major step to take, but we believe that it’s the right one for both us and our community right now.
We’re working on a better long-term technical solution to these bots, but that will take time to create, test, and verify that it doesn’t cause any problems with federation or with how people use our site. We’d rather make sure we get it right than end up with a broken site.
We’re making this change on 28 Aug 2023, and don’t have a specific timeline for how long registrations will require an application, but we will post an update once our new anti-abuse measures are in place and working.
Take care, LW Team
Yeah, I’ve had that unpleasant encounter several times by now…
I’m guessing they’re not even flagging that shit as NSFW? I’ve been using Liftoff and have NSFW stuff hidden. I haven’t run into any of it yet, but that’s fucked up. Hopefully it gets under control with this.
Maybe mods of each section can turn on manual approvals of submissions?
Manually approving submissions would be even more work. And that shit’s being posted everywhere.

And no, the ones I had an unpleasant encounter with weren’t flaired NSFW.
Isn’t there a tool (possibly free) by Google, I think, that detects abusive material like this?
https://protectingchildren.google/intl/en_uk/#introduction
Eh… I don’t think we should give up our privacy because one or two bastards are doing that shit…
Images posted to a public, federated platform should not count as private, in my opinion. When you upload something here, every federated server instantly gets a hold of it. What privacy is there to give up, then?
I agree, everything on Lemmy is public for all to see; that’s the nature of the Fediverse. Nothing here is really private, even vote counts, since admins of any self-hosted server can see them, and Kbin reveals them publicly for all.
Even DMs aren’t private, which is why it nags you to use Matrix for secure DMs.
Until there’s something in place to automate blocking it, manual approval might just be the only way to deal with it for now. Communities can add more moderators.
Manual approval would mean that mods have to see all that shit in order to block it… That’s not the right solution, imo.