- cross-posted to:
- technology@beehaw.org
Put something in robots.txt that isn’t supposed to be hit and is hard to hit by non-robots. Log and ban all IPs that hit it.
Imperfect, but can’t think of a better solution.
Good old honeytrap. I’m not sure, but I think that it’s doable.
Have a honeytrap page somewhere in your website. Make sure that legit users won’t access it. Disallow crawling the honeytrap page through robots.txt.
Then if some crawler still accesses it, you could record+ban it as you said… or you could be even nastier and let it do so. Fill the honeytrap page with poison - nonsensical text that would look like something that humans would write.
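A minimal sketch of that record-and-ban idea, using only the Python standard library (the trap path and responses are made up for illustration; the same Disallow’d path would be listed in robots.txt):

```python
# Hypothetical honeytrap sketch: the trap path below must also appear under
# "Disallow:" in robots.txt, so only rule-ignoring crawlers ever hit it.
from http.server import BaseHTTPRequestHandler, HTTPServer

TRAP_PATH = "/honeytrap.html"  # made-up path for illustration
BANNED = set()                 # IPs that ignored robots.txt

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]
        if self.path == TRAP_PATH:
            BANNED.add(ip)  # crawler ignored robots.txt: record and ban it
        if ip in BANNED:
            self.send_error(403, "Banned")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"a normal page\n")

    def log_message(self, *args):
        pass  # keep the demo quiet
```

In practice you’d persist the ban list and do the actual blocking at the firewall or reverse proxy, not in the application.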
I think I used to do something similar with email spam traps. Not sure if it’s still around, but basically you could help build NaCL lists by posting an email address on your website somewhere that was visible in the source code but not to normal users, like in a div positioned way off the left side of the screen.
Anyway, spammers that do regular expression searches for email addresses would email it and get their IPs added to naughty lists.
I’d love to see something similar with robots.
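For illustration, the kind of naive regex scrape that trap catches might look something like this (the pattern and address are just examples):

```python
# A crude email scraper of the sort spammers run: it reads raw HTML, so it
# happily picks up addresses hidden off-screen with CSS -- which is exactly
# what the spam-trap technique relies on.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrape_emails(html):
    return EMAIL_RE.findall(html)
```

Any address the scraper finds that a human couldn’t have seen is, by construction, a bot hit.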
Yup, it’s the same approach as email spam traps. Except the naughty list, but… holy fuck a shareable bot IP list is an amazing addition, it would increase the damage to those web crawling businesses.
robots.txt is purely textual; you can’t run JavaScript or log anything from it. Plus, anyone who doesn’t intend to follow robots.txt wouldn’t query it in the first place.
If it doesn’t get queried that’s the fault of the web scraper. You don’t need JS built into the robots.txt file either. Just add a line like:
Disallow: /here-there-be-dragons.html
Any client that hits that page (and maybe doesn’t pass a captcha check) gets banned. Or even better, they get a long stream of nonsense.
Nice idea! Better use /dev/urandom though, as that is non-blocking. See here.
That was really interesting. I always used urandom out of habit and wondered what the difference was.
I wonder if Nginx would just load random into memory until the kernel OOM kills it.
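A sketch of the nonsense-stream idea in Python rather than Nginx (the function name and chunk size are arbitrary); `os.urandom` draws from the kernel’s non-blocking pool, so the server never stalls waiting for entropy:

```python
# Endless stream of random bytes to feed a trapped crawler. os.urandom is
# non-blocking, unlike a read of /dev/random on older kernels.
import os

def dragon_stream(chunk_size=4096):
    """Yield random chunks forever; the caller decides when to hang up."""
    while True:
        yield os.urandom(chunk_size)
```

Hooked up to a chunked response body, this streams garbage for as long as the bot holds the connection open, while the server only ever buffers one chunk at a time (so no OOM on the serving side).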
I actually love the data-poisoning approach. I think that sort of strategy is going to be an unfortunately necessary part of the future of the web.
As unscrupulous AI companies crawl for more and more data, the basic social contract of the web is falling apart.
Honestly it seems like in all aspects of society the social contract is being ignored these days, that’s why things seem so much worse now.
It’s abuse, plain and simple.
Governments could do something about it, if they weren’t overwhelmed by bullshit from bullshit generators instead, and led by people driven by their personal wealth.
Well the trump era has shown that ignoring social contracts and straight up crime are only met with profit and slavish devotion from a huge community of dipshits. So. Y’know.
We need laws mandating respect of robots.txt. This is what happens when you don’t codify stuff.
AI companies will probably get a free pass to ignore robots.txt even if it were enforced by law. That’s what they’re trying to do with copyright, and it looks likely that they’ll get away with it.
It’s a bad solution to the problem anyway. If we’re going to legally mandate a solution, I want to take the opportunity to come up with an actually better fix than the hacky solution that is robots.txt.
Wow, I’m shocked! Just like how OpenAI preached “privacy and ethics”, then went dead silent on data hoarding and scraping, and privatized their stolen scraped data. If they insist their data collection is private, then it needs regular external audits by strict data-privacy firms, just like they do with security.
I would be shocked if any big corpo actually gave a shit about it, AI or no AI.
if exists("/robots.txt"): no it fucking doesn't
Robots.txt is in theory meant to be there so that web crawlers don’t waste their time traversing a website in an inefficient way. It’s there to help, not hinder them. There is a social contract being broken here and in the long term it will have a negative impact on the web.
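For reference, a robots.txt is just a plain-text file at the site root. An illustrative example (paths and bot names made up; `Crawl-delay` is non-standard but widely recognized):

```
User-agent: *
Disallow: /admin/
Disallow: /search
Crawl-delay: 10

User-agent: BadBot
Disallow: /
```

Nothing enforces any of it; a crawler that ignores the file loses only the efficiency hints it was offered.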
hmm, i thought websites just blocked crawler traffic directly? I know one site in particular has rules about it, and will even go so far as to ban you permanently if you continually ignore them.
Detecting crawlers can be easier said than done 🙁
i mean yeah, but at a certain point you just have to accept that it’s going to be crawled. The obviously negligent ones are easy to block.
TIL that robots.txt is a thing
what is it?
Also, by the way, there’s the basic social contract of not working towards triggering an intelligence explosion that would likely replace all biological life on Earth with computronium, but who’s counting? :)
That would be a danger if real AI existed. We are very far away from that and what is being called “AI” today (which is advanced ML) is not the path to actual AI. So don’t worry, we’re not heading for the singularity.
I request sources :)
https://www.lifewire.com/strong-ai-vs-weak-ai-7508012
Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, strong AI examples exist in sci-fi movies.
Weak AI is easily identified by its limitations, but strong AI remains theoretical since it should have few (if any) limitations.
https://en.m.wikipedia.org/wiki/Artificial_general_intelligence
As of 2023, complete forms of AGI remain speculative.
Boucher, Philip (March 2019). How artificial intelligence works
Today’s AI is powerful and useful, but remains far from speculated AGI or ASI.
https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf
AGI represents a level of power that remains firmly in the realm of speculative fiction as on date
Ah, I understand you now. You don’t believe we’re close to AGI. I don’t know what to tell you. We’re moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You’ve seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.
This is like saying putting logs on a fire is “one or two breakthroughs away” from nuclear fusion.
LLMs do not have anything in common with intelligence. They do not resemble intelligence. There is no path from that nonsense to intelligence. It’s a dead end, and a bad one.
See the sources above and many more. We don’t need one or two breakthroughs, we need a complete paradigm shift. We don’t even know where to start with for AGI. There’s a bunch of research, but nothing really came out of it yet. Weak AI has made impressive bounds in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve. The two are completely separate avenues of research. Weak AI is still advanced algorithms. You can’t get AGI with just code. We’ll need a completely new type of hardware for it.
Before Deep Learning recently shifted the AI computing paradigm, I would have written exactly what you wrote. But as of late, the opinion that we need yet another type of hardware to surpass human intelligence seems increasingly rare. Multimodal generative AI is already pretty general. To count as AGI for you, you would like to see the addition of continuous learning and agentification? (Or are you looking for “consciousness”?)
That said, I’m all for a new paradigm, and favor Russell’s “provably beneficial AI” approach!
Deep learning did not shift any paradigm. It’s just more advanced programming. But gen AI is not intelligence. It’s just really well trained ML. ChatGPT can generate text that looks true and relevant. And that’s its goal. It doesn’t have to be true or relevant, it just has to look convincing. And it does. But there’s no form of intelligence at play there. It’s just advanced ML models taking an input and guessing the most likely output.
Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines
What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don’t actually understand the output they are providing, that’s why they so often produce self-contradictory results. And the algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won’t change the core of what gen AI really is. You can’t teach ChatGPT how to play chess or a new language or music. The same model can be trained to do one of those tasks instead of chatting, but that’s not how intelligence works.
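The “guessing the most likely output” point can be made concrete with a toy bigram model (a deliberately crude stand-in, nothing like a real LLM): it produces locally plausible word sequences with zero understanding of what they mean.

```python
# Toy next-word "predictor": picks the next word purely from observed
# frequencies in the training text. Locally fluent, globally meaningless --
# the caricature version of statistical text generation.
import random
from collections import defaultdict

def train_bigrams(text):
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=8, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Everything it emits was seen in training; it just chains likely continuations together, which is the (greatly simplified) shape of the criticism being made here.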
Ah, AI doesn’t pose a danger in that way. Its danger is in replacing jobs, people getting fired because of AI, etc.
All progress comes with old jobs becoming obsolete and new jobs being created. It’s just natural.
But AI is not going to replace any skilled professionals soon. It’s a great tool to add to professionals’ arsenal, but non-professionals who use it to completely replace hiring a professional will get what they pay for (and those people would never have actually paid for a skilled professional in the first place; they’d have hired the cheapest outsourced wannabe they could find, after first trying to convince a professional that exposure is worth more than money).
It has replaced content writers, and is replacing digital artists and programmers. In a sense they fire inexperienced ones because AI speeds up those with more experience.
Any type of content generated by AI should be reviewed and polished by a professional. If you’re putting raw AI output out there directly then you don’t care enough about the quality of your product.
For example, there are tons of nonsensical articles on the internet that were obviously generated by AI and their sole purpose is to crowd search results and generate traffic. The content writers those replaced were paid $1/article or less (I work in the freelancing business and I know these types of jobs). Not people with any actual training in content writing.
But besides the tons of prompt crafting and other similar AI support jobs now flooding the market, there’s also huge investment in hiring highly skilled engineers to launch various AI related product while the hype is high.
So overall a ton of badly paid jobs were lost and a lot of better paid jobs were created.
The worst part will be when the hype dies and the new trend comes along. Entire AI teams will be laid off to make room for others.
Your worry at least has possible solutions, such as a global VAT funding UBI.
Yeah, I’m not that much for UBI, and I don’t see anyone working towards a global VAT. My point was that the worry about AI destroying humanity isn’t realistic; it’s just sci-fi.
Seven years ago I would have told you that GPT-4 was sci-fi, and I expect you would have said the same, as would have most every AI researcher. The deep learning revolution came as a shock to most. We don’t know when the next breakthrough towards agentification will be, but given the funding now, we should expect it soon. Anyways, if you’re ever interested to learn more about unsolved fundamental AI safety problems, the book “Human Compatible” by Stuart Russell is excellent. Also “Uncontrollable” by Darren McKee just came out (I haven’t read it yet) and is said to be a great introduction to the bigger fundamental risks. A lot to think about; just saying I wouldn’t be quick to dismiss it. Cheers.
I don’t think glorified predictive text is posing any real danger to all life on Earth.
Until we weave consciousness with machines we should be good.
good. robots.txt was always a bad idea
Like so many terrible ideas, it worked flawlessly for generations
🤣🤣🤣🤣🤣🤣🤣 “robots.txt is a social contract” 🤣🤣🤣🤣🤣🤣🤣 🤡
If you have something to say, actually explain it instead of the obnoxious emoji spam.
It’s completely off-topic, but you know 4chan filters? Like, replacing “fam” with “senpai” and stuff like this?
So. It would be damn great if Lemmy had something similar. Except that it would replace emojis, “lol” and “lmao” with “I’m braindead.”
That would be amazing.
Removed by mod
No laws to govern so they can do anything they want. Blame boomer politicians not the companies.
¿Por qué no los dos?
Fhdj glgllf d’‘’‘’'×÷π•=|¶ fkssb
No Idea why you’re getting downvotes, in my opinion it was very eloquently said