And hopefully this will allow them to follow the 80/20 rule where the AI can do 80% of the grunt work and the human can concentrate on the 20% creative part.
Looks like maybe the note is down? I just get an empty list :(
There are lots of alternative, free and open source syncing solutions. I use syncthing myself.
But the main result is achieved anyway, right? The picture that the system tried to download did not make it into the training set.
For one thing: when you do it, you’re the only one who can express that experience and knowledge. When the AI does it, everyone can express that experience and knowledge. It’s kind of like the difference between artisanal and industrial. There’s a big difference of scale that has a great impact on the livelihood of the creators.
I don’t think that Sarah Silverman and the others are saying that the tech shouldn’t exist. They’re saying that the input to train them needs to be negotiated as a society. And the businesses also care about the input to train them because it affects the performance of the LLMs. If we do allow licensing, watermarking, data cleanup, synthetic data, etc. in a way that is transparent, I think it’s good for the industry and it’s good for the people.
That’s always been the case, though, imo. People had to make time for art. They had to go to galleries, see plays and listen to music. To me it’s about the fair promotion of art, and the ability for the art enjoyer to find art that they themselves enjoy rather than what some business model requires of them, and the ability for art creators to find a niche and to be able to work on their art as much as they would want to.
Here are a couple of ideas:
I’m sure there are more.
Wow, none of the things you mentioned makes me want to use it.
Thanks for the explanation though!
I’m not sure, but OP specifies code being restricted to GPL, not all assets.
I’ve been using https://www.newsminimalist.com/ lately. Not really a community, but it serves its function pretty well.
I agree that with the current state of tools around LLMs, this is very inadvisable. But I think we can develop the right ones.
We can have tools that generate the context/info submitters need to understand what has been done, explain the choices they are making, discuss edge cases, and so on. This could include taking screenshots as the submitter uses the app, or requiring a testing period (X amount of time of the submitter actually using their feature and smoothing out the experience).
We can have tools at the repo level that scan and analyze the effect of a change. They could also isolate each submitted feature so that others can toggle or modify it if it’s not to their liking. Similarly, you could have lots of LLMs impersonate typical users and try the modifications to make sure they work, putting humans in the loop at the appropriate times.
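To make the isolation idea concrete, here’s a minimal sketch of what per-feature toggling could look like. All names here (`FeatureRegistry`, `"smart_search"`) are hypothetical, not from any real project: the point is just that each submitted feature gets a flag, defaulting to off for anything unreviewed, so maintainers can opt out without reverting a whole patch.

```python
# Hypothetical sketch: a per-feature toggle registry that a repo-level
# tool could maintain for each submitted feature.
class FeatureRegistry:
    def __init__(self):
        self._flags = {}

    def register(self, name, enabled=True):
        # Called when a submitted feature is merged behind a flag.
        self._flags[name] = enabled

    def enabled(self, name):
        # Unknown/unreviewed features default to off, the safe choice.
        return self._flags.get(name, False)

    def toggle(self, name, enabled):
        # Lets a maintainer (or user) opt in or out after the fact.
        self._flags[name] = enabled


registry = FeatureRegistry()
registry.register("smart_search", enabled=True)

if registry.enabled("smart_search"):
    pass  # run the submitted feature's code path here

registry.toggle("smart_search", False)  # maintainer opts out
```

The same registry could be driven by the analysis tools mentioned above, e.g. auto-disabling a feature whose impersonated-user tests fail.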
People are submitting LLM-generated code they don’t understand right now. How do we protect repos? How do we welcome these contributions while lowering the risk? I think with the right engineering effort, this can be done.