Summary
- The Biden-Harris administration has secured voluntary commitments from eight tech companies to develop safe and trustworthy generative AI models.
- The companies are Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI.
- The commitments cover only future generative AI models, that is, models that can create new text, images, or other data.
- The companies have agreed to submit their software to internal and external audits, in which independent experts can attack the models to see how they can be misused.
- They have also agreed to safeguard their intellectual property, prevent the technology from falling into the wrong hands, and give users an easy way to report vulnerabilities or bugs.
- The companies have also agreed to publicly report their technology’s capabilities and limits, including fairness and bias, and to define inappropriate use cases that are prohibited.
- Finally, the companies have agreed to focus on research into the societal and civil risks AI might pose, such as discriminatory decision-making or weaknesses in data privacy.
The article also mentions that the White House is developing an Executive Order and will continue to pursue bipartisan legislation “to help America lead the way in responsible AI development.”
It is important to note that these commitments are voluntary, and there is no guarantee that the companies will follow through on them. The White House’s Executive Order and bipartisan legislation would provide stronger safeguards for generative AI.
Additional Details
- The White House is most concerned about AI generating information that could help people make biochemical weapons or exploit cybersecurity flaws, and whether the software can be hooked up to automatically control physical systems or self-replicate.
Comment
Haha, let them self-regulate, just like the financial industry regulates itself, or how its executives became the heads of the agencies that regulate it. See how that will turn out.
Responsible AI would always include: “Our AI models would kill you faster than you can blink and hack your systems faster than you can move your fingers.”
“We promise to not let it fall into the wrong hands.” -The company that’s been massively hacked a few times
Which one lol?
I’d believe them if companies didn’t turn their back on their “don’t be evil” policies.
I was looking at this title for too long wondering who big tech pinky is
Big Tech Pinky and the Brain
🧠 🧠 🧠 🧠.
🧠 🧠 🧠 🧠.
🧠 🧠 🧠 🧠.
🧠 🧠… 🧠 🧠.
Make bad decisions and blame it on AI. Make good decisions and blame it on CEO.
I don’t understand why Meta, Google, OpenAI, Microsoft, and Apple aren’t on there. Those are what could be considered the “main” AI companies.
I wonder if, for Meta, being open-source keeps the company from fitting in with the rest. Also, for now, it looks like a publicity stunt with no real teeth. The more substantial AI companies may be holding out for more favorable treatment.
Prevent it from falling into the wrong hands? Palantir is the wrong hands, some of the worst you can get.
Meanwhile, back in China…
Oh yes, that will make us all safer. There is no way to know how to do nefarious things except by using AI. This new regulation will save us all.
https://duckduckgo.com/?t=ffab&q=how+to+make+chlorine+gas+&ia=web