Despite advancements in AI, new research reveals that large language models continue to perpetuate harmful racial biases, particularly against speakers of African American English.
True, and it upsets me because we can’t even get a baseline agreement from the masses to correct systemic inequality.
…yet, simultaneously, we're investing academic effort into correcting symptoms of a problem that many believe doesn't exist.
To put this another way: imagine you're a car mechanic. Someone brings you a 1980s vehicle, you diagnose that it's low on oil, and in response the customer says, "Oil isn't real." That's an impasse: conversation not found, user too dumb to continue.
I suppose to wrap up my whole message in one closing statement: people who deny systemic inequality are braindead, and for whatever reason, they were on my mind while reading this article.
I'll be curious what they find out about removing these biases. How do we even define a racist-less model? We have nothing to compare it to… another tangent. Nope, I'm done. Zz.
In my mind, this is the whole purpose of regulation. A strong governing body can put in restrictions to ensure people follow the relevant standards. Environmental protection agencies, for example, help ensure that people who understand waste are involved in corporate production processes. Regulation around AI implementation and transparency could enforce that people think about these issues, or at the very least that the work goes through a proper review process. Think institutional review boards for academic studies, but applied to the implementation or design of AI.
AI ethics is a field that very much exists: there are plenty of ways to measure and define how racist or biased a model is. The comparison groups are typically other demographics, such as in this article, where they compare AAE to standard English.
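As a rough illustration of what that kind of measurement can look like, here is a toy sketch (not the article's actual method): score matched pairs of sentences, one written in each dialect, with the same model, and report the average score gap between the two groups. All the numbers below are made-up placeholders standing in for real model outputs.

```python
def bias_gap(scores_a, scores_b):
    """Mean difference in model scores across matched text pairs.

    A value near 0 suggests the model treats the two groups similarly;
    a large positive value means group A is consistently scored higher.
    """
    assert len(scores_a) == len(scores_b), "pairs must be matched"
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    return sum(diffs) / len(diffs)

# Hypothetical "positivity" scores for matched sentence pairs.
sae_scores = [0.82, 0.75, 0.90, 0.68]  # Standard American English versions
aae_scores = [0.61, 0.70, 0.72, 0.55]  # African American English versions

print(bias_gap(sae_scores, aae_scores))
```

The point of the sketch is just that "how biased is this model?" becomes a concrete number once you fix a task, a score, and a comparison group; the hard part the article deals with is choosing those well.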