“Wow Johnson, no matter how much biased data we feed this thing it just keeps repeating biases from human society.”
Sample input from a systematically racist society (the entire world), get systematically racist output.
No shit. Fix society or “tune” your model, whatever that entails…
Obviously only one of these is feasible from a developer perspective.
True, and it upsets me because we can’t even get a baseline agreement from the masses to correct systemic inequality.
…yet, simultaneously we’re investing academic effort into correcting symptoms spawned by the problem (that many believe doesn’t exist).
To put this another way: imagine you're a car mechanic. Someone brings you a 1980s vehicle, you diagnose that it's low on oil, and in response the customer says, "Oil isn't real." That's an impasse: conversation not found, user too dumb to continue.
I suppose to wrap up my whole message in one closing statement: people who deny systemic inequality are braindead, and for whatever reason they were on my mind while reading this article.
I'll be curious what they find out about removing these biases. How do we even define a bias-free model? We have nothing to compare it to… another tangent, nope, I'm done. Zz.