Forum Posts

shopon54536
Apr 10, 2022
In General Discussions
In fact, it will give us a perfect prediction: if the predominant color of the animal's body is black, it will be a dog; and if it is white, it will be a cat. Now let's imagine that in our test set there is a subtle difference: white dogs appear. What do you think will happen to the predictions for the white dogs? The system will almost certainly assign them the "cat" tag incorrectly, resulting in lower performance for this subset of the target population. Taking these factors into account when training artificial intelligence systems based on machine learning is key if we want to avoid the various forms of algorithmic bias. Let's look at some examples.

On data, models and people

A few years ago, in a WhatsApp group of colleagues, I came across an article entitled «AI Is Sexist and Racist. It's Time to Make It Fair», by James Zou and Londa Schiebinger. The article discussed an aspect that I hadn't really stopped to think about until that moment regarding the AI models I was implementing myself: these models can be sexist and racist. In other words, they can acquire a bias that leads them to perform unevenly across groups characterized by different demographic attributes, which results in unequal or discriminatory behavior. And one of the reasons behind this behavior was precisely the data used to train them.
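Here is a minimal sketch of the cat/dog example above. It assumes a toy encoding of my own (color as a single 0/1 feature, 0 = black, 1 = white, 50 animals of each kind) and uses scikit-learn's LogisticRegression only as an illustration; none of this comes from the article, it just shows how a spurious color-label correlation in the training data breaks on white dogs at test time.

```python
# Hypothetical toy data: in training, color perfectly predicts the species.
from sklearn.linear_model import LogisticRegression

# Single feature: predominant body color (0 = black, 1 = white).
X_train = [[0]] * 50 + [[1]] * 50          # 50 black animals, 50 white animals
y_train = ["dog"] * 50 + ["cat"] * 50      # all black ones are dogs, all white ones are cats

model = LogisticRegression()
model.fit(X_train, y_train)

# Test set where the correlation no longer holds: white dogs appear.
X_test = [[1], [1], [0]]                   # two white dogs, one black dog
print(model.predict(X_test))               # ['cat' 'cat' 'dog'] -> white dogs are tagged as cats
```

The model has only ever seen color as a cue, so it keeps applying the shortcut it learned, which is exactly the uneven performance on a subset of the target population described above.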