
Artificial Intelligence and Bias Removal: A Herculean Task

Artificial intelligence algorithms tend to become biased because they are heavily dependent on the datasets that are fed to them


Google's algorithm for detecting hate speech tended to penalize tweets written by African Americans, classifying 46% of non-offensive messages as hate speech, according to a report by MIT Technology Review. How did this happen?


Algorithms focused on classifying text messages usually operate with keywords, which are compiled into lists known as bags of words. One such word is, for example, the infamous "N" word, which carries strong racial or racist connotations in the United States, depending on who says it.

What the algorithm failed to detect, however, is that the "N" word spoken by an African American has a completely different meaning than it would if it were used by someone outside that group. So the system tended to label messages that were not offensive at all as offensive.
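
To make the mechanism concrete, here is a minimal Python sketch of the kind of keyword-based flagging described above. The keyword list and messages are hypothetical placeholders, not the terms or data of any real system; the point is only that a pure keyword match ignores who is speaking and in what context.

    # Minimal sketch of a keyword ("bag of words") flagger.
    # OFFENSIVE_KEYWORDS is a hypothetical placeholder list.
    OFFENSIVE_KEYWORDS = {"slur_a", "slur_b"}

    def flag_message(text: str) -> bool:
        """Return True if any listed keyword appears, regardless of context."""
        tokens = text.lower().split()
        return any(token in OFFENSIVE_KEYWORDS for token in tokens)

    # A reclaimed, in-group use of a term is flagged exactly like an attack,
    # because the classifier only sees the presence of the word.
    print(flag_message("slur_a used casually among friends"))     # True (false positive)
    print(flag_message("a genuinely hateful message with slur_a"))  # True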

Beyond this particular case, the biases of machine learning and classification systems have come under close scrutiny, because errors like the one above occur more often than one might expect.


According to Professor Ian Witten of the University of Waikato, New Zealand, most of the routines used to uncover patterns and surprises in big data are organized around two objectives: classification tasks and regression tasks.

In the first case, classification, the system must assign a category to a given event or instance: for example, whether the person in a photo is a man or a woman, whether a certain sound was produced by an electric guitar or a bass guitar, or whether a Twitter message is positive or negative in polarity, which is sentiment analysis.
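
As an illustration, the following Python sketch assigns a positive or negative category to short messages, a toy version of the sentiment analysis mentioned above. The training sentences and labels are invented, and scikit-learn is assumed to be available.

    # Toy classification task: learn to assign a category from labeled examples.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["I love this phone", "great service", "terrible battery", "I hate the screen"]
    labels = ["positive", "positive", "negative", "negative"]

    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)                        # learn from the labeled examples
    print(model.predict(["the battery is great"]))  # assigns one of the two categories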

The correct classification of an event or case enables many subsequent operations, such as extracting statistics, detecting trends, or applying filters: for example, determining which hate messages should be removed from social networks.


But nothing is so simple. For classification routines to work, a collection of previously gathered cases, i.e. a dataset, is required. If the goal is to classify the sex of faces, you must have images of faces that have already been manually labeled (supervised learning) and that feed the algorithms as they learn the formulas for classification. The amount of data required is so large that these collections have come to be called big data.
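
The supervised setup can be sketched as follows, assuming scikit-learn and a tiny, made-up set of numeric features standing in for the face images, each paired with a label assigned by a human annotator.

    # Minimal sketch of supervised learning from a manually labeled dataset.
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier

    # Each row stands in for one face image (e.g. pixel-derived features);
    # each label was assigned by a human annotator. Values are placeholders.
    features = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2],
                [0.15, 0.85], [0.85, 0.15]]
    labels = ["woman", "woman", "man", "man", "woman", "man"]

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.33, random_state=0)

    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print(clf.score(X_test, y_test))  # accuracy on cases the model has not seen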

As Curt Burgess of the University of California, Riverside, said twenty years ago, "size does matter." But the data generated by the new digital day laborers usually arrive biased from the beginning. This is because, as Meredith Broussard points out in Artificial Unintelligence, a book published by MIT Press, the data are extracted by humans, or by machines parameterized by humans, and are usually already shaped by human stereotypes.

Let us understand algorithmic bias through the following case: Some time ago, a group of military experts built a system whose objective was to distinguish military vehicles from civilian ones. They chose a neural network approach and trained the system with images of tanks, tankers, and missile launchers on one side, and ordinary cars and trucks on the other. After reaching reasonable results, they took the system into the field, and it failed completely, performing no better than tossing a coin. Digging back into the black box, the experts found that the military photos used for training had been taken at dusk or dawn, while most of the civilian photos had been taken in brighter conditions. The system had learned the difference between light and dark.
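
A simplified version of that failure can be reproduced with synthetic numbers: if brightness perfectly separates the labels in the training data, a classifier will happily learn brightness and then collapse to coin-flip accuracy once that correlation disappears. The sketch below assumes scikit-learn and NumPy; all data are invented.

    # Sketch of the tank anecdote: the model learns a nuisance feature
    # (brightness) that happens to track the label in the training set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Training set: military photos happen to be dark, civilian photos bright.
    brightness_train = np.r_[rng.uniform(0.0, 0.3, 50), rng.uniform(0.7, 1.0, 50)]
    labels_train = np.r_[np.ones(50, dtype=int), np.zeros(50, dtype=int)]  # 1 = military

    clf = LogisticRegression().fit(brightness_train.reshape(-1, 1), labels_train)

    # Field test: brightness no longer tracks the vehicle type at all.
    brightness_test = rng.uniform(0.0, 1.0, 100)
    labels_test = rng.integers(0, 2, 100)
    print(clf.score(brightness_test.reshape(-1, 1), labels_test))  # roughly 0.5, a coin flip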


Of course, the error made by the neural network in the military-civilian example could lead to disastrously wrong decisions. But algorithmic failures take on an even more troubling dimension when the classification tasks touch on sexism or racism.

The same applies when a person performs a Google search, when Facebook suggests tagging someone it believes it has recognized in a photo that has just been uploaded, or when the cameras on an urban highway try to read the license plate of a car traveling without an electronic toll tag. In all these classification tasks, the results rely heavily on the data previously loaded into the system by humans. And if those data already carry cultural biases, the results will be biased. By now, the examples of algorithms that exacerbate prejudice or discrimination are innumerable, and they call into question the great promise of these systems: removing human error from the equation.

A whole area of current research tries to tackle machine bias: first, by returning to qualitative, detailed human reviews that are attentive to prejudice, as Martin Lindstrom shows in his book 'Small Data'; and second, by going back over the original data to see what kind of culture-dependent biases might have crept in when the data were recorded. This is truly a Herculean task.
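
One concrete form such a review can take is comparing a classifier's error rates across groups of authors, as in the hate-speech case that opened this article. The sketch below uses entirely hypothetical records and group names.

    # Compare false-positive rates across author groups (hypothetical records).
    from collections import defaultdict

    records = [  # (group, true_label, predicted_label)
        ("group_a", "not_offensive", "offensive"),
        ("group_a", "not_offensive", "not_offensive"),
        ("group_b", "not_offensive", "not_offensive"),
        ("group_b", "not_offensive", "not_offensive"),
    ]

    false_positives = defaultdict(int)
    negatives = defaultdict(int)
    for group, truth, predicted in records:
        if truth == "not_offensive":
            negatives[group] += 1
            if predicted == "offensive":
                false_positives[group] += 1

    for group in negatives:
        rate = false_positives[group] / negatives[group]
        print(f"{group}: false-positive rate {rate:.0%}")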


The article has been written by Dr. Raul V. Rodriguez.

He can be reached on LinkedIn.
