
Tackle bias to build trustworthy artificial intelligence systems

To facilitate training artificial intelligence systems on unbiased data, organizations can consider using smaller sets of training data


Businesses have embraced artificial intelligence with open arms. Ecommerce platforms are leveraging the technology to personalize the customer experience and improve customer relations, healthcare providers are using it for better medical diagnosis, law enforcement agencies are using it to fight crime, and employers are using it to screen potential candidates, among many other applications across sectors. However, the other side of the coin is not as inspiring, as artificial intelligence also comes with its share of risks.


Skewed data propagates bias

The biggest risk that artificial intelligence poses today is that of bias. This bias creeps in when the data used to train the mathematical models is skewed. Since the technology is completely data-driven, biases in the data are reflected in the output.
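To make this concrete, the sketch below uses synthetic data and hypothetical feature names (including a "zip-code" proxy) to show how a model trained on skewed historical hiring decisions reproduces that skew in its own predictions, even when the sensitive attribute itself is left out of the features:

```python
# Minimal sketch: skewed training labels lead to skewed predictions.
# All data and feature names here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)                # 0 = group A, 1 = group B
skill = rng.normal(size=n)                        # genuinely job-relevant signal
zip_score = group + rng.normal(0, 0.3, size=n)    # proxy feature correlated with group

# Historical decisions favoured group A regardless of skill: the skew.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, size=n)) > 0.5

# The sensitive attribute is excluded, but the proxy carries it in anyway.
X = np.column_stack([skill, zip_score])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Demographic parity check: predicted hire rate per group.
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: predicted hire rate = {pred[group == g].mean():.2f}")
# The gap between the two rates mirrors the skew baked into the training labels.
```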

There are many instances where automated systems have produced sexist and racist outputs. This is especially worrying when government bodies or law enforcement agencies work with skewed data. Taking cognizance of the serious implications biased data can have on policing, the European Commission has suggested training AI systems with unbiased data.


Use smaller data sets to train artificial intelligence systems

However, it is difficult to ensure that data is 100% bias-free, and AI systems are often built before the data has been cleaned. Therefore, as a measure to facilitate training AI systems on unbiased data, organizations can consider using smaller sets of training data, which can help significantly reduce bias.
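One way to read this recommendation in practice is to train not on the full, skewed history but on a smaller subset that is balanced across the sensitive group and the outcome. The sketch below is one possible implementation of that idea, assuming a pandas DataFrame with hypothetical column names such as "gender" and "hired":

```python
# Minimal sketch of building a smaller, balanced training set by downsampling.
# Column names used here ("gender", "hired") are assumptions for illustration.
import pandas as pd

def balanced_subset(df: pd.DataFrame, group_col: str, label_col: str,
                    per_cell: int, seed: int = 0) -> pd.DataFrame:
    """Draw an equal number of rows from every (group, label) combination."""
    parts = []
    for _, cell in df.groupby([group_col, label_col]):
        parts.append(cell.sample(n=min(per_cell, len(cell)), random_state=seed))
    return pd.concat(parts).reset_index(drop=True)

# Usage:
# raw = pd.read_csv("applicants.csv")
# train = balanced_subset(raw, group_col="gender", label_col="hired", per_cell=500)
# The result is far smaller than the raw data, but every group contributes equally.
```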

Towards a more trustworthy artificial intelligence


Currently, there is a lot of debate around tackling bias in artificial intelligence. The EU's Ethics Guidelines for Trustworthy AI mandate that trustworthy AI should be lawful, ethical, and robust. The guidelines prescribe seven key requirements that AI systems should meet: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; environmental and societal well-being; and accountability.

Apart from the EU's guidelines, the Organisation for Economic Co-operation and Development (OECD) has also released its Principles on Artificial Intelligence, which have been adopted by 42 countries. These principles will form the basis of practical guidelines for implementation.

The article has been written by Neetu Katyal, Content and Marketing Consultant

She can be reached on LinkedIn.
