When we hear of Artificial Intelligence (AI), we might assume it is something that primarily concerns large technology companies and their business processes. We sometimes forget that AI has, for some time now, been an integral part of our everyday lives: unlocking phones through facial recognition, enabling voice-assisted search, powering social media platforms and e-commerce deliveries, and running spell checks on documents and work emails.

At the enterprise level, AI is being widely adopted across functions and processes to understand customers and draw insights from incoming data. This helps businesses evolve, stay ahead of the competition, and strategize around real-time customer demand. And ever since the pandemic hit, AI adoption in India has increased manifold. A report by IDC states that India’s AI market is expected to reach $7.8 billion by 2025, growing at a CAGR of 20.2%. According to the report, organizations are leveraging multiple AI applications such as customer relationship management (CRM), enterprise risk management (ERM), supply chain management, demand forecasting, and improving return on investment (ROI).
While the industry is growing exponentially, that growth has also amplified the need for organizations to put well-defined ethics and governance in place.
Defining Ethical AI
While organizations are accelerating the pace of AI adoption across functions, a crucial focus is ensuring that this AI is “responsible” and ethical in practice. Ethical AI means putting in place systems and tools to govern the scope and operation of the technology. According to Gradient Institute, it is a multi-disciplinary effort to design and build AI systems that are fair and that improve our lives.
Importance of Ethical AI
Ethical AI systems should be designed with careful consideration of their fairness, accountability, transparency, and impact on people and the world.
Among the many reasons data has become imperative across business processes is the crucial role it plays in determining the quality of AI technology. Recent AI advances have come from systems trained on the data an enterprise gathers, rather than from systems that make decisions according to human-defined rules. Designers and developers make conscious decisions when creating rule-based systems, so the ethical implications of those rules are usually relatively transparent. When AI systems are instead built on Machine Learning and Deep Learning, they absorb whatever patterns sit in the data, which can give rise to systems in which ethical considerations were never explicitly encoded.
An important aspect of ensuring AI is ethical lies in the training datasets. It is imperative that the data fed to the algorithm represents every segment of the relevant population, so that there are no gaps in the filtering process. For instance, some organizations rely on AI for their recruitment process: from the applications that come in, candidates are first shortlisted by AI and then called in for interviews with the respective teams. The AI is programmed to filter resumes based on criteria that the recruiter has fed into the system. Imagine if, due to incomplete datasets, the system ends up unfairly penalizing a specific group based on gender, educational background, current location, and so on. That would harm members of that demographic and potentially the company doing the recruiting, and it could also place the company in violation of organizational or industry guidelines, or in some cases even the law. A simple statistical check on screening outcomes, sketched below, is one way such skews can be surfaced.
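The following is a minimal sketch, not any vendor’s actual screening system: it assumes each screening decision is logged as a (group, shortlisted) pair, and it applies the well-known “four-fifths rule” to flag groups whose shortlist rate falls well below that of the best-performing group.

```python
# A minimal disparate-impact check on screening outcomes. The candidate
# records and the 0.8 ("four-fifths") threshold are illustrative
# assumptions, not taken from any specific recruiter's system.
from collections import defaultdict

def selection_rates(candidates):
    """Compute the shortlist rate for each demographic group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in candidates:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(candidates, threshold=0.8):
    """Flag groups whose shortlist rate is below `threshold` times
    the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical screening outcomes: (group, was_shortlisted)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(disparate_impact(outcomes))  # {'B': 0.5} -> group B is flagged
```

A check like this says nothing about why a group is being filtered out, but it gives an early, auditable signal that the training data or the filtering criteria need to be re-examined.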
AI bias in real life
AI applications are being adopted across sectors like healthcare, education, banking and finance, and retail. However, these systems often come with their own set of biases. AI systems are built to replicate human intelligence and can therefore end up discriminating in much the same way humans do. There are several reasons for this bias, but the primary one is the data these systems are fed. Let’s take a look at a few examples of how AI has been biased against certain sections of the population.
In 2017, an automated soap dispenser failed to detect dark-skinned hands, dispensing soap only for people with pale skin. In another example, risk-assessment AI used by some courts in the US to estimate an offender’s chances of committing a crime was found to score African American defendants more harshly, influencing bail decisions against them.
As mentioned earlier, recruitment is another crucial area where AI biases often show up. In 2015, an AI system used to shortlist candidates for jobs at a global tech company was found to give women lower priority for several roles. A likely explanation is that the system, trained on historical hiring data, deduced that those roles were a better fit for men and reflected that pattern in its recommendations.
Additionally, when highlighting bias in data, researchers at Google Brain observed that while India and China are the two most populous countries in the world, they account for only 3 percent of the images in ImageNet, a popular dataset. The US, by contrast, holds only 4 percent of the world’s population but contributes 45 percent of the images. As a result, regional perspectives often end up underrepresented. A simple composition audit, sketched below, shows how this kind of geographic skew can be measured.
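As an illustration, and not the researchers’ actual methodology: if each image record carried a country-of-origin tag, a few lines of code could surface the skew. The sample counts below are invented purely to mirror the kind of imbalance described above.

```python
# A minimal representation audit, assuming each image record carries a
# country-of-origin tag. The tag counts are invented for illustration.
from collections import Counter

def representation_share(country_tags):
    """Return each country's share of the dataset as a percentage."""
    counts = Counter(country_tags)
    total = sum(counts.values())
    return {country: 100 * n / total for country, n in counts.items()}

tags = ["US"] * 45 + ["UK"] * 8 + ["India"] * 2 + ["China"] * 1 + ["Other"] * 44
for country, share in sorted(representation_share(tags).items(),
                             key=lambda kv: -kv[1]):
    print(f"{country}: {share:.1f}%")   # e.g. US: 45.0% ... China: 1.0%
```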
In light of this, the Indian government, along with several other nations, has been working towards policies that mitigate AI bias, and the key to achieving this is getting the algorithm right. NITI Aayog’s 2020 draft on Responsible AI was a good start to introducing ethical AI regulation in the country; it recommends sector-specific regulation, so that every sector is not subject to the same blanket rules. Additionally, in 2021, India launched a handbook, INDIAai, to help mitigate biases in AI. The handbook provides a framework for companies to follow while creating AI systems and algorithms. As organizations, both public and private, rapidly adopt emerging technologies like AI and machine learning (ML), it is imperative that the technology is governed by an ethical framework that eliminates potential biases against any section of the population.
A service like Cloudera Machine Learning, coupled with the robust data governance features of the Cloudera Shared Data Experience (SDX), allows model lineage to be traced, along with greater visibility, explainability, interpretability, and reproducibility. All of these are essential to AI model governance, which is a cornerstone of supporting the ethical use of AI in an organization. The sketch below illustrates, in generic terms, the kind of lineage metadata such governance depends on.
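To make the idea concrete, here is a minimal, tool-agnostic sketch of a lineage record; it is not Cloudera’s API, and the function and field names are invented, but it shows the sort of metadata a governance layer tracks automatically so a model’s provenance can be reproduced and audited.

```python
# A hypothetical lineage record: what a model was trained on, with what
# settings, and when. Names and fields are illustrative assumptions.
import datetime
import hashlib
import json

def lineage_record(model_name, version, train_file, params):
    """Capture training provenance for later audit and reproduction."""
    with open(train_file, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "model": model_name,
        "version": version,
        "training_data": train_file,
        "training_data_sha256": data_hash,  # detects silent data changes
        "hyperparameters": params,
        "trained_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Hypothetical usage, assuming the data file exists:
# record = lineage_record("resume_screener", "1.3",
#                         "candidates_2021.csv", {"max_depth": 6})
# print(json.dumps(record, indent=2))
```

Hashing the training data alongside the hyperparameters means that if a model’s behaviour later comes into question, an organization can establish exactly which data produced it, which is the starting point for any bias investigation.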
The author is Piyush Agarwal, SE Leader, India, Cloudera