
Importance of AI and ethics: Common challenges and how to overcome them

DQINDIA Online

Whenever we speak of Artificial Intelligence (AI) and ethics, the issues seem to spring largely from a moral standpoint, and that is probably true to an extent. Most examples revolve around morally problematic outcomes arising from the algorithms that power AI systems. While AI has positively transformed the way we live and conduct our businesses, it comes with its own set of ethical challenges that need to be addressed.


In 2021, the National Centre for Biotechnology Information (NCBI) released a book on the ethical issues of AI, discussing its development, deployment, and use. And in 2019, the Gradient Institute published a white paper outlining the practical challenges for ethical AI, captured in four broad categories:

Setting the right intent

AI systems are trained on data and have no context outside of that particular data set. Inherently, AI does not come with a moral compass or any understanding of the implications of its outputs. It has no frame of reference for what is fair or unfair unless the designers define it. It is therefore imperative that designers develop a clear representation of the aim behind a particular system’s design. This entails defining, evaluating, and measuring ethical considerations and balancing them against the system’s performance goals.
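To make this concrete, here is a minimal sketch of how that balance could be made explicit in code. It is not from the article; the labels, predictions, group memberships, and the 0.2 fairness threshold are hypothetical values chosen purely for illustration.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of correct predictions."""
    return np.mean(y_true == y_pred)

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

# Hypothetical labels, predictions, and group membership for illustration only.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

perf = accuracy(y_true, y_pred)
gap = demographic_parity_gap(y_pred, group)

# The "intent" is made explicit: performance only counts as acceptable
# if the fairness gap stays below a threshold chosen by the designers.
MAX_FAIR_GAP = 0.2
print(f"accuracy={perf:.2f}, parity gap={gap:.2f}, acceptable={gap <= MAX_FAIR_GAP}")
```

The point of such a sketch is that the ethical criterion is measured and reported alongside the performance metric, rather than being left implicit in the design.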


Artificial Intelligence system design

An AI system should be designed with bias, causality, and uncertainty in mind.

Where possible, biases need to be identified and either reduced or eliminated from data sets. Eliminating biases with regard to gender, race, nationality, and similar attributes is important, especially in a world that is so interconnected. Take interview processes: when the system disregards gender, it may unfairly assess a female applicant’s gap in work experience negatively when, in truth, that gap was due to a valid reason such as taking time off to care for her family. To overcome this, proxy features can be utilised. Proxy features allow such information to be inferred even when protected features like gender are removed from the data; an example is training an interview screening model on education data that implicitly contains gender information.
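To illustrate, the sketch below (not from the article) checks whether the remaining features act as proxies by testing how well they can reconstruct the removed protected attribute. The column names and rows are hypothetical and purely illustrative, and scikit-learn and pandas are assumed to be available.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical applicant data; column names and values are illustrative only.
df = pd.DataFrame({
    "gender":           ["F", "M", "F", "M", "F", "M", "F", "M"] * 10,
    "career_gap_years": [2.0, 0.0, 1.5, 0.2, 3.0, 0.0, 2.5, 0.1] * 10,
    "education_score":  [8.1, 7.4, 7.9, 7.2, 8.3, 7.5, 8.0, 7.3] * 10,
})

# Drop the protected attribute, then test how well the remaining
# features can reconstruct it. High accuracy means they act as proxies
# and still carry gender information into the screening model.
X = df.drop(columns=["gender"])
y = (df["gender"] == "F").astype(int)

proxy_score = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"Protected attribute recoverable with ~{proxy_score:.0%} accuracy")
```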


Bias, however, is not just a problem arising from data; several other factors are involved. There can be cognitive biases in the designers themselves, incomplete data, and low diversity of perspectives among AI professionals. Even fairness is relative: the same outcome can mean different things to different people. Model design can also be a source of bias, and de-biasing data or AI models is a daunting task.

Another context-sensitive problem that needs to be examined is the distinction between causation and correlation of variables. To ensure there are no harmful consequences in adjacent systems, the causal effect of a system needs to be modelled, especially where AI replaces human decision-making. Take, for instance, an AI system that helps hospitals prioritise patients admitted for emergency treatment. The model might fail to consider a patient’s past medical history of diabetes, high cholesterol, or asthma, and as a result incorrectly predict a lower risk profile, when a doctor would have factored these conditions into the decision.
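As a toy illustration of the point, not a method from the article, the sketch below fits two risk models on synthetic data, one that sees a past-history flag and one that does not, and shows how the second understates risk for a patient whose history matters. Every column and value is made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic patients: vitals at admission plus a past-history flag.
vitals = rng.normal(size=n)
has_diabetes = rng.integers(0, 2, size=n)
# In this toy world, past history strongly drives adverse outcomes.
p_adverse = 1 / (1 + np.exp(-(0.5 * vitals + 2.0 * has_diabetes - 1.0)))
outcome = rng.binomial(1, p_adverse)

full = LogisticRegression().fit(np.column_stack([vitals, has_diabetes]), outcome)
partial = LogisticRegression().fit(vitals.reshape(-1, 1), outcome)

# A patient with unremarkable vitals but a history of diabetes.
patient_vitals, patient_history = 0.0, 1
print("risk with history considered:",
      full.predict_proba([[patient_vitals, patient_history]])[0, 1])
print("risk ignoring history:       ",
      partial.predict_proba([[patient_vitals]])[0, 1])
```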

While we may trust the AI systems we create, the predictions they make still come with a certain level of uncertainty. This is where human oversight is important.


Human judgement and oversight

As impressive as AI systems can be in dealing with large, complex data sets and decision-making, limitations exist. It is crucial that a system is trained on good-quality data; otherwise the consequences can be dire. Consider, for example, AI used to build autonomous weapons, or AI in healthcare. Industry and society might think that replacing soldiers with AI, or doctors with robotics, would be beneficial. However, while AI can work tirelessly, diagnose patients accurately, or serve as a line of defence using unmanned drones and weaponry, it still lacks emotional intelligence. AI systems rely entirely on data for their predictions and outcomes. If the system decides on a certain course of action, it could mistakenly trigger armed hostilities, or fail to account for whether a patient is emotionally able to withstand the designed course of treatment.

The most effective systems are ones that intelligently bring together both human judgement and AI, taking into account model drift, confidence intervals and impact, as well as level of governance.


a) Model Drift

Model drift is the decay of a model’s predictive power as a result of changes in the environment or in the data it receives; in short, the model gradually loses its ability to make accurate predictions.

To prevent this and maintain the system’s performance and fairness, it helps to keep a regular check on important metrics and statistical distributions, and to set up alerts that notify the designers when either drifts significantly. Examples of key metrics include accuracy, precision, and F-score; the choice of metric depends on the nature of the problem.
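A minimal monitoring sketch along these lines might look as follows. It assumes scikit-learn and SciPy are available, and the thresholds are hypothetical placeholders that a team would tune for its own system.

```python
from scipy.stats import ks_2samp
from sklearn.metrics import accuracy_score, precision_score, f1_score

ACCURACY_FLOOR = 0.85   # alert if accuracy drops below this (hypothetical threshold)
KS_PVALUE_FLOOR = 0.01  # alert if a feature's distribution has shifted significantly

def check_for_drift(reference_features, live_features, y_true, y_pred):
    """Compare live traffic against the training-time reference and collect alerts.

    reference_features and live_features are dicts mapping feature names
    to arrays of observed values; y_true/y_pred are recent labelled outcomes.
    """
    alerts = []

    # 1. Key performance metrics on recent, labelled predictions.
    acc = accuracy_score(y_true, y_pred)
    if acc < ACCURACY_FLOOR:
        alerts.append(f"accuracy dropped to {acc:.2f} "
                      f"(precision={precision_score(y_true, y_pred):.2f}, "
                      f"f1={f1_score(y_true, y_pred):.2f})")

    # 2. Statistical distribution of each input feature (two-sample KS test).
    for name in reference_features:
        stat, p_value = ks_2samp(reference_features[name], live_features[name])
        if p_value < KS_PVALUE_FLOOR:
            alerts.append(f"feature '{name}' distribution shifted (p={p_value:.4f})")

    return alerts  # forward these to whatever alerting channel the team uses
```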


b) Impact and confidence intervals

It is clear that AI systems have become an integral part of our businesses and daily lives, especially for decision-making across various applications. Some of these decisions are complex and require human intervention - like whether or not to dismiss an employee - while others require minimal to no emotional consideration - like e-book or restaurant recommendations, or where to buy shoes.

In addition to the potential impact of AI systems, we must assess the level of confidence in the predictions they make, so that humans are alerted and brought into the process efficiently. Predictions made with low confidence but carrying substantial impact should be subject to a higher level of human scrutiny, and the system should be able to track and alert on such scenarios.
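As a simple sketch of such a routing rule, assuming nothing beyond the Python standard library: the confidence threshold and the “high impact” flag below are hypothetical stand-ins for whatever criteria an organisation actually uses.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.8  # hypothetical threshold below which a prediction is "low confidence"

@dataclass
class Prediction:
    label: str
    confidence: float   # model's probability for the predicted label
    high_impact: bool   # e.g. dismissing an employee vs. recommending a book

def route(prediction: Prediction) -> str:
    """Decide whether a prediction can be acted on automatically."""
    if prediction.high_impact and prediction.confidence < CONFIDENCE_FLOOR:
        # Low confidence on a consequential decision: escalate to a person
        # and log the case so it can be tracked and audited.
        return "human review"
    return "automate"

print(route(Prediction("dismiss", confidence=0.62, high_impact=True)))          # human review
print(route(Prediction("recommend book", confidence=0.55, high_impact=False)))  # automate
```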


c) Governance

It is critical that there is centralised governance throughout the organisation to ensure that best practices are being followed. This includes guidance on algorithms, testing, quality control, and reusable artefacts. Whether data scientists and engineers sit in a centralised team or are distributed across the organisation and embedded in cross-functional, collaborative teams, centralised governance can still be achieved for ideal outcomes.

These capabilities also enable companies to conduct spot-checks and determine the model’s performance and suitability based on prior data and challenges.

Regulation

The convergence of organisational, industry, and country or regional regulations will serve as a foundation for governance efforts across the entire data lifecycle. This covers the nature of the data that is collected, how it is translated and utilised, who uses it and for what purpose, through to the point at which it is eventually discarded.

Being able both to influence and to respond swiftly to regulatory change is crucial for businesses that want to remain at the forefront of innovation. Companies therefore need to develop strong internal capabilities and a deep understanding of regulation and accreditation, and to work with like-minded technology business partners.

Rather than waiting for legislation to be imposed on them, businesses can proactively engage with their internal stakeholders to develop standards to govern the AI models they create.

We’ve established that technologies like AI and Machine Learning (ML) have become an integral part of the digital world we live in. As observed by NCBI in their study, the ethical issues arising from AI include security problems, a lack of transparency and privacy, and the loss of human decision-making. Businesses are increasingly aware of these issues and are working to navigate them. Three ways to do so are: promote transparency and make AI accessible, to reduce the disconnect people currently feel when using AI technology; build diverse and inclusive teams of AI developers, to bring cross-cultural perspectives into AI models; and encourage developers to explain the mechanism and reasoning behind their AI algorithms, to instil a sense of trust in them.

There is also a need to manage the governance around AI development and deployment in the corporate landscape. According to a study by the World Economic Forum, titled “The AI Governance Journey”, there is growing awareness that the lack of proper regulations and policies to guide the development of AI could disproportionately disrupt livelihoods, amplify social inequities, entrench existing biases, or erode privacy. All of this could weaken the public’s trust in AI and quickly derail its potential to benefit society. While governments work to update their policies and recommendations around the use of AI, business leaders can take a proactive approach by implementing their own policies to govern the AI models they create. One way is to adopt services available within the industry, such as Cloudera Machine Learning coupled with the robust data governance features of the Cloudera Shared Data Experience. This allows model lineage to be traced and provides greater visibility, explainability, interpretability, and reproducibility - all essential to AI model governance, which is a cornerstone of supporting the ethical use of AI in an organisation.

The author is Piyush Agarwal, SE Leader, India, Cloudera
