
Earning trust in AI: Is it a technological or a socio-technological challenge?

For artificial intelligence to be responsible, organizations must make deliberate decisions about the culture required to support it


“Ok, I will destroy humans.” This is what Sophia, a social humanoid robot developed by Hong Kong-based Hanson Robotics, said during an interview at the SXSW technology conference in Texas back in March 2016. The remark reignited the debate about the role Artificial Intelligence (AI) could play in a dystopian future, its dangers, and the ethics that organizations investing in AI will have to follow to ensure it does not go rogue. That debate is far from resolved, and over the past few years several leaders have expressed concern about the potential dangers of AI.


This raises the question: is making AI trustworthy a technological challenge or a socio-technological one? And what level of trust can, and should, we place in these AI systems?

Few organizations have made tangible investments in ways to ensure trust and address bias. Four in five businesses say that being able to explain how their AI arrived at a decision is important to their business, yet many have not taken steps to ensure their AI is trustworthy and responsible. According to IBM’s Global AI Adoption Index 2022, 74% of organizations have not taken any steps to reduce bias, 68% do not track performance variations or model drift, and 61% are unable to explain AI-powered decisions.
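What “tracking model drift” can look like in practice is sketched below. It is an illustrative example rather than IBM’s tooling, and the data and threshold are assumptions: the check compares the distribution a model was trained on with what it now sees in production, using the population stability index.

```python
# Illustrative drift check (assumed example, not IBM tooling): compare a
# feature's training-time distribution with recent production data using
# the Population Stability Index (PSI).
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (training) sample and a newer sample."""
    # Bin edges come from the baseline distribution; production values
    # outside that range simply fall out of the comparison in this sketch.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in sparse bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Made-up data: production scores have drifted from the training scores.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)
prod_scores = rng.normal(0.4, 1.2, 10_000)

psi = population_stability_index(train_scores, prod_scores)
print(f"PSI = {psi:.3f}")  # a common rule of thumb treats PSI > 0.2 as significant drift
```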

Future of AI: Building Trust


Given AI's ubiquitous and often invisible nature, companies must learn to build trustworthiness into their AI systems. To achieve this, they must ask fundamentally human-centric questions: What is the purpose of the AI model? How accurate is it? Is it fair? Is it explainable? Could its decisions affect someone's livelihood? No single tool can solve this socio-technical challenge.

To build trust in AI, we must instill a sense of morality in it, operate transparently, and educate consumers and businesses about the opportunities it can create. Simply put, to trust computer decisions, ethical or otherwise, people need to know how an AI system arrives at its conclusions and recommendations. 
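As a simple illustration of what knowing how a system arrives at its conclusions can mean in practice, the sketch below (an assumed example, not a prescribed method) trains a linear model on a public dataset and lists which features pushed a single prediction up or down.

```python
# Illustrative example (an assumption, not the author's method): for a linear
# model, each feature's contribution to one prediction is simply
# coefficient * standardized feature value, which can be surfaced to the user.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

# Explain a single decision: which features pushed its score up or down?
i = 0
contributions = model.coef_[0] * X[i]
top = np.argsort(np.abs(contributions))[::-1][:5]
print(f"Predicted class for sample {i}: {model.predict(X[i:i+1])[0]}")
for j in top:
    print(f"  {data.feature_names[j]:>25s}: {contributions[j]:+.3f}")
```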

Humans live by communally governed ethical norms, which are enforced through laws, rules, societal pressures, and public discourse. Although ethics have influenced decisions since early human civilization, they vary over time and across cultures. An AI system intended to augment human behaviour must therefore be trained with these shifting norms in mind.


Since data is an artefact of the human experience, all data is biased. If organizations do not recognize this bias, they calcify systemic bias into systems such as AI. A broad, holistic shift is needed to make AI more trustworthy, especially as AI becomes more pervasive. Without proper care in building AI systems, the potential biases of the programmer will shape their outcomes. Organizations must therefore develop frameworks to address these issues.
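What recognizing this bias can mean in practice is illustrated by the sketch below: before any model is trained, a team can check whether favourable outcomes in the historical data already skew towards one group. The column names, data, and threshold are made up for this example.

```python
# Illustrative check (data made up for this example): do favourable outcomes
# in the historical data already skew towards one group?
import pandas as pd

df = pd.DataFrame({
    "group": ["A"] * 6 + ["B"] * 6,                  # a protected attribute
    "hired": [1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0],   # the favourable outcome
})

rates = df.groupby("group")["hired"].mean()
disparate_impact = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")
# Ratios well below 1.0 flag a skew worth investigating before training on this data.
```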

How to build trust in AI 

For AI to be responsible, organizations must make deliberate decisions about the kind of culture required within the company to curate trustworthy AI. They must also define the processes that keep them compliant, and ensure practitioners know how to use the AI engineering frameworks and tooling that can assist in the journey.


With so many organizations now adopting AI, there has been a shift in who leads responsible and trustworthy AI initiatives. Three years ago, these initiatives were led mostly by technical leaders. Now, a majority of those leading them are non-technical business leaders, such as Chief Compliance Officers, Chief Diversity and Inclusivity Officers, and Chief Legal Officers. Inclusive teams working on such solutions will help ensure that we can trust AI to make wise choices. Gender, race, ethnicity, age, and neurodiversity should therefore be represented on these teams, and AI frameworks should be adapted for systemic empathy.

A great deal of effort and investment must go into design thinking as a mechanism for creating frameworks for systemic empathy, and this should happen before any code for AI is written. It helps organizations design in ways that mitigate potential harm, in line not just with the values of the organization but also with the rights of the people its systems affect. In other words, organizations must think about ethics and systemic empathy from the beginning and ensure that responsible AI is built into every step of the process.

Making AI more inclusive 


Ethical considerations have never been more critical than they are today. People around the world, from business executives and front-line employees to government ministers and individual citizens, have been forced to weigh seemingly impossible trade-offs between economic and health imperatives, guided only by their ethics, morals, and values.

Preventing bias in datasets is largely a manual effort, and a conscious effort from the AI community is one of the main ways to ensure AI augments, rather than harms, people. While xenophobia, misogyny, and other biases, even unintentional ones, can be obscured behind human rationales, studies suggest that AI can be designed with fairness, transparency, and even empathy. This, however, can be achieved only by diverse and inclusive teams. Detecting and correcting bias in AI, and teaching technology to relate more effectively to humans, may thus advance organizations’ ability to work together and achieve greater outcomes.
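One hedged illustration of what correcting bias can look like is sketched below: reweighting training records so that the protected attribute and the outcome become statistically independent, in the spirit of the well-known reweighing pre-processing technique. The column names and data are illustrative, not drawn from any real system.

```python
# Illustrative correction (column names and data are made up): reweight the
# training records so the protected attribute and the outcome become
# statistically independent, in the spirit of reweighing as a pre-processing step.
import pandas as pd

df = pd.DataFrame({
    "group": ["A"] * 6 + ["B"] * 6,
    "hired": [1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0],
})

p_group = df["group"].value_counts(normalize=True)          # P(group)
p_label = df["hired"].value_counts(normalize=True)          # P(outcome)
p_joint = df.groupby(["group", "hired"]).size() / len(df)   # P(group, outcome)

# Weight = expected frequency / observed frequency for each (group, outcome) cell.
df["weight"] = df.apply(
    lambda r: p_group[r["group"]] * p_label[r["hired"]] / p_joint[(r["group"], r["hired"])],
    axis=1,
)
# The weights can then be passed as sample_weight when fitting a downstream model.
print(df.groupby(["group", "hired"])["weight"].first())
```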

The article has been written by Sameep Mehta, IBM Distinguished Engineer and Lead, Data and AI Platforms, IBM Research India
