Understanding the quantum of human interference: A key component in usable AI

When it comes to artificial intelligence, it's the partnership between AI and the human that creates a success story

DQINDIA Online

Over the past few months, the concept of "Human in the Loop" has undergone a dramatic change. Most companies embarked on their AI journey believing that large AI systems would take over entire workstreams end to end. Experience has shown instead that AI is a fantastic prediction machine, arguably the best prediction machine we will ever have, but that judgment still requires human engagement. With that realization, enterprises are redefining how they deploy AI in ways that elevate the role of humans: to train the machines and to bring domain context to the technology.

Hybrid is the answer, and the only way to implement it successfully is by finding the right balance of accuracy and risk. In transactional finance processes, for instance, large corporations have successfully deployed AI to extract information from invoices, compare and match purchase orders against invoices, and recommend actions on pending payments. Transactional finance policies can set a threshold below which a payment is made automatically; above that threshold, the payment defaults to a manual queue where a human applies judgment on the dollar amount. This hybrid approach allows finance departments to reach the right end point while making the AI solution better through reinforcement loops. Where accuracy is very high and the risk is low, AI can automate the process completely and humans are involved only in improving the AI. Where the risks are higher, as in credit card fraud or cybersecurity, the interaction between human and AI is very tight, and human analysts help improve the algorithm at every step. And when the threshold is even lower or the risk even higher, or when empathy matters most, as in health-related fields, AI is used purely as a prediction engine to augment the human decision process.
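
To make the routing concrete, the short Python sketch below shows one way such a threshold policy could be expressed. The function, the threshold value, and the confidence cut-off are illustrative assumptions, not any particular vendor's implementation.

```python
# Minimal sketch of the hybrid invoice-routing policy described above.
# All names and numbers are hypothetical, chosen only for illustration.
from dataclasses import dataclass

AUTO_PAY_THRESHOLD = 5_000.00   # assumed policy limit, in dollars
MIN_MATCH_CONFIDENCE = 0.95     # assumed minimum PO/invoice match confidence


@dataclass
class InvoiceDecision:
    invoice_id: str
    amount: float
    match_confidence: float  # AI's confidence that the PO and invoice match


def route_invoice(decision: InvoiceDecision) -> str:
    """Route a payment either to automatic processing or to a human queue."""
    # Low-confidence matches always go to a person, regardless of amount.
    if decision.match_confidence < MIN_MATCH_CONFIDENCE:
        return "manual_review"
    # Small, high-confidence payments can be made automatically.
    if decision.amount <= AUTO_PAY_THRESHOLD:
        return "auto_pay"
    # Large payments default to a manual queue where a human applies judgment.
    return "manual_review"


# Example: a confidently matched $12,000 invoice still goes to a human.
print(route_invoice(InvoiceDecision("INV-1042", 12_000.00, 0.98)))  # manual_review
```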

Human in the loop

The greatest challenge in implementing AI in the enterprise is that no matter how good the computer vision, text extraction, or pattern recognition algorithm is, the recommendation needs to be contextualized and nuanced for its use. For example, in pharmacovigilance, AI can readily extract adverse events from doctors' notes, phone recordings, and social media posts; spot patterns in large volumes of health trend data; and automatically classify and report these adverse events to regulators. But it is not a good idea to let the AI engine run this process through a reinforcement loop that automatically promotes the better model, because the risks and the potential public health impact are significant. Since the entire process needs to be regulatory compliant, enterprises should run two instances: one serving the current model, and another analysing the incoming data to keep enhancing the model. The new model should be promoted only when the time is right.
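
One way to read this two-instance setup is as a champion/challenger deployment, sketched below in Python. The class and method names are assumptions made for illustration, not a prescribed pharmacovigilance architecture.

```python
# Illustrative sketch of the two-instance pattern described above: a "current"
# model serves the live, regulated process while a "candidate" model is improved
# separately, and promotion happens only with explicit human sign-off.
class ModelRegistry:
    """Keeps a serving model and a separately enhanced candidate model."""

    def __init__(self, current_model):
        self.current = current_model    # instance running the compliant process
        self.candidate = None           # instance being enhanced on new data

    def stage_candidate(self, retrained_model):
        """Stage a retrained challenger without touching the production model."""
        self.candidate = retrained_model

    def promote_candidate(self, approved_by_reviewer: bool):
        """Swap the candidate in only after human and regulatory sign-off."""
        if self.candidate is None or not approved_by_reviewer:
            # No automatic reinforcement-loop promotion in a regulated process.
            return self.current
        self.current, self.candidate = self.candidate, None
        return self.current
```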

Similarly, for internal HR processes, context and nuance are key when applying AI chatbots. AI solutions can, for example, be used efficiently for employee engagement. Pre-pandemic, HR trained layers of management to take the pulse of their employees. During the pandemic, it was difficult to keep this process in place, especially for organizations with a large employee base such as ours, when everyone was working from home. An AI chatbot can ask questions in a non-judgmental, standardized way, capturing where the hotspots are and where employees are doing well. The real value of the data lies in understanding why, in some countries, employees hit a low point six months after joining, whereas in other countries the issue does not exist. The AI cannot figure this out on its own; HR teams need to apply human judgment and use their contextual knowledge of onboarding policies in each location. That is where operating model and nuance come into play.

It is the partnership between AI and the human that creates a success story. Ultimately, humans have to make the most of the information AI provides and take a decision, often in a split second. We learnt this while refining an AI engine for Envision Racing, a founding team in Formula E, the world's first fully electric international single-seater street racing series. The engine sifts through all the radio communications received during a race, removes the extraneous, irrelevant noise from multiple radio channels, and feeds the driver only the information relevant to the race at that moment in time. This turned out to be incredibly helpful for the drivers, who can focus all their attention on the racetrack while getting exactly the information they need to make those critical, winning decisions.

Augmented Intelligence

So as the journey of AI continues, the acronym should really be read as augmented intelligence, not artificial intelligence. The issue for a CIO is not whether AI is usable, but how to set up an operating and organizational model that properly leverages the power of AI for each individual business: how to operationalize AI, which people processes are involved, and which governance methodologies to build into the workflow. Indeed, digital is easy; transformation is hard. It is people and process that make or break digital transformation, and training the humans is as important as training the AI.

The article has been written by Sanjay Srivastava, Chief Digital Strategist, Genpact
