
Human intervention is a must for artificial intelligence. Here’s why

The risks arising from artificial intelligence-powered systems bring us to the core question of whether humans still hold primacy

DQINDIA Online

In today’s world, artificial intelligence (AI) is becoming all-pervasive and its use cases are on the rise. Sectors across the spectrum are leveraging AI for intelligent automation, customer insights, sales forecasting, and much more, all of it powered by AI-driven algorithms. McKinsey Global Institute research suggests that by 2030, AI could deliver an additional $13 trillion per year in global economic output. Now pause for a second and consider the perception being created: that AI-powered systems are a panacea for most business problems. It is also widely believed that AI-powered algorithms are self-sustaining and, once developed, need minimal intervention. This perception is flawed. In multiple instances, AI has not only given rise to a host of unwanted consequences but has also led to life-threatening events. The technology is not ready yet; a few years down the line, we may have better reliability. Human talent plays a much larger role than many like to admit.


Failed use cases:

Last month, American real estate company Zillow put up the shutters on its home-buying business and laid off 25 per cent of its workforce. The reason: the company’s proprietary AI algorithm, deployed through its digital interface ‘iBuyer’, failed badly at predicting housing market prices during the pandemic, which led to homes being sold at a discount to their purchase price. This AI failure eventually resulted in a loss of $422 million for the digital real estate firm.

In August last year, while the world was fighting the COVID pandemic, students in the UK went through significant mental agony because of a faulty AI-powered grading system used to score the examinations that determine university admission. Reportedly, the algorithm lowered the A-Level results of nearly 40 per cent of students who could not sit exams owing to the pandemic. The faulty grades led to a national uproar, with protests from students, and Prime Minister Boris Johnson blamed the fiasco on what he called a "mutant algorithm". Eventually, the algorithm-generated results were withdrawn and replaced with grades assessed manually by teachers.


A similar event occurred in 2016, when Microsoft launched its intelligent chatbot Tay, which spewed out hate speech such as ‘Hitler was right’ and ‘9/11 was an inside job’. Microsoft pulled the chatbot immediately after the debacle and apologised for the unintended outcome. “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values,” the company said in a statement.

AI going wrong has not only created reputational risks for companies and organisations; there are also instances where companies had to incur a huge financial cost. Trading desks across the world are now driven by AI-powered software, replacing the manual punching of orders, and a software glitch can be ruinous. In 2012, errant trading software sent American brokerage firm Knight Capital on a buying spree of stocks at a total cost of around $7 billion. Knight was forced to sell the unwanted positions to Goldman Sachs, taking a staggering $440 million hit. The brokerage firm never recovered and was eventually acquired by rival Getco LLC, which shows what a wrong algorithm can do to an enterprise. There are hundreds of examples where an AI experiment has gone horribly wrong: from Uber’s autonomous car crash to Amazon’s recruitment tool to the shutdown of the Genderify platform last year, the world is replete with events where AI-powered systems failed miserably to deliver the intended outcome.
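The broader lesson from the Knight Capital episode is that autonomous systems need hard limits and a human escalation path. Below is a minimal Python sketch of that idea; the class, limits, and orders here are hypothetical illustrations, not any real brokerage’s risk controls.

from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int
    price: float

class PreTradeGuardrail:
    """Hypothetical pre-trade check: hard limits plus human escalation."""

    def __init__(self, max_notional: float = 1_000_000, max_quantity: int = 10_000):
        self.max_notional = max_notional      # assumed per-order dollar cap
        self.max_quantity = max_quantity      # assumed per-order share cap
        self.held_for_review: list[Order] = []

    def submit(self, order: Order) -> bool:
        notional = order.quantity * order.price
        if order.quantity > self.max_quantity or notional > self.max_notional:
            # Do not route to the exchange; park the order for a human.
            self.held_for_review.append(order)
            print(f"HELD for human review: {order.symbol} "
                  f"qty={order.quantity} notional=${notional:,.0f}")
            return False
        print(f"Routed to exchange: {order.symbol} qty={order.quantity}")
        return True

guard = PreTradeGuardrail()
guard.submit(Order("ACME", 500, 20.0))       # within limits: routed automatically
guard.submit(Order("ACME", 200_000, 20.0))   # breaches limits: held for a human

The point is not the specific thresholds but the design: the software can act fast within a safe envelope, while anything unusual stops and waits for a person.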

It’s the man behind the machine:


Given the risks arising from AI-powered systems, we come back to the core question: do humans still hold primacy in any technology-powered solution? The answer is a resounding yes. It’s still the man behind the machine. Research indicates that AI projects fail for multiple reasons, including a lack of proper skills, limited understanding of the technology within the company, and budget limitations. Any AI system is only as good as the data fed to it: the quality of that data and the way the algorithm is trained determine the system’s success. Therefore, the engineers designing and developing an AI system play a critical role in getting tangible results. Moreover, past rollouts of AI-based software show that the biases of the human beings designing a system usually creep into the solution and surface in its behaviour, so it is critical to keep unconscious biases at bay when training the algorithm. A black-swan event like COVID-19 highlights how easy it is to overlook (or fail to foresee) some scenarios while building models, and how devastating the consequences of such blind spots can be.
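To make the “only as good as its data” point concrete, here is a minimal Python sketch of a pre-training data audit; the function name, thresholds, and columns are assumptions for illustration, not a substitute for a full fairness review.

import pandas as pd

def pre_training_audit(df: pd.DataFrame, label_col: str,
                       sensitive_col: str, max_skew: float = 3.0) -> list[str]:
    """Flag basic data-quality and representation problems before training."""
    warnings = []

    # 1. Missing values degrade whatever the model learns.
    missing = df.isna().mean()
    for col, frac in missing[missing > 0.05].items():
        warnings.append(f"{col}: {frac:.0%} missing values")

    # 2. A badly under-represented subgroup is how unconscious bias creeps in.
    counts = df[sensitive_col].value_counts()
    if len(counts) > 1 and counts.max() / counts.min() > max_skew:
        warnings.append(f"{sensitive_col}: largest group is "
                        f"{counts.max() / counts.min():.1f}x the smallest")

    # 3. Heavy label imbalance makes "always predict the majority" look accurate.
    label_share = df[label_col].value_counts(normalize=True).max()
    if label_share > 0.9:
        warnings.append(f"{label_col}: {label_share:.0%} of rows share one label")

    return warnings

# Hypothetical hiring dataset, skewed on purpose:
df = pd.DataFrame({
    "experience": [1, 5, 3, None, 7, 2, 4, 6],
    "gender": ["M"] * 7 + ["F"],
    "hired": [1, 1, 0, 1, 1, 1, 1, 0],
})
for w in pre_training_audit(df, label_col="hired", sensitive_col="gender"):
    print("WARNING:", w)

Run on this toy data, the audit flags the missing experience values and the seven-to-one gender skew before any recruitment model is trained, which is exactly the kind of early human checkpoint the failures above called for.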

Against this backdrop, any organisation deploying AI-powered software should have the technological bandwidth to do so. If these capabilities are not available internally, it should select a lead technology partner with sound experience in designing, developing, and deploying such solutions.

Know the beast:

A successful outcome from AI-powered projects hinges on knowing the technology well. Organisations need skilled professionals internally, or a lead technology partner, to implement AI-powered systems seamlessly. Since privacy violations, discrimination, accidents, and the manipulation of political systems are among the disastrous outcomes of deploying faulty AI-driven solutions, enterprises should take all preventive measures to mitigate such risks. No doubt, the coming decade belongs to AI, and the organisations that know the technology well will reap the maximum benefit.

The article has been written by Sanjeev Dahiwadkar, Founder and CEO of ITShastra
