Are we expecting too much from artificial intelligence?

The use of artificial intelligence across different business functions has been increasing, and organisations are planning to increase their AI investments.

Over the years, the use of artificial intelligence across different business functions has been increasing. From bots and digital assistants performing routine tasks to online recommendation engines, high-end data aggregators, and customer-focused tools enabling critical business decisions, AI applications are now everywhere. It is therefore not surprising to see a sudden spurt in AI investment as the COVID-19 pandemic gradually recedes.

A recent Gartner poll of business and IT professionals reveals that 24% of the respondents have increased their AI investments, while 75% will continue or start new AI initiatives over the next few months as they move into the post-pandemic renewal phase. “Enterprise investment in AI has continued unabated despite the crisis,” says Frances Karamouzis, Research Vice President at Gartner.

“However, the most significant struggle of moving AI initiatives into production is the inability for organisations to connect those investments back to business value,” adds Karamouzis. Clearly, investments in artificial intelligence have not always met expectations. McKinsey’s Global AI Survey report also shows that many companies plan to invest even more in AI in response to the COVID-19 pandemic and its acceleration of all things digital. However, when it comes to mitigating the risks of AI, most companies still have a long way to go.

While AI-based apps have performed exceedingly well in certain aspects of business, there are many areas where they can play only a limited role. Therefore, it would be naïve to expect artificial intelligence to provide a magical solution for all critical business issues. Business and IT leaders must clearly understand the limitations of artificial intelligence before making any large investments.

Expectation vs. reality: the big gap

It is often seen that the expectations of top management driving the use of AI in an organisation are quite different from the understanding of end-users implementing it. According to the McKinsey report, some of the biggest gaps between AI high-performers and others are not only in technical areas, such as using complex AI-based modeling techniques, but also in the human aspects of AI, such as the alignment of senior executives around AI strategy and adoption. This poses the biggest hurdle to the effective implementation of AI solutions.

Creating an effective AI strategy requires close collaboration with the business functions that are going to use it. Unless end-users are fully involved in building the solution, it may not address their real problems.

Limitations of data

The accuracy or success of an AI solution is also dependent on the quality of information that is fed into it. For example, if customer data is not collected properly or does not clearly reflect a trend, it would be difficult to draw inferences or make market predictions. This, in turn, would impact business planning and decision-making.

The data provided should also be relevant to the specific needs of the organisation; otherwise, even the most sophisticated AI solutions will not be able to deliver the desired outcomes. In such cases, expecting AI to wade through any type of data and do all the work is only going to cause further disappointment. Even where appropriate data is available, some amount of manual processing, curation, and expert intervention may be required.

Need for supervision

For many years, proponents of artificial intelligence would have us believe that AI-enabled robots were going to take over the world. The reality is far from it. Despite the advent of driverless cars, smart bots, and other robotic tools, AI has not yet reached a stage where it can work without supervision. There have been cases in the past where AI bots picked up unethical or inappropriate language and used it in customer interactions. It could therefore be highly risky to leave them completely unattended.

In fact, many scientists maintain that despite AI’s spectacular performance in many areas, it may never be possible for it to replace human-level intelligence. Even the most powerful data mining tools with great processing abilities need the intervention of domain experts to interpret data and draw meaningful conclusions. Organisations that embark on digital transformation without the guidance and vision of experts often end up in confusion and frustration.

Intelligent or stupid?

Apart from relevant data inputs, AI solutions depend on the frameworks and algorithms that drive them. If these are not set up properly, they could produce misleading results or stupid recommendations that end up annoying customers. For example, if you have searched for some niche content just once, the search engine keeps showing similar topics even if you are no longer interested. Haven’t we all seen how shopping sites start recommending groceries or toiletries just after you have purchased them, without realising that you may not need them again for some time?

The case is similar with chatbots and personalised digital assistants that are programmed to answer a limited set of questions. They may be able to handle a large volume of simple customer enquiries but can become quite frustrating when a customer has a slightly more complex problem.

The weak spots

Gartner predicts that by 2023, more than one in ten workers will seek to trick AI systems used to measure employee behaviour and productivity. Whit Andrews, Research Vice President at Gartner, explains, “Just as we’ve seen with every technology aimed at restricting its users, workers will quickly discover the gaps in AI-based surveillance strategies. Some may even see tricking AI-based monitoring tools as more of a game to be won than disrespecting a metric that management has a right to know.”

Moving into the post-pandemic phase, many more organisations are looking at deploying AI-enabled systems to analyse employee activities, much as such systems were earlier used to understand customer behaviour. They could use login data, activity tracking, alerts, or other sophisticated tools to measure employee productivity. As these tools become more prevalent, Gartner predicts that organisations will increasingly face workers who seek to evade or fool AI systems by generating false or confusing data.

There is no doubt that artificial intelligence has tremendous potential and can be a great facilitator of digital transformation. AI tools can significantly cut down the time spent on monotonous, mundane tasks, freeing it up for more critical business needs. However, it is equally important to set expectations right and understand the limitations of AI. While artificial intelligence can be a great partner on the path to digitalisation, it should not be seen as a substitute for human intelligence and creativity. A clever bot can offer you good advice, but it will obviously lack kindness, empathy and human emotion!

Shweta Verma

Shweta is a former Executive Editor of Dataquest and an independent content development professional.
