Here’s a strategic story for the new year: John is a salesman at a software firm and earns a salary of USD 2,000 a month. Owing to a general drop in software sales, the company announces that commissions on sales will be paid out only by midyear. John is therefore surprised to see that the company has credited USD 2,500 as salary for November 2024. He rejoices quietly and does not enquire why the additional amount was paid. He is alarmed, however, when he notices that his salary for December is only USD 1,500.
“I’m shocked to see that my salary was sliced in December,” John complains to the head of finance. “Why?”
“In November, you received an excess payment and didn’t complain,” the finance manager replies. “Why?”
“That’s because I’m a very forgiving man,” John says. “I always forgive the first mistake. But when you make a second mistake, that needs to be called out and corrected.”
If that quip made you giggle, these statistics should make you wriggle: AI could add between USD 2.6 trillion and USD 4.4 trillion annually to the global economy in productivity gains, estimates a McKinsey study. “This would increase the impact of all AI by 15% to 40%,” the firm says. “This estimate would roughly double if we include the impact of embedding GenAI into software that is currently used for other tasks beyond the known use cases.”
For comparison, India’s GDP in 2023 was estimated at USD 3.55 trillion.
So far, so good. But every silver cloud now has a dark lining. For instance, concern about AI-enhanced malicious attacks topped Gartner’s emerging risk rankings in Q1 and Q2 2024. New concerns regarding soft ransomware targets are also coming to the forefront of enterprise risks.
“Similar to AI-enhanced malicious attacks, soft ransomware targets require minimal experience and cost to cause significant financial and reputational damage,” says Gamika Takkar, a Gartner research director. “In Q2 2024, Gartner surveyed senior risk executives. Three of the top five most cited emerging risks were in the tech category, and a new concern on soft ransomware targets entered the tracker for the first time. Escalating political polarisation held steady as the third most cited concern, while misaligned organisational talent profile moved up from the fifth to fourth most cited risk.”
Reputation Risks
Why bother about corporate reputation? Because corporate leaders need to proactively protect their organisation’s reputation from external and internal GenAI threats.
“From easily disseminated deepfakes spreading disinformation to impersonation attacks and unintentional employee misuse, the implementation of GenAI is riddled with risks,” warns Amber Boyes, a Gartner director-analyst. “It is vital that communications leaders establish effective guardrails in their organisation to balance the unprecedented opportunity with the significant external and internal reputational threats GenAI poses.”
Companies and governments can’t ignore the AI tsunami. “Over the past 18 months, organisations of all sizes and industries engaged in extensive hyper-experimentation with AI,” notes IDC (International Data Corp). “In 2025, we anticipate a shift from experimentation to reinvention. This shift will be driven by the introduction of AI agents and by renovations in data, infrastructure, and cloud computing.”
Worldwide spending on AI-supporting technologies is on track to surpass USD 749 billion by 2028. As for 2025, about 70% of the projected USD 227 billion in AI spending will come from enterprises embedding AI capabilities into their core business operations, surpassing investments by cloud and digital service providers.
“In the evolving landscape of AI, the future hinges on our ability to not just experiment, but to strategically pivot—transforming experimentation into sustainable innovation,” says Rick Villars, IDC’s group vice president of research. “As we embrace AI, we need to prioritise relevance, urgency and resourcefulness to forge resilient enterprises that thrive in a data-driven world.”
What about the Indo-Pacific region? Nearly 60% of organisations in this region hope to realise the benefits of their AI investments within 2-5 years. Only 11% expect immediate returns within the next two years, according to a study commissioned by IBM and conducted by a Singapore-based digital research firm, Ecosystm.
“The focus of AI investments is shifting beyond employee productivity and customer experience towards broader strategic goals such as innovation and impact on company financials,” says Sash Mukherjee, Ecosystm’s vice president. “Traditional ROI metrics struggle with AI’s long-term, intangible benefits and high upfront costs. While proofs of concept (PoCs) validate feasibility, they often overlook scaling complexities and true costs. A holistic costing strategy involving business, tech, data and finance teams would be ideal to factor in hardware, software, and staff expenses throughout the project lifecycle.”
AI Trust Gap
The AI trust gap can be understood as the sum of the persistent risks, both real and perceived, associated with AI. Ergo, depending on the application, some risks loom larger than others. These risks span both predictive machine learning and GenAI. According to the Federal Trade Commission, consumers are voicing concerns about AI, while businesses are worried about several near- to long-term issues.
A dozen AI risks that are among the most commonly cited are, in alphabetical order: Accountability and ethics, black box problems, concerns about bias and transparency, disinformation, environmental impact of AI, fear about safety and security, government overreach, hallucinations, instability, job loss, key unknowns, and LLM concentration by large industry players.
“Taken together, the cumulative effect of these risks contributes to broad public skepticism and business concerns about AI deployment,” Dr Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University and author of The Slow Pace of Fast Change, wrote in the Harvard Business Review in May 2024. “This, in turn, deters adoption.”
For instance, radiologists hesitate to embrace AI when the black box nature of the technology prevents a clear understanding of how the algorithm makes decisions on medical image segmentation, survival analysis, and prognosis. Ensuring a level of transparency on the algorithmic decision-making process is critical for radiologists to feel they are meeting their professional obligations responsibly, but that necessary transparency is still a long way off.
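To make the black box point concrete, here is a minimal sketch of one widely used transparency technique: a gradient-based saliency map, which highlights the input pixels a model’s prediction is most sensitive to. The tiny network and the random “scan” below are illustrative stand-ins, not a real radiology model, and saliency is only one of several explainability approaches.

```python
# Minimal gradient-based saliency sketch (illustrative assumptions only:
# a toy CNN and a random tensor standing in for a medical image).
import torch
import torch.nn as nn

# Hypothetical stand-in for a black-box imaging classifier.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g. two diagnostic classes
)
model.eval()

# Placeholder "scan"; requires_grad lets us trace the prediction back to pixels.
scan = torch.rand(1, 1, 64, 64, requires_grad=True)

logits = model(scan)
predicted = logits.argmax(dim=1).item()

# Backpropagate the predicted class score to the input.
logits[0, predicted].backward()

# Per-pixel gradient magnitude is the saliency map: larger values mark
# regions the prediction is most sensitive to.
saliency = scan.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64])
```

In practice, such maps are paired with clinical review, so radiologists can sanity-check whether the highlighted regions match domain knowledge before trusting the output.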
The crux? While many corporate leaders understand at a high level that new skills are required to work with GenAI, their sense of how these changes might create value is often vague and underinformed. “So the decisions that seem bold on paper—such as buying hundreds of GenAI tool licenses for developers—are made without a clear understanding of the potential gains and with insufficient training of developers,” notes a McKinsey study. “The result: predictably poor outcomes.”
Important roles throughout the enterprise—from data scientists and experience designers to cyber experts and customer service agents—will need to learn an array of new skills. Businesses hoping to operate like software companies will also need to pay special attention to two key roles: the engineer and the project manager. “Too often, conversations focus on which roles are in or out, while the reality is likely to be more nuanced and messy,” McKinsey says. “Determining what skills matter to the business and its strategy is a long-standing leadership responsibility.”
Black Box Paradox
The black box problem is just one of many risks. Given similar issues across applications and industries, one should expect the AI trust gap to be permanent, even as we get better at reducing the risks. This has three major implications, Dr Chakravorti writes.
- First, no matter how far we get in improving AI’s performance, AI’s adopters—users at home and in businesses, decision-makers in organisations, policymakers—must traverse a persistent trust gap.
- Second, companies need to invest in understanding the risks most responsible for the trust gap affecting their applications’ adoption and work to mitigate those risks.
- Third, pairing humans with AI will be the most essential risk-management tool, which means we shall always have a need for humans to steer us through the gap—and the humans need to be trained appropriately.
The bottom line for corporate leaders? Adopt comprehensive AI evaluation frameworks, assess financial metrics alongside broader impacts like job roles and data governance, and select the right use case. This involves a two-step process: prioritising with structured assessments and evaluating technical feasibility. That includes examining data usability, infrastructure, digital investments, process readiness, and resource needs.
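As an illustration of that two-step process, here is a minimal sketch: each candidate use case gets a business-value score (step one) and is then gated on technical-feasibility criteria such as data usability and process readiness (step two). The criteria names, scores, and threshold are assumptions for the example, not a standard framework.

```python
# Two-step use-case screening sketch: rank by value, gate on feasibility.
# All names, scores and the threshold are illustrative assumptions.
from dataclasses import dataclass

FEASIBILITY_CRITERIA = (
    "data_usability", "infrastructure", "digital_investment",
    "process_readiness", "resourcing",
)

@dataclass
class UseCase:
    name: str
    value_score: float   # step 1: structured business-value score, 0-10
    feasibility: dict    # step 2: criterion -> rating (0-10) from tech review

    def is_feasible(self, threshold: float = 5.0) -> bool:
        # Every feasibility criterion must clear the bar to pass step 2.
        return all(self.feasibility.get(c, 0) >= threshold
                   for c in FEASIBILITY_CRITERIA)

candidates = [
    UseCase("GenAI code assistant", 8.5,
            dict.fromkeys(FEASIBILITY_CRITERIA, 7)),
    UseCase("Customer-facing chatbot", 7.0,
            {**dict.fromkeys(FEASIBILITY_CRITERIA, 6), "data_usability": 3}),
]

# Rank feasible use cases by business value; infeasible ones drop out.
shortlist = sorted((u for u in candidates if u.is_feasible()),
                   key=lambda u: u.value_score, reverse=True)
for u in shortlist:
    print(f"{u.name}: value {u.value_score}")
```

Here the chatbot scores well on value but fails the data-usability gate, which is precisely the kind of case a value-only ranking would wrongly greenlight.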
“Traditional ROI metrics struggle with AI’s long-term, intangible benefits and high upfront costs,” the IBM study notes. “While PoCs (proofs of concept) validate feasibility, they often miss scaling complexities and true costs. To address this, embrace more nuanced evaluation approaches by balancing tangible and intangible benefits. A holistic costing strategy should involve business, technology, data and finance teams to account for infrastructure, hardware, software and personnel expenses, right across the project lifecycle.”
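A back-of-the-envelope version of that holistic costing might look like the sketch below, which rolls up hardware, software, and personnel costs across a three-year lifecycle and discounts intangible benefits for uncertainty. All figures and the 0.5 confidence weight are illustrative assumptions.

```python
# Minimal lifecycle-costing sketch. All figures are illustrative
# assumptions; the point is the shape of the calculation, not the numbers.
costs = {
    "hardware":  [120_000, 20_000, 20_000],    # USD per year
    "software":  [60_000, 60_000, 60_000],     # licences and tools
    "personnel": [200_000, 150_000, 150_000],  # build, then run
}
tangible_benefits = [0, 250_000, 400_000]      # measurable savings/revenue
intangible_benefits = [0, 100_000, 150_000]    # estimated brand/CX value

# Haircut intangible estimates for uncertainty (assumed 50% confidence).
INTANGIBLE_WEIGHT = 0.5

total_cost = sum(sum(stream) for stream in costs.values())
total_benefit = (sum(tangible_benefits)
                 + INTANGIBLE_WEIGHT * sum(intangible_benefits))
roi = (total_benefit - total_cost) / total_cost

print(f"Three-year lifecycle cost: USD {total_cost:,}")
print(f"Weighted benefit:          USD {total_benefit:,.0f}")
print(f"Three-year ROI:            {roi:.1%}")
```

On these assumed numbers the three-year ROI is still negative, which is exactly the high-upfront-cost pattern the study describes; a longer horizon or scaled-up benefits changes the picture.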
Since we started with a strategic story, let’s end with a stupid one: One weekday, a tipsy gentleman was hauled up in court for disorderly conduct in a public place. “Do you know why you’ve been brought here today?” the judge asked sternly.
“No, your honor,” the drunk replied. “The cops told me that we’re going for a ride, and I accompanied them.”
The judge glared at him. “Stop swaying and stand up straight. You’ve been brought here for drinking.”
The inebriated man laughed. “Fantastic, your honor! When do we get started?”
Raju Chellam is a former Editor of Dataquest and is currently based in Singapore, where he is the Editor-in-Chief of the AI Ethics & Governance Body of Knowledge, and Chair of Cloud & Data Standards.
maildqindia@cybermedia.co.in