
Why Enterprises Are Struggling to Walk the Talk on Responsible AI

Study by HCLTech and MIT Technology Review Insights highlights a significant gap in responsible AI adoption, with 87% of executives recognizing its importance but 85% feeling unprepared.

By Aanchal Ghatak

The world of AI is advancing at lightning speed, yet on responsible AI, enterprises are playing catch-up. A recent report from HCLTech and MIT Technology Review Insights reveals a stark paradox: while 87% of business executives recognize the critical need for responsible AI principles, a staggering 85% admit they are unprepared to implement them effectively.


Released during the World Economic Forum's Annual Meeting in Davos, the report, titled Implementing Responsible AI in the Generative AI Age, sheds light on the widening chasm between intent and readiness.

The Readiness Gap: What’s Holding Enterprises Back?

The study identifies several key obstacles hindering the adoption of responsible AI:

  • Complexity of implementation: Enterprises struggle with the technical intricacies of embedding responsible AI.
  • Lack of expertise: A shortage of skilled professionals is slowing progress.
  • Regulatory challenges: Navigating evolving compliance requirements adds layers of difficulty.
  • Operational risks: Concerns about bias, fairness, and disruption loom large.
  • Resource allocation: Many organizations are not investing adequately in responsible AI frameworks.

As Steven Hall, President of Europe and Chief AI Officer at ISG, said, "Everybody understands how transformative AI is going to be and wants strong governance, but the operating model and funding allocated to responsible AI are well below where they need to be."

The Silver Lining: Enterprises Are Taking Action


Despite the challenges, there’s optimism on the horizon. The report notes that most executives plan to increase investments in responsible AI within the next 12 months. This shift in mindset could herald a new era of AI adoption, characterized by trust and accountability.

Key recommendations from HCLTech to bridge the gap include:

  1. Building robust frameworks: Enterprises need clear guidelines for ethics, safety, sustainability, and compliance.
  2. Leveraging tech partnerships: Collaborating with ecosystems to pilot and refine responsible AI initiatives.
  3. Establishing dedicated teams: Centers of Excellence can drive cross-functional efforts to implement AI responsibly.

HCLTech is also leading by example: it has launched an Office of Responsible AI and Governance, focused on co-innovation and on developing intellectual property solutions aligned with ethical AI practices.

The High Stakes of Responsible AI

The urgency for responsible AI isn't just about compliance or ethics—it's a competitive advantage. According to the report, AI-driven transformation is moving from proof-of-concept to mainstream adoption, especially in customer service, software development, and marketing. However, the trust gap remains a critical barrier.


"AI can be a tremendous force for positive change," says Vijay Guntur, CTO & Head of Ecosystems at HCLTech. "But its full potential can only be realized when it can be trusted."

A Call to Action for the Generative AI Age

As generative AI continues to redefine industries, the need for responsible AI practices has never been more pressing. Organizations that fail to act risk not only operational and reputational damage but also the chance to lead in a rapidly evolving technological landscape.


The question isn’t whether enterprises should adopt responsible AI—it’s whether they can afford not to.
