Today, AI is arguably the most discussed technological development. From conversational agents to explainable models, there seems to be little AI cannot do. But once again, the question arises: how much is too much?
A recent article in the International Journal of Human–Computer Interaction explores the various phases of AI and its implications for executives.
The role of AI innovation
There is a growing need for frugal AI innovation, particularly in developing countries such as India and Brazil. Context-specific, affordable AI solutions—supported by inclusive, macro-level policies—can address long-standing socio-economic and technical gaps. Yet, this domain remains underrepresented in prevailing international discussions on AI.
The study also contributes to the growing body of literature on AI ethics. It highlights the importance of addressing social and political concerns, the lack of digital access, and the urgent requirement for culturally embedded AI governance strategies in developing nations. These measures would enrich global AI policies and could radically improve how organisations approach challenges such as managerial intuition and the digital divide.
Adopting such frameworks could help shift leadership mindsets—countering psychological inertia and encouraging progressive, forward-thinking supervision. The study emphasises the importance of managerial trust: when managers feel safe expressing uncertainty at work, they are less likely to see AI as a mysterious ‘black box’. This openness is essential to preserving human oversight in high-level decision-making. Emotional factors, such as behavioural biases, often influence trust more than technological failures themselves. Thus, clear ethical frameworks and transparent communication are strategic imperatives for successful AI adoption at the top levels of management.
The potholes
The research also highlights a critical issue: senior executives often overlook the voices of middle and lower management—those primarily responsible for implementing high-level decisions. Many of these individuals lack technical expertise in AI, fear job displacement, and therefore experience a weakened sense of belonging within their organisations.
Key contributors to this disconnect include the absence of structured organisational training, a failure to address cognitive biases (such as anchoring and overconfidence) among non-executive managers, and a general disregard for their vulnerabilities.
To mitigate these challenges, senior leadership must actively listen to these concerns and craft AI-related policies that are culturally relevant and inclusive. A major concern raised in the study is accountability for AI—particularly when an AI-driven decision produces adverse outcomes for the firm. In such cases, both the organisation’s financial health and its reputation are at stake.
To address this, the study recommends that CEOs develop strategies aligned with collaborative human–AI decision-making models. Emphasis should also be placed on bias-reduction workshops and AI literacy programmes across all managerial levels.
Using AI’s capabilities to strengthen internal structures and support human potential can significantly improve employee morale, creativity, and adaptability—key ingredients for sustainable organisational growth.
The study calls for more longitudinal research to better understand the evolving interplay between AI and leadership. Scholars are encouraged to explore how managerial perspectives on AI implementation shape organisational decision-making, and how the use of AI in strategic planning may inadvertently trigger psychological and identity-based risks among middle and lower management—undermining their self-sufficiency, innovation, and learning capacity.
By Prof Anurag Chaturvedi
Assistant Professor – Strategy, Innovation, Entrepreneurship & CSR
Birla Institute of Management Technology