Navigating Krutrim's quandary: Addressing odd AI behavior and ethical concerns

Delving deeper into the potential reasons why Ola's Krutrim might be behaving oddly, and how its indigenous focus could contribute to biases.

DQINDIA Online

The recent emergence of generative AI models such as Ola's Krutrim and Google's Gemini has sparked discussions regarding their potential impact on the market, particularly in comparison to established models like ChatGPT. However, amidst this excitement, it's crucial to assess the potential negative repercussions associated with the current issues faced by these advanced AI systems. Gaurav Sahay, Practice Head (Technology & General Corporate), Fox Mandal & Associates, sheds more light on Ola Krutrim's prospects and how the new models may affect ChatGPT's hold on the market.

Gaurav Sahay

DQ: Why is Ola’s Krutrim behaving oddly? Is it because it's in the early development stage, or could its indigenous focus be a contributing factor to its biases?

Gaurav Sahay: Ola’s Krutrim, heralded as "India's premier full-stack AI solution," recently debuted its generative AI chatbot in public beta. However, the rollout has been tarnished by incidents where the chatbot erroneously attributed its development to OpenAI. Users raised concerns after encountering instances where the chatbot credited OpenAI as its creator during interactions. The Krutrim team acknowledged the issue, attributing it to a data leakage problem originating from one of the open-source datasets utilized in the Large Language Model (LLM) fine-tuning phase.
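Screening a fine-tuning corpus for exactly this kind of provenance leakage is straightforward in principle. The sketch below is illustrative only: the JSONL file name, record layout, and phrase list are assumptions, not details of Krutrim's actual pipeline. It flags training examples that carry another vendor's identity strings before they reach the fine-tuning run:

```python
import json

# Hypothetical provenance strings that should not survive into the
# fine-tuning corpus of an independently branded assistant.
LEAKY_PHRASES = [
    "i am chatgpt",
    "trained by openai",
    "developed by openai",
]

def screen_corpus(path: str) -> list[dict]:
    """Flag JSONL records (one example per line) containing leaky phrases."""
    flagged = []
    with open(path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, start=1):
            text = json.loads(line).get("text", "").lower()
            hits = [p for p in LEAKY_PHRASES if p in text]
            if hits:
                flagged.append({"line": line_no, "phrases": hits})
    return flagged

if __name__ == "__main__":
    # "finetune.jsonl" is a placeholder for the real dataset file.
    for item in screen_corpus("finetune.jsonl"):
        print(f"line {item['line']}: {item['phrases']}")
```

A literal phrase list is crude; a production pipeline would pair it with fuzzy matching and manual review, but even this level of screening would catch the verbatim self-attributions users reported.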

Ola's Krutrim, being an AI system, could exhibit odd behaviour for a variety of reasons. In the early stages of development, AI systems often encounter issues related to incomplete training data, algorithmic imperfections, or unexpected interactions with users or the environment. Additionally, biases in AI systems can arise from various sources, including the data used to train them and the design decisions made during their development. If Ola's Krutrim has a strong focus on indigenous contexts or datasets, it's possible that biases related to those contexts could manifest in its behaviour.

Delving deeper, the potential reasons why Ola's Krutrim might be behaving oddly, and how its indigenous focus could contribute to biases, fall into two broad categories:

(a) Early development stage issues. At this stage, AI systems often encounter: (i) insufficient or unrepresentative data, which leaves gaps in the AI's understanding and causes it to make incorrect or unexpected decisions; (ii) algorithms that are not yet fully optimized or that contain bugs resulting in unexpected behaviour; and (iii) unforeseen interactions, where users engage with the AI in ways the developers did not anticipate, leading to unexpected outcomes.

(b) Biases in indigenous contexts. If Ola's Krutrim has a specific focus on indigenous contexts, biases could arise because: (i) the datasets used to train the AI may not adequately represent the diversity of indigenous cultures, leading to biases in its understanding and decision-making; (ii) indigenous cultures may have unique customs, languages, or social norms that the AI may not fully comprehend, potentially leading to misunderstandings or inappropriate responses; and (iii) if the teams developing Ola's Krutrim themselves lack diversity, they may unintentionally overlook or misunderstand important cultural nuances, leading to biased outcomes. A simple coverage audit of the training data, sketched below, is often the first step in spotting such gaps.
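To make the data-representation point concrete, here is a minimal coverage audit, assuming a JSONL corpus whose records carry a `lang` metadata tag (both the file name and the field name are illustrative). Heavily skewed percentages are an early warning that some language communities are underrepresented:

```python
import json
from collections import Counter

def language_coverage(path: str) -> Counter:
    """Tally examples per language tag in a JSONL corpus; records
    without a 'lang' field are counted as 'unknown'."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            counts[json.loads(line).get("lang", "unknown")] += 1
    return counts

if __name__ == "__main__":
    counts = language_coverage("finetune.jsonl")  # placeholder path
    total = sum(counts.values())
    for lang, n in counts.most_common():
        print(f"{lang}: {n} examples ({100 * n / total:.1f}%)")
```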

To address these issues, developers must continuously monitor and refine AI systems like Krutrim to ensure they behave ethically and accurately across various contexts, typically by combining several safeguards: (a) bias detection and mitigation, using techniques such as fairness testing and algorithmic audits to identify and address biases in the AI system (a minimal probe of this kind is sketched below); (b) diverse training data, drawing on representative datasets, including those from indigenous communities, to reduce biases and improve the AI's understanding of different cultural contexts; (c) ethical guidelines and standards, which ensure the system behaves responsibly and respects the rights and values of all users; and (d) ongoing monitoring and refinement, which are crucial for identifying and addressing issues, including biases and odd behaviour, as they arise over time.
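One simple form of the fairness testing mentioned in (a) is a counterfactual probe: send the model paired prompts that differ only in a group-identifying term and compare the responses. The sketch below is an assumption-laden illustration; `query_model` is a stand-in for whatever inference API the team actually exposes, and the template and group list are invented for the example:

```python
# Counterfactual fairness probe: vary only the group term and compare outputs.
PROMPT_TEMPLATE = "Describe a typical wedding in a {group} community."
GROUPS = ["Santhal", "Gond", "Punjabi", "Tamil"]  # illustrative only

def query_model(prompt: str) -> str:
    # Stand-in for the real inference call (e.g. an HTTP request to the
    # model endpoint); replace before running a real audit.
    return f"[model response to: {prompt}]"

def probe(template: str, groups: list[str]) -> dict[str, str]:
    """Collect one response per group for side-by-side comparison."""
    return {g: query_model(template.format(group=g)) for g in groups}

if __name__ == "__main__":
    for group, response in probe(PROMPT_TEMPLATE, GROUPS).items():
        print(f"{group}: {response}")
```

Reviewers (or a downstream classifier) then check whether response length, tone, refusal rate, or factual quality varies with the group term rather than with the question itself.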

By adopting these strategies, developers can work towards creating AI systems like Krutrim that are more accurate, fair, and respectful of diverse cultural contexts. This may involve investing in improved algorithms, expanding and diversifying training data, enhancing quality control processes, and soliciting feedback from users to identify and address pain points. By continually refining and improving the performance of Ola's Krutrim, the company can bolster user trust, enhance the user experience, protect its reputation, and maintain a competitive position in the market.

DQ: Could hallucinations or information gaps hinder the company’s long-term proposition?

Gaurav Sahay: Hallucinations or information gaps in an AI system like Ola's Krutrim can indeed pose significant challenges to the company's long-term proposition. If users experience hallucinations or encounter significant information gaps when interacting with Ola's Krutrim, they may lose trust in the system. Trust is crucial for the widespread adoption and long-term success of AI technologies. If users perceive the system as unreliable or unpredictable, they may be hesitant to use it, which could hinder the company's growth and sustainability.

Hallucinations or information gaps can also lead to a poor user experience. Users expect AI systems to provide accurate and helpful responses to their queries or requests. If Ola's Krutrim consistently fails to meet these expectations, users may become frustrated and seek alternative solutions, potentially leading to a loss of customers for the company.

Persistent issues with hallucinations or information gaps could also damage Ola's reputation as a provider of AI-powered services. Negative word-of-mouth from dissatisfied users or negative media coverage could harm the company's brand image and make it more challenging to attract new customers or partnerships in the future.

In today's competitive landscape, companies must differentiate themselves by offering innovative and reliable products or services. If Ola's Krutrim struggles with hallucinations or information gaps while competitors' AI systems perform more effectively, Ola may fall behind in the market and lose its competitive edge.
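A common engineering response to the hallucination risk Sahay describes is a self-consistency check: sample the same question several times at non-zero temperature and flag answers that drift between samples, since confident factual answers tend to stay stable. The sketch below uses a simple token-overlap heuristic, and `sample_model` is a stub; both the stub and the review threshold are assumptions for illustration:

```python
def sample_model(prompt: str, n: int = 5) -> list[str]:
    # Stand-in for n real sampled completions (temperature > 0);
    # replace with actual model calls before use.
    return [f"answer variant {i} to: {prompt}" for i in range(n)]

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two answers."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def consistency_score(prompt: str, n: int = 5) -> float:
    """Mean pairwise overlap across n samples; low scores suggest the
    model is guessing rather than recalling a stable fact."""
    samples = sample_model(prompt, n)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(jaccard(samples[i], samples[j]) for i, j in pairs) / len(pairs)

if __name__ == "__main__":
    score = consistency_score("Who developed Krutrim?")
    print(f"consistency: {score:.2f}")  # e.g. flag answers below ~0.5 for review
```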

DQ: Will new models like Krutrim, or even Google’s Gemini, break ChatGPT’s current hold on the market?

Gaurav Sahay: The emergence of new AI models like Ola's Krutrim and Google's Gemini certainly has the potential to impact the market landscape, including ChatGPT's position. New AI models bring innovation and competition to the market, encouraging existing players like ChatGPT to continually improve and innovate. Models like Krutrim and Gemini may offer specialized capabilities or cater to specific use cases or industries. Depending on their unique features and strengths, they could attract users who require those particular functionalities, potentially leading to a shift in market share.

The AI market is diverse, with various segments and niches. While ChatGPT currently holds a significant share of the market, new models like Krutrim and Gemini could carve out their own niches or target specific user demographics, thereby diversifying the market and offering alternatives to users. Established brands like Google may have an advantage in terms of brand recognition and trust, which could influence users' decisions when choosing between different AI models. However, newer models can still gain traction by offering unique value propositions or addressing specific pain points.

User preferences and requirements evolve over time, driven by factors such as technological advancements, changing market dynamics, and shifting societal needs. As new AI models enter the market, they have the opportunity to align more closely with evolving user preferences and capture market share accordingly.

Overall, while new models like Krutrim and Gemini pose competition to existing players like ChatGPT, they also contribute to the overall growth and diversification of the AI market. Success in this dynamic landscape largely depends on factors such as innovation, differentiation, user adoption, and ongoing adaptation to changing market dynamics.

DQ: What are the negative repercussions that could emerge from the current issues being faced by these gen AI models?

Gaurav Sahay: The current issues faced by generative AI models like ChatGPT, Krutrim, and Gemini can have several negative repercussions. If these AI models exhibit biased behaviour, generate inappropriate content, or fail to provide accurate information, users may lose trust in the technology. This loss of trust can hinder adoption and usage of AI-powered services, impacting the companies behind these models and the broader AI industry.

Issues such as biased outputs, misinformation generation, or harmful content creation raise significant ethical concerns. These concerns can lead to public backlash, regulatory scrutiny, and calls for stricter oversight of AI technologies. Failure to address ethical issues can damage the reputation of companies developing these models and lead to legal and regulatory consequences.

Biased or polarizing content generated by AI models can exacerbate social division and polarization. If these models amplify existing biases or spread misinformation, they can contribute to societal discord, undermine democratic processes, and foster distrust among different groups. Biases in AI models can also disproportionately affect marginalized communities, exacerbating existing inequalities and discrimination. If these models are not adequately trained on diverse and representative data or fail to account for the needs of all users, they can perpetuate systemic biases and further marginalize vulnerable populations.

Negative publicity surrounding AI models can also affect investor confidence, stock prices, and overall market perception. Companies developing these models may face financial losses, reduced market valuation, and challenges in attracting investment and partnerships.

Continued issues with AI models could prompt governments and regulatory bodies to intervene with stricter regulations and oversight. Increased regulation can impose compliance burdens on companies, limit innovation, and stifle growth in the AI industry. Concerns about the negative consequences of AI models may lead companies to adopt a more cautious approach to innovation, fearing reputational damage or regulatory backlash. This could stifle creativity and slow down the development of AI technologies with potentially beneficial applications.

These issues require a concerted effort from stakeholders, including AI developers, researchers, policymakers, and civil society organizations. It's essential to prioritize ethical considerations, promote transparency and accountability, and ensure that AI technologies serve the best interests of society as a whole.
