Exploring the Generative AI Landscape with Nagarro's Anurag Sahay

Aanchal Ghatak

In today's rapidly evolving technological landscape, generative AI is making significant strides across various sectors. Its ability to provide conversational experiences, semantic search capabilities, and advanced reasoning is revolutionizing how businesses operate and interact with their customers. In this interview with Dataquest, Anurag Sahay, Managing Director – AI and Data Science at Nagarro, shares real-world examples of generative AI applications, discusses its future potential, and explores how organizations can navigate its integration responsibly and ethically. Excerpts:


Can you share examples of how generative AI is utilized in real-world scenarios across different sectors? How do you envision its future applications evolving?

Generative AI leverages three critical attributes: conversational experience, semantic search capabilities, and reasoning abilities. These attributes continuously evolve with new advancements, enabling LLMs to process increasingly vast amounts of data and enhance their understanding of content and preferences. Ultimately, these developments aim to provide more human-like interactions with AI, blurring the lines between human and AI engagement in the future.

The two most extensively used Gen AI applications across industries are personalization and customer experience. These involve interactions where employees and customers engage with generative AI, which analyzes this data to understand individual preferences and provide tailored information aligned with those preferences. 


Another area where generative AI is widely utilized is supply chain optimization. Many companies rely on forecasting and inventory management in their supply chains, which involves analyzing large amounts of data. GenAI's conversational interface simplifies this process, translating complex analytics into a more understandable format. Quality control is another area where generative AI is frequently employed across different industries.

While these trends are prevalent, there are variations based on industry-specific requirements. For example, in financial services, generative AI enables personalization and seamless customer experiences for enhanced convenience and real-time information access. In government and the public sector, the focus is on real-time data analysis and automation. In healthcare, generative AI supports personalization and patient experience, while retail and CPG companies prioritize supply chain management.

Considering the rapid pace of advancements in AI, what do you see as the future landscape of generative AI development? How can organizations navigate this shift and integrate generative AI into the broader automation landscape?


If we examine the three abilities of generative AI—conversation, meaning, and reasoning—we anticipate significant modifications to all human-machine interactions. For instance, our current method of interacting with computers via keyboards will undoubtedly evolve. 

Instead, we'll likely communicate with a virtual agent using natural language, spoken or written. Tasks that currently require typing commands will become more seamless once AI agents are developed to facilitate these interactions.

Today, conducting searches often involves determining the most appropriate keywords. However, search capabilities are expected to become even more robust in the future, significantly reducing the cognitive effort required to conduct searches. 
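The shift described here, from choosing the right keywords to expressing intent directly, can be illustrated with a toy semantic search: documents and queries are mapped to vectors and ranked by cosine similarity rather than literal term overlap. This is a minimal sketch; the hand-made "embeddings" below are illustrative assumptions, not the output of any real model.

```python
import math

# Toy document "embeddings" (illustrative numbers, not from a real model)
docs = {"refund policy": [0.9, 0.1, 0.0],
        "shipping times": [0.1, 0.9, 0.1],
        "return an item": [0.7, 0.3, 0.1]}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def semantic_search(query_vec):
    """Rank documents by similarity to the query vector, best first."""
    return sorted(docs, key=lambda d: cosine(docs[d], query_vec),
                  reverse=True)

# A query like "how do I send something back?" shares no keywords with
# "refund policy", but its (assumed) embedding lands closest to it.
print(semantic_search([0.85, 0.15, 0.05])[0])  # -> refund policy
```

The point of the sketch is that relevance comes from vector proximity, not keyword matching, which is what reduces the cognitive effort of crafting the "right" query.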


This transformation envisions a future where human-machine interactions are interactive and intuitive. Agents comprehend our intentions, automating tasks like coding, creating documentation, and composing emails while capturing emotions and sentiments accurately.

How do you prioritize responsible and ethical AI practices in your work? How can the industry ensure privacy and intellectual property rights and maintain factual accuracy while leveraging generative AI technologies?

Like any technology, it's crucial to establish guardrails and governance policies to prevent misuse. While individuals bear responsibility, organizations and government bodies play a pivotal role in implementing these measures. Strict laws and higher penalties can help deter malicious uses of AI. It is vital to build an ecosystem with a clear understanding of AI's capabilities and limitations. This encompasses privacy protection, intellectual property preservation, and addressing questions about IP rights in AI usage. Observing the industry can guide the development of effective strategies based on best practices. It is critical that organizations proactively implement guidelines and policies to steer AI development in the right direction.


How do you mitigate biases in Gen AI models? What steps can organizations take to ensure fairness and inclusivity in developing and deploying these technologies?

A paramount area to address bias is the datasets used to train AI models or provide context. These datasets should reflect diversity and fairness, requiring a deep understanding of these concepts. For instance, in facial recognition, it's essential to include diverse faces for accurate recognition. AI teams are responsible for designing datasets fairly and inclusively.

Similarly, during model training, organizations must prevent biases from favoring one data type over another. Trainers need to possess a keen awareness of biases. When deploying models, continuous monitoring and evaluation are necessary to assess outcomes in real-world scenarios. As the world evolves, ongoing efforts are needed to address biases and ensure fairness and inclusivity in deployed models. This ongoing commitment is essential for organizations to effectively uphold fairness and inclusivity standards.


Transparency and explainability are crucial for building trust in AI systems. How do you address the need for transparency and explainability in Generative AI algorithms, especially considering their complex nature?

When constructing AI models, mathematical approaches are employed to discern the key features the model uses in making predictions. For instance, a standard AI model might base its decisions on features like age and salary. Analyzing the contribution of these features allows for an understanding of their role in the decision-making process. This analysis is relatively straightforward with models like decision trees and linear models.
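For a linear model, the "straightforward" analysis described above amounts to reading off each feature's contribution: coefficient times feature value. A minimal sketch, where the feature names, coefficients, and inputs are all illustrative assumptions:

```python
# Assumed linear scoring model: score = w . x + b (illustrative weights)
weights = {"age": 0.5, "salary": 1.5}
bias = -2.0

def explain(features):
    """Return each feature's contribution and the final score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values()) + bias
    return contributions, score

contribs, score = explain({"age": 3.0, "salary": 2.0})
print(contribs)  # {'age': 1.5, 'salary': 3.0}
print(score)     # 2.5
```

Because every prediction decomposes exactly into per-feature terms, a reviewer can see which feature drove the decision, which is precisely what becomes hard once the model is a deep network.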

However, as we delve into neural networks, the complexity increases significantly due to the sheer number of parameters involved, reaching into the billions and potentially trillions in the future. Research is underway to develop methods for assessing feature importance and its correlation with a neural network's output. Complex mathematical approaches are being explored to address this challenge.
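One widely used model-agnostic technique in this space is permutation importance: perturb one feature column and measure how much the model's error grows, without inspecting the model's internals. A minimal sketch under stated assumptions; the toy model and data are made up, and a deterministic rotation stands in for the usual random shuffling so the result is reproducible:

```python
def mse(model, X, y):
    """Mean squared error of the model on inputs X against targets y."""
    return sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx):
    """Increase in error after rotating one feature column by one row."""
    base = mse(model, X, y)
    column = [row[feature_idx] for row in X]
    rotated = column[1:] + column[:1]  # break the feature's link to y
    X_perturbed = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                   for row, v in zip(X, rotated)]
    return mse(model, X_perturbed, y) - base

# Toy "model" that depends only on feature 0 and ignores feature 1
model = lambda row: 2.0 * row[0]
X = [[1.0, 5.0], [2.0, 1.0], [3.0, 9.0]]
y = [model(row) for row in X]

print(permutation_importance(model, X, y, 0))  # 8.0 — feature 0 matters
print(permutation_importance(model, X, y, 1))  # 0.0 — feature 1 is ignored
```

The appeal for opaque models is that the technique treats the network purely as a black box, which is why variants of it appear in explainability tooling even when the model itself has billions of parameters.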


While this area of computer science may lag behind the advancements in neural network design, it remains a focal point for development. Efforts are ongoing to enhance the transparency and explainability of these complex models.

Companies like OpenAI incorporate explainability features into their models to improve decision-making and mitigate risks like malicious queries. This trend is expected to continue, with advancements to achieve even greater transparency and interpretability in AI models.

How do you incorporate a human-in-the-loop approach in developing and deploying Generative AI systems? What role do human oversight and intervention play in ensuring ethical outcomes?

In generative AI, humans are primarily involved through RLHF, or reinforcement learning from human feedback. When an AI model generates a prediction, human evaluators assess and rate it, providing feedback the model uses to learn and improve. This method is instrumental in integrating human input into model development and refinement. However, in some cases, AI models are replacing human feedback due to their superior performance in certain tasks.
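The feedback loop described above can be caricatured in a few lines: the "model" proposes outputs, a simulated human rater scores each one, and the scores feed back to shape future preferences. This is a deliberately toy sketch of the RLHF idea, not any vendor's training pipeline; the canned responses, ratings, and update rule are all illustrative assumptions.

```python
import random

# Canned candidate outputs and a simulated human evaluator (illustrative)
responses = ["curt answer", "helpful answer", "unsafe answer"]
human_rating = {"curt answer": 0.2, "helpful answer": 1.0,
                "unsafe answer": 0.0}

reward = {r: 0.0 for r in responses}  # learned preference scores
rng = random.Random(42)

for _ in range(200):
    choice = rng.choice(responses)    # model proposes an output
    feedback = human_rating[choice]   # human evaluator rates it
    # nudge the stored preference toward the human rating
    reward[choice] += 0.1 * (feedback - reward[choice])

print(max(reward, key=reward.get))    # helpful answer
```

Real RLHF trains a reward model on pairwise human preferences and then optimizes the language model against it, but the core loop — generate, rate, update — is the same shape as this sketch.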

The sophistication of these feedback mechanisms is expected to increase, encompassing human evaluators and AI models trained for specific purposes. Some companies, like Anthropic, are exploring innovative approaches to utilizing AI agents for feedback in model development. While human involvement remains essential, there are aspects where AI agents can provide more effective feedback. For instance, the challenge with RLHF lies in its timing, as feedback occurs after the AI model has already learned behaviors. Ideally, feedback should be provided earlier to guide the AI model's learning.

Early intervention is crucial in scenarios where ethical considerations are paramount. For instance, if an AI model is asked to provide instructions on creating a harmful device, the AI's ethical framework and past feedback should prompt it to refrain from providing such information. By sensing the intent behind a conversation early on, AI models can proactively steer discussions away from harmful topics. In summary, while RLHF plays a vital role in model refinement, efforts are underway to ground AI thinking earlier in the learning process, ensuring that feedback guides the AI model's development from the outset.

With the proliferation of data-driven technologies, how do you prioritize data privacy and security when working with Generative AI? What measures do you recommend for safeguarding sensitive information and preventing potential misuse?

The most crucial aspect of prioritization is how data is prepared. Anonymization is key; for instance, replacing personal names with generic identifiers or IDs ensures the absence of identifiable information. During data preparation, organizations must ensure that any privacy-specific information is removed from the dataset.
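The anonymization step described here, replacing personal names with generic identifiers, can be sketched in a few lines. The record format, field name, and ID scheme below are illustrative assumptions; real pipelines also handle indirect identifiers, not just names.

```python
def anonymize(records, sensitive_field="name"):
    """Replace each distinct name with a stable generic ID."""
    id_map = {}
    out = []
    for record in records:
        name = record[sensitive_field]
        if name not in id_map:
            id_map[name] = f"user_{len(id_map) + 1:04d}"
        clean = dict(record)              # copy; leave the input untouched
        clean[sensitive_field] = id_map[name]
        out.append(clean)
    return out

records = [{"name": "Alice", "score": 7},
           {"name": "Bob", "score": 3},
           {"name": "Alice", "score": 9}]
print(anonymize(records))
# [{'name': 'user_0001', 'score': 7}, {'name': 'user_0002', 'score': 3},
#  {'name': 'user_0001', 'score': 9}]
```

Note that the mapping is consistent — both "Alice" records get the same ID — so the dataset stays useful for analysis while the identifiable information is gone.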

For instance, facial recognition models can be trained on synthetic data – faces that don't exist. This synthetic data helps train the model effectively without compromising privacy and security. Once the model is established, access control becomes paramount. Governance plays a pivotal role, with all AI assets managed by a central governance framework within the organization. Unauthorized access must be prevented to maintain data integrity.

While controlling access to data tables and fields is relatively straightforward, managing access to AI models poses greater challenges. Nevertheless, companies are actively developing solutions to address this issue. Thus, adopting an end-to-end approach to AI, with a focus on privacy and security from the outset to the conclusion of the workflow, becomes increasingly vital.

Collaboration among stakeholders and adherence to industry standards are essential for promoting responsible AI practices. How do you advocate for collaboration and the establishment of ethical guidelines within the AI community?

The prevailing drive in the AI landscape is competition, prioritizing victory in the AI race over collaboration. This highlights a concerning trend: our world isn't naturally inclined towards collaboration, so the effectiveness of collaborative efforts remains to be seen.

There needs to be more collaboration among companies like OpenAI, Anthropic, and others. While research institutes and universities have shown better collaboration, the exorbitant costs have put universities on the sidelines, leaving large corporations like OpenAI, Microsoft, Meta, and others in control.

However, there's a glimmer of hope in the potential for companies to establish industry standards around governance. If a company like Anthropic develops a robust governance framework that garners widespread approval, others may follow suit. This prospect offers hope for improved collaboration among both individuals and companies. 

How can we enhance public understanding of Generative AI technologies and their ethical implications? What initiatives do you support to educate both professionals and the public about responsible AI usage?

Interest in AI education, primarily driven by career aspirations, is notably prominent in India, where individuals proactively enhance their skills. They engage in training programs focused on the latest technologies to bolster their job prospects and salary negotiations.

Consequently, an abundance of educational content has emerged across various platforms. Moreover, renowned organizations like Microsoft and Google have assumed a greater role in training and upskilling individuals, offering extensive courses and resources on generative AI publicly and free of charge. This multifaceted approach reflects the dynamic nature of advancements in this field.

From Nagarro's standpoint, we are actively prioritizing the upskilling of our workforce in this domain. We have launched comprehensive programs tailored for executives, business analysts, and engineers. These initiatives encompass a wide range of content and resources. Our organization consistently emphasizes the importance of generative AI proficiency, encouraging and motivating our employees to pursue training opportunities. Nagarro remains deeply committed to fostering awareness and education within our workforce.

What are your thoughts on the current regulatory landscape surrounding AI, particularly about Generative AI? How can policymakers and regulatory bodies adapt to the rapid pace of AI innovation while ensuring ethical standards are upheld?

We see that many people don't fully understand AI and Gen AI and the risks involved. We observe two extremes: some view AI as a dystopian technology that will lead to global doom, while others believe it is entirely beneficial and fail to acknowledge potential negative consequences. The truth lies somewhere in between.

We believe that educating policymakers and senior leaders is key to developing an appropriate regulatory landscape, as they hold significant ownership and responsibility in shaping this landscape.