
Generative AI has the power to transform industries and revolutionize the way we work and live. Deepak Pargaonkar, VP of Solution Engineering at Salesforce India, shares insights on how the company is leading the way in responsible Generative AI development.

Aanchal Ghatak



As Generative AI emerges as a transformative force across industries, Salesforce is at the forefront of ensuring its responsible development and deployment. Deepak Pargaonkar, Vice President of Solution Engineering at Salesforce India, outlines the company's commitment to ethical AI, emphasizing transparency, bias mitigation, and data privacy. With a vision to empower businesses of all sizes, Pargaonkar highlights how Salesforce's Trusted AI Principles pave the way for a future where Generative AI drives innovation without compromising ethical standards.

Generative AI, a powerful force in various industries, raises ethical concerns that demand careful consideration. In this exclusive conversation with Deepak Pargaonkar, Vice President of Solution Engineering at Salesforce India, we delve into how Salesforce approaches the responsible development and deployment of Generative AI solutions. From bias mitigation to data privacy, Pargaonkar sheds light on the ethical guardrails and guidelines that underpin their innovative strides. Excerpts from an interview:

Generative AI has shown tremendous potential in various domains, but it also raises concerns about ethical implications. How does Salesforce approach the responsible development and deployment of Generative AI solutions?


Salesforce takes a responsible and ethical approach to the development and deployment of Generative AI solutions. Our commitment lies in providing safe and accurate AI services to our customers while mitigating potential risks and ethical concerns. As with all of our innovations, we are embedding ethical guardrails and guidance across our products to help customers innovate responsibly. We see tremendous opportunities and challenges emerging in this space, and to ensure responsible development and implementation of generative AI, we’re building on our Trusted AI Principles with a new set of guidelines. Our guidelines for responsible generative AI aim to help users address potential challenges responsibly during development.

Ethics and bias in AI are critical considerations. What steps does Salesforce take to identify and mitigate potential biases in Generative AI models to ensure fairness and inclusivity?


Salesforce is bringing trusted generative AI to the enterprise. Our Office of Ethical and Humane Use of Technology is involved in every step of product development and deployment. We’ve created a set of guidelines specific to generative AI based on our Trusted AI Principles, an industry-leading framework to help companies think through how to thoughtfully work with generative AI. Salesforce has always had a multi-tenant architecture that ensures our customers have complete control over their data, and customers’ data never mixes. Our generative AI products are no different.

In the context of Generative AI, data privacy and security become paramount. How does Salesforce prioritize user data protection while leveraging large datasets to train these advanced models?

We provide several recommendations for enterprises to defend against bias, including ensuring they have consent for the data they’re using, being transparent when content has been created by AI, and creating guardrails that prevent some tasks from being fully automated. Our customers – and organizations in general – are responsible for evaluating the representativeness and bias in their data sets, because the models will be grounded in their own data and documents. We believe the benefits of AI should be accessible to everyone, but every company also needs to have strict policies and strategies in place before implementation to ensure they’re developing and using AI safely, accurately, and ethically.


Can you share any successful use cases where Generative AI has significantly improved processes or decision-making for Salesforce customers, while ensuring responsible AI practices?

At one of our events, a global customer, luxury retailer Gucci, spoke about testing and using Salesforce’s AI products to enhance its call centre employees’ performance. Call centre service agents were augmented into sales and marketing agents, giving them capabilities they didn’t have before. The goal is to enhance workers, not make them obsolete – the vision is “human touch powered by technology”. Case handling efficiency with Einstein GPT (Service GPT) was 30% higher compared to users not using the technology – a very promising pilot.

Explainable AI is crucial for building trust and understanding model behaviour. How does Salesforce tackle the explainability challenge in complex Generative AI models?


We have a strong presence within the world’s largest corporations, connecting with billions of people through diverse channels. Our services cater to industries that significantly influence society in every aspect. To maintain the highest level of trust, it is crucial that all our solutions demonstrate mission-critical reliability. Although we recognize the potential of generative AI, building trust in its capabilities remains a challenging endeavor. We are helping users validate the accuracy of generated content, understand the confidence behind that content, and, of course, keep a human in the loop. Through explainable AI, we are looking at citing sources and shedding light on the reasoning behind generated outputs, such as with chain-of-thought prompts that may lead to the kinds of outputs seen in the media so far.

In the fast-paced technology landscape, regulations surrounding AI are evolving. How does Salesforce navigate the legal and regulatory landscape to ensure compliance with responsible AI principles?

In the ever-advancing landscape of AI adoption, establishing trust becomes paramount, and Salesforce places it as a top priority. To ensure responsible AI usage, we support risk-based AI regulation that differentiates contexts and uses of the technology and assigns responsibilities based on the different roles that various entities play in the AI ecosystem. We remain dedicated to embedding ethical guardrails and providing guidance throughout our product offerings, aligning with our commitment to responsible innovation. Moreover, we are actively building on our Trusted AI Principles, developing a new set of guidelines that specifically address the responsible development and implementation of generative AI. By emphasizing trust, responsible practices, and continuous improvement, we aim to enable our customers to leverage AI technologies while upholding high ethical standards.

Finally, what is your vision for the future of Generative AI and its impact on Salesforce's products and services, while upholding the highest standards of responsible AI development?

We believe Generative AI has the potential to change every industry in the future. It has the power to transform the way we live and work in profound ways and will challenge even the most innovative companies for years to come. While today big companies are racing to leverage generative AI, we believe in the future it can also help small- and medium-sized businesses (SMBs) sell smarter and more efficiently, regardless of a company’s resources. For example, SMBs can use generative AI through our Sales Cloud to streamline the sales process and close more deals faster and with fewer resources. When it comes to responsible AI development, Salesforce has been working on generative AI for years and our views have remained the same. We’ve created a set of guidelines specific to generative AI based on our Trusted AI Principles, an industry-leading framework to help companies think through how to thoughtfully work with generative AI.
