Sudhakar Singh, Chief AI Security Officer, SAP
AI is transforming many industries, and it is equally vital to ensure trust, transparency, and accountability. Systems can meet these ethical requirements through deliberate design – “Trust by Design.” The framework does not just address bias, explainability, and compliance; it creates AI that is both powerful and responsible. As industries such as healthcare, finance, and retail become ever more reliant on AI for decision making, trust must be instilled at every level of development.
As AI becomes deeply integrated into critical industries, trust isn’t a luxury—it’s a necessity. In this interview, Sudhakar Singh, Chief AI Security Officer at SAP, explains how "Trust by Design" ensures AI systems are transparent, secure, and ethically sound from inception, addressing concerns like bias, accountability, and regulatory compliance.
How do you conceptualize "Trust by Design" within the context of AI development at SAP Labs India?
"Trust by Design" is at the heart of our Responsible AI approach, a fundamental pillar of our Business AI strategy. It is a holistic approach encompassing not only the AI technology itself but also the entire ecosystem around it. This involves implementing safeguards throughout the AI lifecycle – from design and development to operations and user education – to ensure safe, ethical, and reliable AI solutions.
We proactively anticipate potential risks, prevent misuse, and continuously assess our AI systems against evolving ethical considerations, security challenges, and regulatory frameworks. Responsible AI is a commitment woven into every stage of our AI development process, ensuring that trust is not an afterthought but a design principle.
What are the fundamental principles and components that constitute this approach?
SAP's Responsible AI framework is built on three interconnected pillars: AI ethics, AI security, and AI regulatory compliance.
AI ethics define the 'what' – the guiding principles for responsible AI development and deployment. AI security focuses on the 'how' – the practical controls and safeguards to protect against misuse and vulnerabilities. AI regulatory compliance ensures the 'proof' – demonstrating adherence to legal and industry standards.
This integrated approach, implemented from the inception of any AI use case, ensures trust is embedded from day one, mitigating risks related to bias, transparency, and accountability, particularly in high-stakes scenarios like healthcare and finance.
Could you share specific frameworks or methodologies you employ to embed trust throughout the AI lifecycle?
Our AI ethics framework is rooted in UNESCO’s Recommendation on the Ethics of Artificial Intelligence, encompassing principles like privacy, fairness, sustainability, human oversight, accountability, transparency, and user education. We combine this framework with robust cloud security principles throughout our AI lifecycle, from initial design and model benchmarking to deployment validation and ongoing monitoring for model drift and system health.
This holistic approach ensures "Trust by Design" and allows for continuous improvement based on user feedback and evolving industry best practices.
Which industries are leading in adopting "Trust by Design," and what lessons can others learn from them? How does trust in AI systems vary across industries like healthcare, finance, or retail?
While "Trust by Design" is essential across all industries, software companies, being at the forefront of AI innovation, often prioritize Responsible AI as a critical business requirement. SAP introduced its Global AI Ethics Guidelines in 2018, and they now govern all aspects of our AI development and deployment. A key takeaway is that while AI holds immense potential, it demands careful oversight and continuous user education, especially in high-risk sectors like healthcare and finance.
Given that AI is stochastic, the potential for errors must be acknowledged and mitigated, particularly in scenarios with significant consequences. Ultimately, AI should be seen as a powerful advisor rather than an autonomous decision-maker. Its recommendations must always be vetted, contextualized, and complemented by human expertise to ensure trust, reliability, and accountability.
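To make the advisor pattern concrete, here is a minimal sketch of a human-in-the-loop gate. The names and the confidence threshold are illustrative assumptions, not SAP code:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    action: str        # what the model proposes
    confidence: float  # model's self-reported confidence in [0, 1]
    rationale: str     # human-readable explanation of the proposal

def apply_with_oversight(rec: Recommendation,
                         human_review: Callable[[Recommendation], bool],
                         confidence_floor: float = 0.95) -> bool:
    """Treat the model as an advisor: recommendations below the confidence
    floor are routed to a human reviewer instead of executing automatically."""
    if rec.confidence >= confidence_floor:
        return True              # auto-accept only well-supported advice
    return human_review(rec)     # otherwise defer to human expertise
```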
Are there particular stages in AI development where integrating trust is most challenging? How do you address these challenges?
While data quality is crucial, the most significant challenges arise when deploying AI in high-risk situations where errors can have severe repercussions. In industries like healthcare and finance, explainability and transparency become critical to ensuring AI-driven decisions are reliable and accountable. We address this by ensuring our business applications offer transparency into the AI's decision-making process.
We provide users with visibility into underlying data sources, reasoning logs, and key decision-making factors, enabling them to understand, validate, and even correct AI-driven outputs. This transparency promotes trust and facilitates continuous improvement based on user feedback.
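As an illustration of the kind of transparency described here, a hypothetical output envelope might bundle the result with its evidence. This is a sketch with invented field names, not an SAP API:

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedOutput:
    """Pairs an AI-driven result with the evidence behind it, so users
    can understand, validate, and correct the output."""
    result: str                                                  # the output itself
    data_sources: list[str] = field(default_factory=list)        # inputs consulted
    reasoning_log: list[str] = field(default_factory=list)       # intermediate steps
    top_factors: dict[str, float] = field(default_factory=dict)  # factor -> weight

out = ExplainedOutput(
    result="Flag invoice 4711 for manual review",
    data_sources=["erp.invoices", "vendor_master"],
    reasoning_log=["amount is 3.2x the vendor's average",
                   "bank details changed in the last 30 days"],
    top_factors={"amount_zscore": 0.61, "recent_bank_change": 0.39},
)
```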
How can organizations measure and showcase trust in their AI systems to stakeholders? Are there established metrics or benchmarks for this?
Measuring AI trustworthiness is a continuous process that spans the entire AI lifecycle. This includes model benchmarking before deployment, security and ethical risk assessments during development, validation before production release, and ongoing monitoring of system health and model drift post-deployment.
While frameworks like MLCommons provide functional and safety metrics, the field is rapidly evolving, with security and privacy KPIs becoming integral to broader benchmarking standards.
At SAP, we take a comprehensive approach, combining these methodologies to ensure our AI systems meet rigorous standards for reliability, security, and ethical performance, reinforcing stakeholder confidence in our technology.
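One widely used post-deployment drift check is the Population Stability Index, which compares the distribution of model scores seen in production against the training-time baseline. The sketch below is a generic implementation of that metric, not SAP's internal tooling:

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               live: np.ndarray,
                               bins: int = 10) -> float:
    """PSI compares binned score distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch closely, > 0.25 investigate."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_cnt, _ = np.histogram(baseline, bins=edges)
    l_cnt, _ = np.histogram(live, bins=edges)
    b_pct = np.clip(b_cnt / b_cnt.sum(), 1e-6, None)  # avoid log(0) below
    l_pct = np.clip(l_cnt / l_cnt.sum(), 1e-6, None)
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores at validation time
live = rng.normal(0.5, 1.0, 10_000)      # shifted scores in production
print(f"PSI = {population_stability_index(baseline, live):.3f}")  # reveals the shift
```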
What proactive measures are in place to adapt to new regulations, and how do they influence your AI development processes?
SAP's Global AI Ethics policy acts as our primary checkpoint, ensuring all products and applications comply with relevant regulations. We have a rigorous review process for all AI development, particularly for high-risk use cases that undergo additional scrutiny by our ethics committee.
In some cases, AI applications are modified to reduce risk, while others may be discontinued altogether if they pose unacceptable risks to data integrity or customer trust.
For instance, to mitigate bias in recruiting applications, we implement architectural safeguards, rigorous bias testing, and continuous monitoring to uphold fairness and transparency. By proactively embedding compliance, ethics, and accountability into our AI lifecycle, we ensure that regulatory shifts strengthen our commitment to responsible AI development.
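As one example of routine bias testing in a recruiting context, the "four-fifths" disparate impact check compares selection rates across groups. The following is a minimal, generic sketch with invented data, not SAP's actual bias-testing pipeline:

```python
import numpy as np

def disparate_impact_ratio(selected: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest to the highest selection rate across groups;
    values below 0.8 are conventionally flagged for review."""
    rates = [selected[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical screening outcomes: 1 = candidate advanced to interview
selected = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratio = disparate_impact_ratio(selected, group)
if ratio < 0.8:
    print(f"Potential adverse impact: disparate impact ratio = {ratio:.2f}")
```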
Given the phenomenon of algorithm aversion, where users may distrust AI systems after observing errors, how does SAP work to build and maintain user trust? What strategies are employed to ensure transparency and reliability in AI outputs to mitigate such aversion?
Building trust in AI, especially in the face of algorithm aversion, requires transparency, explainability, and continuous user engagement. Users are more likely to distrust AI if they don’t understand how it arrived at a decision—especially when errors occur. We address this by ensuring our AI-powered applications provide clear insights into data sources, the reasoning process, and contributing factors behind AI-driven outputs.
This explainability empowers users to understand, validate, and provide feedback, fostering a cycle of continuous learning and improvement.
Looking ahead to 2025 and beyond, what steps is SAP taking to ensure that its AI strategies remain robust and trustworthy in the face of rapid technological and regulatory changes?
Looking ahead, SAP is committed to staying ahead of technological and regulatory changes through a dynamic, adaptable AI strategy. Our approach ensures that all SAP products harness the power of AI technologies while remaining agile enough to incorporate industry advancements.
Given that our products are already aligned with diverse regulatory frameworks, we can typically integrate emerging AI regulations with incremental adjustments rather than major overhauls. We are dedicated to continuously refining our technical, ethical, and usage policies to meet evolving standards, ensuring that our AI solutions remain not only cutting-edge but also robust and trustworthy.
How do you foresee the concept of "Trust by Design" evolving in the coming years?
We're already witnessing the rise of AI systems overseeing other AI systems, similar to the specialization and regulation within human societies. We’re moving towards sophisticated agentic AI frameworks, where certain AI agents will act as real-time auditors, identifying and correcting biases, anomalies, and unintended behaviors in other AI models. This will not only enhance transparency and accountability but also instill greater confidence in AI-driven decisions, paving the way for more responsible AI.
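To give a flavor of such an auditor layer, here is a minimal hypothetical sketch in which independent audit agents can hold back another model's output. The structure and names are illustrative, not a specific SAP framework:

```python
from typing import Callable, Iterable, Optional

Auditor = Callable[[str, str], Optional[str]]  # (prompt, output) -> finding or None

def with_auditors(model: Callable[[str], str],
                  auditors: Iterable[Auditor]) -> Callable[[str], str]:
    """Wrap a model so auditor agents review every output in real time,
    holding back responses they flag as biased or anomalous."""
    def run(prompt: str) -> str:
        output = model(prompt)
        for audit in auditors:
            finding = audit(prompt, output)  # None means no issue found
            if finding is not None:
                return f"[output held for review: {finding}]"
        return output
    return run

# Toy usage: an auditor that flags unhedged absolute claims
flag_absolutes = lambda p, o: "unhedged absolute claim" if "always" in o else None
safe_model = with_auditors(lambda p: "This vendor is always compliant.",
                           [flag_absolutes])
print(safe_model("Assess vendor risk"))  # -> [output held for review: ...]
```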