Target’s Brad Thompson explains the rise of reliable, responsible agentic AI in retail

Brad Thompson explains how Target is scaling agentic AI across service, data, and operations to boost reliability, strengthen guardrails, and elevate guest experience.

Shrikanth G

Brad Thompson, SVP, Technology, Target

Agentic Artificial Intelligence (AI) is reshaping retail, but few organisations operate it at real production depth and scale. Target is one of them. With millions of guests engaging across stores, supply chains, digital touchpoints, and service channels, the retailer has quietly built one of the sector’s most mature multi-agent architectures. From orchestrated service flows and autonomous decision support to reliability engineering and Responsible AI (RAI) guardrails spanning the US and India, Target’s approach balances innovation with operational discipline. Much of this work is driven by its Bengaluru Global Capability Centre, home to engineering, data science, and AI teams that ship production-grade systems for the global business.

In this conversation with Dataquest, Brad Thompson, Senior Vice President of Technology at Target, discusses real agentic AI use cases, reliability at peak scale, data stewardship, organisational design, and the evolving role of Target in India. Excerpts:

Reliability is the sticking point. How do you make multi-agent workflows dependable at scale during peak moments like holiday rushes, when latency and failure cascades can hit checkout, fulfilment, and support at once?

Reliability starts with getting the fundamentals right. We have built reliability into the agent control plane, which we think of as the operating system for agents: it is where agents are developed, governed, and have their behaviour defined.

We have added ways to monitor the quality and performance of interactions, both as they happen and after the fact. We also run automated evaluations of our agents, using datasets and guardrails that check how they perform across different contact types.
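To make the pattern concrete, the following is a minimal, hypothetical sketch of such an evaluation harness; the dataset fields, banned-phrase guardrail, and scoring rule are illustrative assumptions, not Target’s actual pipeline.

```python
# Hypothetical sketch of an offline agent-evaluation harness.
# Dataset fields, guardrail list, and scoring rule are illustrative only.
from dataclasses import dataclass

@dataclass
class EvalCase:
    contact_type: str      # e.g. "return", "price_match", "order_status"
    prompt: str
    expected_intent: str   # label used to score the agent's routing decision

BANNED_PHRASES = ["guaranteed refund", "share your password"]  # example guardrail list

def violates_guardrails(response: str) -> bool:
    """Flag responses containing phrases the guardrails disallow."""
    return any(phrase in response.lower() for phrase in BANNED_PHRASES)

def evaluate(agent, cases: list[EvalCase]) -> dict:
    """Replay labelled contacts through the agent and tally accuracy and violations."""
    correct, violations = 0, 0
    for case in cases:
        intent, response = agent(case.prompt)  # agent returns (routed intent, reply text)
        correct += int(intent == case.expected_intent)
        violations += int(violates_guardrails(response))
    return {
        "intent_accuracy": correct / len(cases),
        "guardrail_violations": violations,
    }

# Usage: a stub agent that always routes contacts to "order_status".
if __name__ == "__main__":
    stub_agent = lambda prompt: ("order_status", "Let me check that order for you.")
    cases = [
        EvalCase("order_status", "Where is my package?", "order_status"),
        EvalCase("price_match", "Can you match this price?", "price_match"),
    ]
    print(evaluate(stub_agent, cases))
```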

Our multi-agent system runs in its own runtime and is deployed as distributed agents. It scales easily and sits on solid, governed infrastructure that does not impact any of our other systems. We have strong observability and alerting to continuously monitor the behaviour of our AI agents.
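One simplified way to picture that observability and alerting is a per-agent monitor that tracks latencies and error rates and raises an alert when a threshold is crossed; the metric names and threshold values below are assumptions for illustration, not Target’s internal tooling.

```python
# Illustrative per-agent monitor: records latencies and errors and raises alerts.
# Thresholds and alerting behaviour are hypothetical examples only.
import statistics
import time

class AgentMonitor:
    def __init__(self, agent_name: str, p95_latency_ms: float = 500.0, max_error_rate: float = 0.02):
        self.agent_name = agent_name
        self.p95_latency_ms = p95_latency_ms
        self.max_error_rate = max_error_rate
        self.latencies_ms: list[float] = []
        self.errors = 0
        self.calls = 0

    def record(self, latency_ms: float, ok: bool) -> None:
        """Record one agent call and check alert thresholds."""
        self.latencies_ms.append(latency_ms)
        self.calls += 1
        self.errors += int(not ok)
        self._check_alerts()

    def _check_alerts(self) -> None:
        # Alert if the rolling p95 latency or the error rate crosses its threshold.
        if len(self.latencies_ms) >= 20:
            p95 = statistics.quantiles(self.latencies_ms[-100:], n=20)[18]
            if p95 > self.p95_latency_ms:
                print(f"[ALERT] {self.agent_name}: p95 latency {p95:.0f}ms exceeds {self.p95_latency_ms}ms")
        if self.calls >= 20 and self.errors / self.calls > self.max_error_rate:
            print(f"[ALERT] {self.agent_name}: error rate {self.errors / self.calls:.1%} exceeds threshold")

# Usage: wrap a (stubbed) agent call and record its outcome.
if __name__ == "__main__":
    monitor = AgentMonitor("returns-agent")
    start = time.monotonic()
    ok = True  # outcome of the stubbed call
    monitor.record((time.monotonic() - start) * 1000, ok)
```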

Where do you draw the line between agent autonomy and human oversight? Describe the guardrails, approval steps, and kill-switches you rely on when an agent makes a borderline decision.

Each agent operates within clearly defined responsibilities and strict, non-negotiable guardrails. When interactions involve higher risks, violate guardrails, or require sensitive handling, control seamlessly transitions to a human. For example, a price match beyond a defined threshold requires human approval; AI agents cannot process it independently. Whether it is handling sensitive return orders or potential fraud, these guardrails ensure the right balance between agent autonomy and human oversight. Our strong Responsible AI (RAI) policy upholds transparency, continuous monitoring, and escalation when necessary, ensuring autonomy is always paired with accountability.
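Taking the price-match example, the decision logic could resemble the sketch below; the threshold value, action types, and routing are hypothetical illustrations of the pattern described, not Target’s actual rules.

```python
# Hypothetical sketch of an autonomy/escalation check for a single agent action.
# The threshold, action types, and routing are illustrative only.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    AUTO_APPROVE = "auto_approve"   # agent may act on its own
    HUMAN_REVIEW = "human_review"   # hand control to a human
    BLOCK = "block"                 # hard guardrail: stop the action entirely

@dataclass
class AgentAction:
    action_type: str     # e.g. "price_match", "return", "refund"
    amount: float        # monetary value of the action
    fraud_flagged: bool  # set by an upstream risk check

PRICE_MATCH_HUMAN_THRESHOLD = 50.00  # example value, not an actual policy

def review(action: AgentAction) -> Decision:
    """Decide whether an agent can act autonomously or must escalate to a human."""
    if action.fraud_flagged:
        return Decision.BLOCK
    if action.action_type == "price_match" and action.amount > PRICE_MATCH_HUMAN_THRESHOLD:
        return Decision.HUMAN_REVIEW
    return Decision.AUTO_APPROVE

# Usage: a price match above the threshold is routed to a human.
if __name__ == "__main__":
    print(review(AgentAction("price_match", amount=120.0, fraud_flagged=False)))
```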

Retail data is fragmented and sensitive. How are you tackling data quality, privacy, and security for agentic AI across edge devices in stores, the cloud, and partner systems in India and the US?

We design our systems with governance and safety tools at the core. We have an enterprise team focused on RAI, data, and solution security that defines frameworks and audits our agents. Based on these frameworks, we follow a multi-layered approach to privacy, with encryption on the client side, in middleware, on the server side, and at the storage level. At the edge, we maintain a strong, secure system with minimal data retention. Agents also use tools such as content moderation and access control, and connect to memory stores in controlled ways. Our evaluation and data quality monitoring processes help maintain data integrity and safety.

For our partners, we enforce the same security and RAI standards, following identical compliance requirements and regular audits. We also place strong emphasis on traceability and auditability, ensuring every agent action can be reviewed and verified. This safeguards responsible data use and delivers consistent guest experiences.
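A minimal picture of that traceability is an append-only audit trail keyed by a trace ID, so any agent action can later be pulled up and reviewed; the field names and in-memory storage below are illustrative assumptions, not Target’s schema.

```python
# Illustrative append-only audit trail for agent actions, keyed by trace ID.
# Field names and in-memory storage are hypothetical; production systems would
# use durable, governed storage with retention controls.
import json
import time
import uuid

class AuditTrail:
    def __init__(self):
        self._records: list[dict] = []

    def log(self, agent: str, action: str, detail: dict) -> str:
        """Append one agent action to the trail and return its trace ID."""
        trace_id = str(uuid.uuid4())
        self._records.append({
            "trace_id": trace_id,
            "timestamp": time.time(),
            "agent": agent,
            "action": action,
            "detail": detail,
        })
        return trace_id

    def review(self, trace_id: str) -> str:
        """Return the logged record for a given trace ID, for audit review."""
        for record in self._records:
            if record["trace_id"] == trace_id:
                return json.dumps(record, indent=2)
        return "not found"

# Usage: log a refund action, then retrieve it for review.
if __name__ == "__main__":
    trail = AuditTrail()
    tid = trail.log("returns-agent", "issue_refund", {"order_id": "A123", "amount": 25.0})
    print(trail.review(tid))
```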

Build versus buy is a moving target. Which parts of your agentic stack are proprietary, which are partner-led, and how do you avoid lock-in while keeping speed of delivery?

Our build-versus-buy approach is guided by value and speed. We build technology internally in areas of established strength and collaborate with partners where their solutions provide differentiated value or accelerate delivery.

What role does Target in India play in this roadmap? Talk about the talent, org design, and upskilling that let the GCC ship production-grade agents, and what outcomes you expect in the next 12 months.

Our engineers, data scientists, and analysts here in India play a key role in bringing transformative retail solutions to life in partnership with our US teams. They embrace complex business challenges and use immersive tools and resources to gain in-depth insights into the US retail landscape and the consumer mindset.

Equally important is our culture of experimentation and innovation, which drives our teams to look for new approaches to create impact for the business, even from thousands of miles away.
