Ankor Rai, CEO of Straive
As enterprises accelerate generative and agentic AI adoption, many pilots continue to stall before reaching production scale. In this conversation with DataQuest, Ankor Rai, CEO of Straive, argues that the barrier is rarely model capability. Instead, the real constraints lie in content readiness, governance design, and the operationalisation of human oversight.
Drawing on Straive’s work across regulated sectors such as Life Sciences and banking and financial services (BFS), Rai outlines why production-grade AI demands structured knowledge infrastructure, risk-tiered human-in-the-loop systems, and disciplined operating models that treat AI not as a project, but as enduring business infrastructure.
MIT research suggests most GenAI pilots stall. In your experience at Straive, is this failure a result of the technology itself, or a failure of “content readiness”?
The stalling of GenAI pilots is rarely a technology problem. Models have matured rapidly and are more capable than many enterprises are prepared to use effectively; initiatives break down over content, data, and knowledge readiness. Many organisations assume that if information exists, it is usable for GenAI, but enterprise content is often fragmented, inconsistently structured, poorly contextualised, and not governed for machine consumption.
During pilots, this gap is less visible because datasets are curated, but scaling exposes the full complexity of enterprise knowledge. Conflicting versions, missing context, outdated material, and unclear ownership reduce performance and erode confidence, not because models are incapable, but because the knowledge they depend on is unreliable at scale. Pilots that progress treat content as infrastructure, investing in structure, metadata, contextual signals, and governance so knowledge remains usable as it evolves. Without that foundation, GenAI struggles to move beyond experimentation.
When discussing the AI last-mile problem, which components (data, governance, or human-in-the-loop) most often break down when an agent moves from a sandbox to a live environment?
The last-mile problem is rarely caused by a single failure. Scale exposes breakdowns across governance, data, and human-in-the-loop processes, and how those are managed determines whether AI survives real operations. Governance strain appears first, while data and human-oversight failures become the most visible. In sandbox environments, governance is implicit: teams rely on proximity and judgment. In production, that alignment disappears, ownership and escalation clarity weaken, and systems lose a stable operating context.
Data issues surface quickly as curated pilot datasets give way to continuous inputs from multiple sources with varying quality. Without ownership and monitoring, inconsistencies compound silently, and outcomes are blamed on models rather than operating environments. Human-in-the-loop processes struggle to keep pace with scale. Successful deployments treat HITL as a tiered operating structure with explicit thresholds, roles, and escalation paths. Pilot-style broad review collapses under volume; effective systems route only low-confidence or high-risk outputs for human intervention. Escalation triggers such as repeated low-confidence outputs, distribution shifts, model disagreement, or downstream exceptions prevent both over- and under-escalation.
Post go-live, intervention evolves from stabilisation toward selective monitoring and exception handling. Organisations that succeed treat HITL as a dynamic control mechanism. Without this structure, oversight becomes a bottleneck, trust deteriorates, and operational discipline, not model capability, becomes the limiting factor. Mature enterprises treat the last mile as an operational challenge, defining governance early, clarifying data ownership, and engineering HITL as an evolving control system.
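As an illustration of the tiered routing Rai describes, a minimal sketch in Python might look like the following. The thresholds, risk tiers, and field names are assumptions for illustration, not Straive's actual configuration.

```python
from dataclasses import dataclass

# Hypothetical values; real thresholds would be calibrated per workflow and risk tier.
CONFIDENCE_FLOOR = 0.85
HIGH_RISK_TIERS = {"compliance", "patient_safety", "financial_exposure"}

@dataclass
class ModelOutput:
    payload: dict               # the agent's proposed result
    confidence: float           # model-reported confidence score
    risk_tier: str              # assigned by upstream business rules
    drift_flag: bool = False    # set by a monitoring job on distribution shift
    disagreement: bool = False  # set when a secondary model disagrees

def route(output: ModelOutput) -> str:
    """Return the handling path for one output: auto-approve,
    human review, or escalation to a senior reviewer."""
    # High-impact outputs always receive structured human validation.
    if output.risk_tier in HIGH_RISK_TIERS:
        return "human_review"
    # Escalation triggers: distribution shift or model disagreement.
    if output.drift_flag or output.disagreement:
        return "escalate"
    # Low-confidence outputs are routed to a reviewer.
    if output.confidence < CONFIDENCE_FLOOR:
        return "human_review"
    # Everything else passes with automated checks and downstream sampling.
    return "auto_approve"

if __name__ == "__main__":
    sample = ModelOutput(payload={"claim_id": "A-102"}, confidence=0.78, risk_tier="routine")
    print(route(sample))  # -> "human_review" (below the confidence floor)
```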
Straive claims to have achieved AI quality improvements exceeding 99%. In regulated sectors like Life Sciences or BFS, how do you maintain this level of accuracy without creating a human bottleneck?
For Straive, sustaining high accuracy in regulated environments depends on how human oversight is operationalised, not expanded indiscriminately. The focus is on designing review mechanisms aligned with regulatory risk, domain complexity, and scale.
In Life Sciences and BFS, Straive applies a risk-tiered oversight model: outputs affecting compliance, patient safety, or financial exposure undergo structured human validation, while lower-impact outputs rely on automated checks and statistical sampling. This maintains accuracy without creating a human bottleneck.
Human intervention strengthens the system over time by surfacing recurring patterns and edge cases that refine workflows and reduce future intervention. Production governance reinforces this through clear quality thresholds, validation rules, and escalation paths, ensuring review effort remains targeted and proportional to risk rather than transaction volume.
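To make the proportionality concrete, a brief sketch of risk-tiered sampling is shown below; the sampling rates are invented for illustration and do not reflect Straive's actual policy.

```python
import random

# Illustrative review rates per risk tier: high-risk outputs are always
# validated by a human, while lower tiers are spot-checked statistically.
SAMPLE_RATES = {"high": 1.0, "medium": 0.10, "low": 0.02}

def needs_review(risk_tier: str, rng: random.Random) -> bool:
    """Decide whether an output is pulled into human review, keeping
    review effort proportional to risk rather than transaction volume."""
    return rng.random() < SAMPLE_RATES.get(risk_tier, 1.0)  # unknown tiers default to review

rng = random.Random(42)
batch = ["low"] * 90 + ["medium"] * 8 + ["high"] * 2
reviewed = sum(needs_review(tier, rng) for tier in batch)
print(f"{reviewed} of {len(batch)} outputs routed to human review")
```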
Many enterprises underestimate the operational foundations. What does a “production-grade” human-in-the-loop model look like in 2026 compared to traditional BPO models?
A production-grade human-in-the-loop model in 2026 differs fundamentally from traditional BPO structures because its purpose is different. Traditional BPO models optimise volume efficiency, while modern human-in-the-loop models preserve judgment, manage risk, and continuously improve intelligent systems.
In traditional BPOs, humans execute predefined tasks repeatedly, with value measured through throughput, cost per transaction, and turnaround time. In production-grade AI systems, humans intervene selectively to resolve ambiguity, validate high-impact decisions, and handle exceptions requiring context or regulatory judgment, with value measured through outcome quality, system reliability, and reduced future intervention.
Learning compounds over time as every intervention is captured and fed back into the system, reducing repeated manual review. Operationally, human-in-the-loop teams function within defined governance frameworks, with explicit thresholds, escalation paths, and direct integration into production workflows to ensure consistency at scale.
In short, a production-grade human-in-the-loop model is not an extension of BPO but an operating capability combining domain expertise, governance, and system learning to support intelligent systems reliably.
Knowledge infrastructure is increasingly viewed as critical as cloud platforms. How should CTOs rethink content architecture to make it truly AI-ready?
CTOs need to move away from viewing content architecture as a storage or access problem and treat it as a knowledge system that actively supports decision-making. AI-ready content is defined not by volume, but by how reliably information can be understood, trusted, and applied by machines in real workflows.
The first shift is from monolithic repositories to modular knowledge structures, where content is broken into reusable units that enable precise retrieval rather than approximate answers. The second shift is towards context and metadata. AI systems require signals about relevance, recency, regulatory status, ownership, and intended use; without consistent metadata and version control, even accurate information can produce incorrect outcomes.
Governance is the third pillar. Content must evolve alongside the business, with mechanisms to validate, update, and retire knowledge so systems do not rely on outdated or conflicting information. Content architecture must also be tightly linked to workflows, ensuring knowledge supports decisions with greater reliability and accountability. In effect, CTOs must treat knowledge infrastructure as living infrastructure, maintained with the same discipline as cloud platforms and core systems; without this shift, even advanced AI capabilities struggle to deliver consistent value.
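For readers mapping this to implementation, a minimal sketch of a modular "knowledge unit" carrying the metadata signals Rai lists might look like this; the field names are illustrative, not a specific Straive schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class KnowledgeUnit:
    unit_id: str
    content: str                     # a reusable, precisely retrievable unit of content
    owner: str                       # accountable team or role
    regulatory_status: str           # e.g. "approved" or "under_review"
    intended_use: list[str] = field(default_factory=list)
    version: int = 1
    effective_date: date = date.today()
    retired: bool = False            # retired units are excluded from retrieval

def retrievable(unit: KnowledgeUnit, as_of: date) -> bool:
    """Only current, approved, non-retired units should reach the model."""
    return (not unit.retired
            and unit.regulatory_status == "approved"
            and unit.effective_date <= as_of)

unit = KnowledgeUnit("KU-001", "Dosage guidance ...", owner="MedInfo",
                     regulatory_status="approved", intended_use=["agent_retrieval"])
print(retrievable(unit, date.today()))  # True
```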
Enterprises often overinvest in the “brain” (the model) and underinvest in the “fuel” (the data). Where are you seeing the most significant gaps today?
The most significant gaps are not in data volume, but in how effectively data can be used in real workflows. Many enterprises have accumulated large datasets, yet much remains fragmented, inconsistent, or difficult to apply reliably once AI systems move beyond pilots.
A common issue is inconsistency across systems, where the same entities are defined differently, creating conflicting signals for models at scale. Data also changes faster than the processes used to maintain it, causing models to rely on outdated or misaligned inputs over time. These gaps reduce confidence in outcomes, particularly in high-stakes or regulated environments.
Enterprises that close this gap shift from acquiring more data to improving usability, consistency, and ongoing maintenance, a shift that, more than model sophistication, determines whether AI delivers sustained value in production.
India is transitioning from a back-office hub to a core capability centre for high-trust AI. How is Straive leveraging its Indian centres to manage governed, scalable AI workflows for global clients?
India’s role in global AI delivery has evolved, and Straive’s operating model reflects this shift. Indian centres now function as core capability centres supporting the full AI lifecycle, combining data preparation, analytics, and AI workflow design with domain expertise to enable high-trust AI at scale.
These teams contribute by structuring and enriching data, supporting analytics, and designing production-ready AI workflows, while ensuring reliability through output validation, exception handling, and quality support in regulated environments.
Governance and human oversight are embedded end-to-end, with review frameworks, escalation mechanisms, and quality thresholds enabling AI systems to scale globally with consistency, accountability, and trust.
You’ve seen transformation programs generate up to 8x ROI. Can you share a pattern or “aha” moment where a client shifted from a cost sink to high-return outcomes?
At Straive, the turning point occurs when clients stop treating AI as a supporting layer and start redesigning workflows around it. Early programs often bolt AI onto existing processes, increasing complexity and cost without materially changing outcomes. The shift happens when organisations re-architect how work gets done, clearly defining where AI owns decisions, where humans intervene, and how outcomes are measured. AI is then evaluated on business results such as risk reduction, throughput quality, or revenue impact, converting it from a cost sink into a high-return capability.
For example, a global payments provider facing coordinated fraud implemented a graph engine using Breadth-First Search (BFS) to identify multi-hop linkages and hidden fraud clusters, achieving a 27% reduction in fraud losses and approximately $3.5M in annual savings while strengthening real-time decision-making. In research and publishing, embedding AI into communication workflows, prioritisation, and multilingual support increased collection rates by 23–25% and reduced delinquency by 20–25%, improving agent productivity and cash flow.
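For context on the graph approach mentioned above, a minimal breadth-first search sketch over a hypothetical entity graph shows how multi-hop linkages surface; the account, device, and card identifiers are invented, and a production engine would operate on a far larger, continuously updated graph.

```python
from collections import deque

# Hypothetical entity graph: nodes are accounts, devices, and cards;
# edges represent shared attributes or transfers observed in payments data.
graph = {
    "acct_A": ["device_1", "card_X"],
    "device_1": ["acct_A", "acct_B"],
    "card_X": ["acct_A", "acct_C"],
    "acct_B": ["device_1"],
    "acct_C": ["card_X", "acct_D"],
    "acct_D": ["acct_C"],
}

def linked_entities(start: str, max_hops: int = 3) -> set[str]:
    """Breadth-first search from a flagged entity, returning every entity
    reachable within max_hops -- the basis of a hidden fraud cluster."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return seen - {start}

print(sorted(linked_entities("acct_A")))
# ['acct_B', 'acct_C', 'acct_D', 'card_X', 'device_1']
```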
Agentic AI has also created value in regulated industries. In pharma, multi-agent workflows evaluating clinical trial feedback improved signal reliability and shortened pharmacovigilance review cycles. A major US league unified fragmented fan data to enable personalisation at scale and improve engagement and retention. In logistics and supply chain, agentic AI converts information from multiple document formats into standardised insights, reducing manual effort and improving vendor decision-making.
As you lead Straive’s transition into a technology-driven partner, what is one industry reality about AI that you wish more CXOs would acknowledge today?
AI success depends more on operating discipline than technological sophistication. While technology has advanced rapidly, many enterprises remain structured to deliver projects rather than run AI as an ongoing business capability.
The gap emerges after deployment, when go-live is treated as the finish line instead of the start of sustained ownership. AI systems require clear accountability, defined decision boundaries, and structured interaction between human judgment and automation to maintain effectiveness and trust.
Organisations that see durable value treat AI as long-term operational infrastructure, designing operating models where governance, workflows, and human oversight evolve alongside the technology. Once CXOs recognise AI as long-term operational infrastructure rather than a one-time initiative, the quality of outcomes and returns improves significantly.