AI in healthcare is the pursuit of insights, not perfection: CitiusTech

John Edwards, Senior VP at CitiusTech, shares insights on using AI in healthcare without perfect data, highlighting Agentic AI, data fragmentation, trust frameworks, and evolving regulations.

Punam Singh

In a rapidly evolving healthcare landscape, where data-driven decision-making is becoming increasingly crucial, John Edwards, Senior Vice President at CitiusTech, sheds light on the current challenges and opportunities in AI-driven healthcare solutions. In an exclusive interview, Edwards unpacks the flawed assumption that AI models need perfect datasets, discusses the growing role of Agentic AI, and offers practical advice for healthcare organisations navigating data fragmentation and regulatory shifts. His insights show how AI can transform healthcare even when data is far from perfect.


Why is the assumption of needing a perfect dataset flawed, particularly in the context of healthcare data integration? What are some common misconceptions healthcare organisations hold regarding end-to-end data integration?

This assumption is a common challenge in data-driven environments. A key observation from my career is that organisations often request more data once they begin deriving insights. My professional background, which includes data warehousing and training as a statistician, has taught me that the pursuit of perfect data can be counterproductive. As healthcare organisations transition to becoming data-driven, the focus should shift from collecting and cleansing vast amounts of data to using existing data through predefined algorithms to generate actionable insights.


In AI solutions, not every data point is necessary. What's important is having the right data to generate relevant, timely insights. Identifying core data that produces meaningful outcomes is crucial, especially when dealing with missing data points. We use techniques such as strategic data substitution—something I was trained in during my statistical studies. AI systems can now perform this function even more effectively than traditional statistical methods.
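For illustration, a minimal sketch of this kind of data substitution, using scikit-learn's SimpleImputer to fill missing vitals with a column median; the column names and values are invented, and the interview does not name specific tools:

```python
# A sketch of statistical data substitution: fill missing vitals with the
# column median. Column names and values are hypothetical.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

records = pd.DataFrame({
    "systolic_bp": [120, np.nan, 135, 128],
    "heart_rate": [72, 80, np.nan, 66],
})

imputer = SimpleImputer(strategy="median")  # a conservative default strategy
completed = pd.DataFrame(imputer.fit_transform(records), columns=records.columns)
print(completed)
```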

In healthcare, clinicians tend to trust only data from familiar devices or processes that have met established standards. Data from external or unknown sources is often met with scepticism. To leverage the expanding universe of healthcare data, organisations must adapt by learning to work with data produced beyond their immediate systems.

To aid this transition, we have developed a Data Quality Trust Framework. This framework helps clients identify suitable data for AI models, ensuring that the models generate traceable, consistent outputs. Transparency is key to this process. Although users might not understand the inner workings of AI models, like large language models or statistical methods, they need to trust how decisions are made. Ultimately, we must understand that data will never be perfect, yet humans constantly make decisions with incomplete information. The goal for organisations should be to make confident, informed decisions based on imperfect data.
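CitiusTech has not published the internals of its Data Quality Trust Framework, but a hypothetical sketch of source-level trust scoring can illustrate the general idea; every signal, weight, and name below is an assumption:

```python
# Hypothetical illustration of source-level trust scoring; this is NOT
# CitiusTech's Data Quality Trust Framework, whose internals are not public.
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    device_certified: bool   # e.g. meets an established clinical standard
    completeness: float      # fraction of expected fields present (0-1)
    lineage_known: bool      # can the value be traced back to its origin?

def trust_score(src: DataSource) -> float:
    """Combine simple signals into a 0-1 trust score for an incoming feed."""
    score = 0.4 if src.device_certified else 0.0
    score += 0.4 * src.completeness
    score += 0.2 if src.lineage_known else 0.0
    return score

wearable = DataSource("smartwatch_hr", False, 0.9, True)
print(f"{wearable.name}: trust={trust_score(wearable):.2f}")  # 0.56
```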


How can AI models be trained or adapted to function effectively in fragmented or incomplete data environments, particularly when working with legacy EMRs and siloed systems?

This is a critical issue, especially in clinical environments with legacy systems that lack interoperability. Despite the evolution of Electronic Medical Records (EMRs), the data they contain remains incomplete. EMRs generally focus on immediate clinical needs and often omit key aspects like genetic information, environmental conditions, and family health history, which significantly influence health outcomes.

When working with fragmented data, cohort analysis becomes invaluable. By grouping individuals with similar characteristics, we can identify patterns even when individual datasets are incomplete. Advanced statistical techniques, such as regression modelling, predict missing data based on existing information, creating reliable, representative models.
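As an illustration of regression-based imputation within a cohort, the sketch below uses scikit-learn's IterativeImputer, which models each feature from the others; the patient measurements are invented:

```python
# A sketch of regression-based imputation across a cohort, using
# scikit-learn's IterativeImputer (an experimental API); data is invented.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Rows are patients in one cohort; columns are correlated measurements
# (age, systolic blood pressure, a lab value).
cohort = np.array([
    [34, 120.0, 5.4],
    [41, 131.0, np.nan],   # missing lab value, predicted from the others
    [39, np.nan, 5.9],     # missing blood pressure
    [47, 142.0, 6.3],
])

imputer = IterativeImputer(random_state=0)
print(imputer.fit_transform(cohort))
```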


AI enhances this process by using more sophisticated imputation methods, though risks like “hallucinations” (where the AI generates inaccurate or non-existent data) can arise. Therefore, maintaining human oversight is essential to ensure the transparency and accuracy of AI-generated outputs.

We are also seeing an increasing interest in Agentic AI, which allows for some autonomy within AI systems. In healthcare, Agentic AI will initially be applied in non-clinical settings, where the consequences of decisions are less critical. As trust in these systems builds, AI’s role will gradually extend into clinical contexts, though always under clinician supervision. Over time, AI will help improve decision-making efficiency, but its role will complement, not replace, human judgement.

What role does an agentic system play in enabling AI to adapt in real-time? How do you see the shift from deterministic to agentic frameworks reshaping the AI deployment landscape, particularly in the healthcare sector, over the next three to five years?


Agentic AI represents one of the most promising developments in AI. It allows for learning models that continuously assess the outcomes of previous decisions, refining their recommendations over time. While Bayesian statistics attempted a similar approach by updating probability models, Agentic AI takes this concept much further.
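As a point of comparison, the Bayesian updating Edwards mentions can be shown in a few lines; the toy Beta-Binomial example below (the approval outcomes are invented) updates an estimated approval rate after each observed decision:

```python
# A toy Beta-Binomial update, the classic Bayesian mechanism alluded to
# above: each observed outcome shifts the estimated approval probability.
alpha, beta = 1.0, 1.0           # uniform prior over the approval rate
for outcome in [1, 1, 0, 1, 1]:  # 1 = approved, 0 = denied
    alpha += outcome
    beta += 1 - outcome
print(f"posterior mean approval rate: {alpha / (alpha + beta):.2f}")  # 0.71
```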

For example, in prior authorisation processes, Agentic AI can predict the likelihood of approval based on clinical guidelines and historical data. If a model predicts that a case has a 90% chance of approval, it could suggest automatic approval, but with human oversight. The model learns from the decision, whether it’s affirmed or denied, continuously refining itself based on real-world feedback.
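A hedged sketch of that pattern might look as follows; the features, thresholds, and model choice are all assumptions, not CitiusTech's implementation:

```python
# Sketch of the prior-authorisation pattern described above: a model scores
# approval likelihood, high-confidence cases are routed to suggested
# auto-approval (still under human oversight), and the final human decision
# is logged for retraining. All names and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical cases: [guideline_match, prior_denials] -> approved (1) or not (0)
X_hist = np.array([[1, 0], [1, 1], [0, 2], [1, 0], [0, 3], [1, 2]])
y_hist = np.array([1, 1, 0, 1, 0, 0])
model = LogisticRegression().fit(X_hist, y_hist)

feedback_log = []  # captured for the next retraining cycle

def triage(case: np.ndarray, threshold: float = 0.9) -> str:
    p_approve = model.predict_proba(case.reshape(1, -1))[0, 1]
    return "suggest auto-approval" if p_approve >= threshold else "manual review"

def record_outcome(case: np.ndarray, human_decision: int) -> None:
    feedback_log.append((case, human_decision))  # affirmations and denials alike

print(triage(np.array([1, 0])))
```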

Although AI will not take over complex clinical decisions immediately, it can significantly enhance clinician efficiency by automating routine tasks. For example, it can help clinicians navigate extensive clinical records, identify key decision points, and offer recommendations. Over time, as AI gains trust and clinicians adapt, it will play an even greater role in clinical decision-making.


Another evolving trend in healthcare is the concept of “gold carding.” This allows providers with a consistent history of correct approvals to be exempt from prior authorisation requirements. Agentic AI could assess whether a provider still qualifies for this exemption, introducing real-time performance monitoring and ensuring consistency in decision-making.
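A hypothetical gold-carding check could track a provider's rolling approval rate and revoke the exemption when it dips below a threshold; the window size and cutoff below are illustrative, not regulatory values:

```python
# Hypothetical gold-carding check: a provider keeps the prior-authorisation
# exemption only while their rolling approval rate stays above a threshold.
from collections import deque

class GoldCardMonitor:
    def __init__(self, window: int = 100, min_rate: float = 0.95):
        self.outcomes = deque(maxlen=window)  # 1 = approved, 0 = denied
        self.min_rate = min_rate

    def record(self, approved: bool) -> None:
        self.outcomes.append(1 if approved else 0)

    def still_exempt(self) -> bool:
        if not self.outcomes:
            return False
        return sum(self.outcomes) / len(self.outcomes) >= self.min_rate

monitor = GoldCardMonitor()
for _ in range(98):
    monitor.record(True)
monitor.record(False)
print(monitor.still_exempt())  # True: 98/99 is still above 0.95
```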

As these techniques mature, they will bring more scientific rigour to administrative and operational processes in healthcare, reshaping the industry by allowing AI to handle routine tasks and free clinicians to focus on complex, human-centric aspects of care.

What would be your recommendations for designing an AI-driven healthcare system that is resilient to data variability and integration gaps?


To design a resilient AI-driven healthcare system, the first step is implementing robust data quality checks at the point of data entry. Ensuring that data meets minimum quality standards is essential for the accurate functioning of both Agentic and Generative AI systems. Without proper validation, there’s a risk of making decisions based on flawed assumptions, especially when real-time operational data diverges from the training data.
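A minimal sketch of such point-of-entry validation, with invented required fields and plausibility ranges:

```python
# Point-of-entry data quality checks: reject or flag records that miss
# minimum standards before they reach any model. Fields/ranges are invented.
REQUIRED_FIELDS = {"patient_id", "systolic_bp", "heart_rate"}
PLAUSIBLE_RANGES = {"systolic_bp": (60, 250), "heart_rate": (25, 220)}

def validate(record: dict) -> list[str]:
    """Return a list of quality issues; an empty list means the record passes."""
    issues = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    for field, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is not None and not lo <= value <= hi:
            issues.append(f"{field}={value} outside plausible range [{lo}, {hi}]")
    return issues

print(validate({"patient_id": "p1", "systolic_bp": 400}))
# ['missing field: heart_rate', 'systolic_bp=400 outside plausible range [60, 250]']
```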

Continuous evaluation of AI model performance is also crucial. It’s not enough to focus on outcomes alone; AI models must be monitored regularly for performance in real-world conditions. This includes tracking the reliability of predictions and the model’s behaviour over time. When clinicians override or approve a model’s decision, this feedback loop must be captured to retrain and refine the model.
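One simple way to capture that feedback loop is to track clinician agreement with the model over a rolling window and flag drift when it degrades; the thresholds in the sketch below are assumptions:

```python
# Sketch of continuous model monitoring: compare live agreement between the
# model and clinicians and flag drift when it falls below a baseline.
from collections import deque

class AgreementMonitor:
    """Tracks how often clinicians affirm the model's recommendation."""
    def __init__(self, window: int = 500, alert_below: float = 0.85):
        self.window = deque(maxlen=window)
        self.alert_below = alert_below

    def log_decision(self, model_said: int, clinician_said: int) -> None:
        # Every override or affirmation is captured; the same log can feed
        # the next retraining cycle.
        self.window.append(int(model_said == clinician_said))

    def drifting(self) -> bool:
        return bool(self.window) and sum(self.window) / len(self.window) < self.alert_below

monitor = AgreementMonitor()
monitor.log_decision(model_said=1, clinician_said=0)  # an override
print(monitor.drifting())  # True: agreement so far is below the threshold
```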

The broader healthcare ecosystem must be taken into account. Data flows across various stages and systems, and this interconnectedness demands a holistic approach to governance, model monitoring, and feedback loops. Moreover, consumer health devices, such as smartwatches and Bluetooth-enabled blood pressure monitors, are generating a growing volume of valuable health data outside clinical settings. However, this data is often not generated by FDA-approved devices in controlled environments, leading to concerns over its trustworthiness.

To incorporate this patient-generated data into clinical decision-making, clinicians must learn to trust it. AI can play a vital role in helping clinicians navigate these new data sources, validate their accuracy, and integrate them into care processes. AI’s adaptive feedback and real-time learning capabilities will be essential in building trust in this data.

Ultimately, a resilient AI-driven healthcare system must balance precision with pragmatism. AI should augment care in imperfect data environments, helping clinicians make informed decisions even when data is incomplete.

What approach is CitiusTech taking in terms of AI model validation and benchmarking, particularly when datasets are incomplete or unavailable? Are there any tools or frameworks you would recommend for assessing the robustness of AI outputs derived from such imperfect data?

At CitiusTech, we emphasise establishing a robust AI governance model to ensure the responsible adoption and scaling of AI solutions. This governance involves stakeholders from compliance, IT security, data governance teams, and clinicians, especially when clinical data is involved. AI governance should go beyond traditional data warehousing frameworks, focusing on how AI is adopted, adapted, and scaled within healthcare organisations.

We recommend the creation of an AI Centre of Excellence (COE), which should consist of skilled AI professionals capable of interpreting data, evaluating models, and designing solutions aligned with healthcare workflows. The right talent is crucial for executing AI transformations effectively.

In addition to governance and talent, we provide AI accelerators: prebuilt algorithms and data models based on proven use cases. These accelerators help clients move quickly from data collection to actionable insights, making it easier to derive value from existing data. Even with incomplete data, a combination of 30 or more data points can form a reliable, explainable recommendation.

To manage incomplete or imperfect data, we recommend starting with lower-risk use cases, such as administrative or IT tasks. These tasks serve as training grounds for teams to gain experience, refine processes, and build trust in AI systems before applying them to more complex clinical decision-making.

Could you share how the regulatory environment is evolving to support AI applications, especially in the US and other major healthcare markets?

The regulatory landscape is evolving to accommodate the increasing role of AI in healthcare. For example, electronic prior authorisation is set to become mandatory next year. This regulation requires prior authorisations to be transmitted digitally between payers and providers, enabling more efficient decision-making through AI tools. Transparency of algorithms is now a regulatory requirement, ensuring that decision-making logic is clear not only to clinicians but also to patients.

As regulatory bodies like the Centers for Medicare & Medicaid Services (CMS) set standards, private insurers tend to follow suit, creating a more standardised approach to AI applications across the healthcare ecosystem. However, regulations still require human oversight in AI-driven decisions, especially in high-stakes areas like clinical decision-making.

In the near future, we expect to see AI tools increasingly used to standardise processes such as prior authorisations, reducing variability and improving consistency across healthcare providers and insurers.

If healthcare organisations could change just one assumption or approach around AI and data integration, what would you recommend, and why?

My recommendation is to start small. Organisations often get overwhelmed by the potential of AI and attempt to implement large-scale solutions too quickly. However, AI initiatives should begin with manageable, low-risk use cases, allowing organisations to build the necessary culture, processes, and skills before scaling.

It’s essential to involve the business side early on, not just IT or data science teams. Business leaders understand the workflows and pain points and can help prioritise where AI will provide the most value. By starting small and scaling gradually, organisations can build confidence in AI and create a foundation for more extensive, impactful deployments in the future.