False negatives are more dangerous in high-stakes situations.
While the debate gained mainstream prominence with the rise of Artificial Intelligence discourse, it has long been a familiar one in the medical community.
In 2007, Jerome Groopman, who holds the Dina and Raphael Recanati Chair of Medicine at Harvard Medical School, published a book titled How Doctors Think, in which he examined the forces and thought processes behind the decisions doctors make.
When AI enters the conversation, researchers must confront the asymmetry of errors: the weight an error carries depends on the industry in which the system is applied.
“In many industries, systems can be released and refined while in use. In healthcare, that approach does not hold. The tolerance for error is significantly lower — closer to launching a rocket than shipping software,” explains Matthew Hellyar, Strategic Partner at Respocare, and Chief Developer & Founder at Respocare Connect AI.
Respocare is a South African healthcare group with operating interests in patient care services, healthcare intelligence media, and clinical AI systems. Its new venture, Respocare Connect AI, is the company’s clinical AI and agentic systems division, focused on developing tools that help healthcare professionals reduce administrative burden, improve workflow efficiency, and support clinical decision-making.
“Healthcare is fundamentally different because the objective is singular: the patient,” he says. “Every system, workflow, and decision ultimately converges on patient outcomes. That level of responsibility changes how AI must be designed, tested, and deployed — and it changes what human oversight actually means.”
AI & Data Insider spoke to Hellyar about the application of AI in healthcare, and the distinct approach when testing for successful clinical application.
Full interview:
Tell us about your experience building trust with HCPs and getting them to adopt clinical decision support tools for triage. How do you overcome resistance?
Trust in clinical AI comes down to one principle: authority must remain with the clinician. Healthcare professionals are not resistant to AI — they are responsible for outcomes. Any system introduced into their workflow must reinforce, not dilute, that responsibility.
In our experience, when approached correctly, curiosity tends to outweigh resistance far more often than the industry narrative suggests.
Our approach with Respocare Connect AI is built on three pillars: transparency, traceability, and data ownership. The system does not make decisions. It structures and returns intelligence to the clinician.
Every document, note, and output remains fully visible and traceable — nothing is hidden behind a black box. This is not a design preference; it is a clinical safety requirement.
In our Series 4 structured ethical AI evaluation, we processed 28 clinical documents across four visits for a reference patient, tracking outcomes including allergy management, thyroid function, HbA1c, and blood pressure longitudinally. Zero hallucinations were recorded. More importantly, every output was fully provenance-linked — the clinician could see exactly which source document informed each response. That level of traceability is what shifts trust from theoretical to demonstrated.
We have also committed to building in public. By openly demonstrating how the system behaves — where it performs well and where it is still evolving — we remove opacity and replace it with clarity.
Clinicians respect honesty. They do not need a perfect system; they need a predictable one. When authority, transparency, and data ownership are preserved, trust follows.
Where do you see the biggest impact of agentic AI in clinical trials — is it in helping with the admin burden, processing large datasets at speed, or elsewhere?
To understand the impact, it is important to first define what we mean by agentic AI — because it is not simply a model generating outputs.
Agentic AI is the integration of intelligence with systems and tools, operating within defined boundaries. This includes guardrails, entitlements, structured reasoning, and memory — enabling the system to act in a governed, contextual way rather than as a standalone interface that processes isolated queries.
In a clinical trial context, this becomes particularly powerful.
Consider a clinician managing multiple patients who may be eligible for a respiratory trial. Instead of manually reviewing notes, imaging, and historical data across fragmented systems, an agentic system can:
– Access only the patients assigned to that clinician — governed by entitlements
– Retrieve and synthesise structured and unstructured records across time — using persistent memory
– Evaluate eligibility criteria against real longitudinal patient data — through structured reasoning
– Surface candidates with clear justification and source references — enforced by guardrails
Simultaneously, the system structures documentation continuously — visit notes, summaries, and compliance records — reducing the administrative burden that typically surrounds trial management.
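The workflow described above can be sketched in code. The following is a minimal illustration, assuming a hypothetical record schema with a single FEV1 eligibility criterion and a simple clinician-to-patient assignment map; Respocare's actual entitlement, memory, and reasoning layers are not public, so every field and threshold here is invented for clarity:

```python
from dataclasses import dataclass


@dataclass
class Record:
    """One clinical document for a patient (hypothetical fields)."""
    patient_id: str
    date: str          # ISO date, so string sort equals chronological sort
    source_doc: str    # provenance: which document this value came from
    fev1_percent: float


@dataclass
class Candidate:
    patient_id: str
    justification: str
    sources: list


def eligible_candidates(clinician_id, assignments, records, min_fev1=50.0):
    """Surface trial candidates for one clinician, with provenance."""
    # Entitlements: only patients assigned to this clinician are visible.
    visible = assignments.get(clinician_id, set())
    candidates = []
    for pid in sorted(visible):
        # Persistent memory: gather this patient's records across time.
        history = sorted(
            (r for r in records if r.patient_id == pid), key=lambda r: r.date
        )
        if not history:
            continue  # Guardrail: never infer eligibility without data.
        latest = history[-1]
        # Structured reasoning: evaluate the criterion against the most
        # recent value in the longitudinal record, not an isolated snapshot.
        if latest.fev1_percent >= min_fev1:
            candidates.append(Candidate(
                patient_id=pid,
                justification=(
                    f"FEV1 {latest.fev1_percent}% on {latest.date} "
                    f"meets threshold {min_fev1}%"
                ),
                sources=[r.source_doc for r in history],
            ))
    return candidates
```

The key design point the interview emphasises is visible in the last step: every surfaced candidate carries the full list of source documents that informed it, so the clinician can trace the justification back to the record.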
But the most significant shift is not speed, and it is not admin reduction alone. The biggest impact is continuity.
We are moving from systems that analyse snapshots of data to systems that manage clinical information longitudinally — and that changes everything about how trials can be run.
A clinician no longer needs to reconstruct context at every interaction. The system maintains an evolving understanding of both the patient and the trial requirements over time. That is where the real leverage is.
What is the main difference in the role of the human-in-the-loop for healthcare, as compared to other use cases?
Healthcare is fundamentally different because the objective is singular: the patient.
Every system, workflow, and decision ultimately converges on patient outcomes. That level of responsibility changes how AI must be designed, tested, and deployed — and it changes what human oversight actually means.
In many industries, systems can be released and refined while in use.
In healthcare, that approach does not hold. The tolerance for error is significantly lower — closer to launching a rocket than shipping software.
Systems must be tested, validated, and behaviourally understood before they are relied upon in real clinical environments.
This brings me to a point that I think is underappreciated in the broader conversation about clinical AI: the field is focused almost entirely on hallucinations. While hallucinations are important, they are only part of the challenge.
In healthcare, behaviour is equally important — and arguably harder to govern.
How does a system respond under uncertainty?
How does it handle missing data?
When does it defer rather than generate?
A system that appears accurate most of the time but behaves unpredictably in edge cases introduces serious clinical risk. We treat behavioural consistency as a first-class design requirement, on par with factual accuracy.
We have seen this validated in our own clinical testing. Hallucinations were far less prevalent than we anticipated. The real-world challenges came from integration nuance, edge case behaviour, and handling of incomplete data — exactly the dimensions not captured by standard benchmark evaluations.
In healthcare, the clinician is not a checkpoint — they are the decision-maker. AI supports and extends clinical reasoning. Authority never transfers.
What have been some of the key learnings through this journey over the last few years?
One of the most surprising aspects of this journey has been the level of genuine enthusiasm from healthcare professionals. There is a persistent industry narrative that clinicians are broadly resistant to AI.
In our experience, that has not been the case — and I think that narrative does clinicians a disservice. They are not resistant; they are rigorous. Those are very different things.
What has been more humbling than expected is the infrastructure problem. The difficulty is not the AI. It is the system around it.
The intelligence models have been highly capable. What has required the most sustained attention is structure: how data is ingested and organised, how systems communicate between frontend and backend, how reasoning is constrained within safe and clinically usable boundaries. Getting the AI to reason well is, in some ways, the easier part. Getting it to behave consistently within a governed clinical system — that is the harder engineering problem.
There was also a moment early in our clinical testing that genuinely changed how I think about this work. We encountered a case where the system surfaced a resolved clinical item as an active concern — not a hallucination, not a factual error, but a logic gap in how the system reasoned about time and resolution. The output looked correct on the surface. It was the clinical implication that was wrong.
That experience reinforced something we now treat as a design principle: clinical AI must reason about the state of the patient, not just the content of the record. Those are meaningfully different things, and the gap between them is where patient safety risk lives.
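The distinction between reasoning about the record and reasoning about the patient's state can be made concrete with a small sketch. The event schema below is hypothetical, not the production system: the point is that the current state is derived from the most recent event per problem, so a resolved item is never surfaced as active just because it appears in the record:

```python
from datetime import date


def active_problems(events):
    """Derive the patient's current state from a chronological event log.

    `events` is a list of (when, problem, status) tuples, where status is
    "noted" or "resolved". Surfacing every mention in the record would
    reason about *content*; keeping only each problem's latest status
    reasons about *state*.
    """
    state = {}
    for when, problem, status in sorted(events):
        state[problem] = (status, when)  # later events overwrite earlier ones
    # A problem is active only if its most recent event is not a resolution.
    return {p: when for p, (status, when) in state.items() if status != "resolved"}
```

With this framing, the failure mode Hellyar describes (a resolved item resurfacing as an active concern) corresponds to skipping the overwrite step and listing every "noted" event regardless of what came after it.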
The broader learning is this: AI requires context, structure, and disciplined implementation — much like a skilled human colleague would. The systems that will earn clinical trust are the ones built with that humility.
Can you take us through the onboarding process for a clinician opting to use Respocare Connect AI — what does their team need to look like, and how long does integration take?
We have designed onboarding to feel less like installing software and more like introducing a new clinical capability — because that is precisely what it is.
From the outset, the experience is personal. Clinicians entering early access are welcomed directly, with real human support — often starting with a direct call.
Alongside this, we provide structured onboarding guides and ongoing education through The Agentic Report and Respocare Insights.
The technical onboarding itself is straightforward:
– Profile setup — defining role, specialty, and clinical context
– Patient creation — establishing the first patient record within the system
– Data input — uploading existing documents or using voice-based scribing to begin building context
That last step is critical.
Respocare Connect AI is not a generic tool — it is an agentic ecosystem that depends on context. The system becomes genuinely valuable the moment it understands the patient. From there, clinicians interact with the agentic clinical assistant to query patient data, generate summaries, and receive structured insights grounded in real longitudinal records.
On the team question: this is one of the most important things to get right. Respocare Connect AI is designed to be adopted without requiring a dedicated IT team or data engineering resource at the clinic level.
A single clinician can onboard independently. For larger practice groups or clinic networks, a practice manager or administrator is sufficient to coordinate data input and manage access.
We have specifically avoided building a system that requires specialist implementation partners to deploy — that model creates barriers that most private practices cannot sustain.
On timeline: a clinician can be functionally active within the system in a single session.
Meaningful clinical value — where the system has sufficient longitudinal context to surface insights — typically develops over the first two to four weeks of active use, as patient records are built out. For clinicians migrating existing records, that timeline compresses significantly.
The shift we are introducing is not just efficiency — it is clarity — moving from fragmented information to a system that organises clinical intelligence in real-time, built around the way clinicians already think and work.
Respocare Connect AI is currently in Phase 2 clinical trials and early access.
