
Author: Michael Tjalve, PhD, Board Chair, Spreeha Foundation
Bolstered by more capable foundation models and intuitive, conversational interfaces, modern AI is seeing accelerated adoption across all sectors, providing tangible value for a broader set of users. Healthcare is no exception.
However, as we let AI play a more prominent role in our lives, we also open the door to potential new risks. This is especially true when we allow AI to take a front seat in decision-making by accepting its recommendations without sufficient critical evaluation. We argue that this risk stems not from AI itself but from a deep-rooted human preference for cognitive shortcuts, combined with the illusion of certainty and, often, time pressure to deliver on tasks.
This over-reliance on AI can lead to errors and an erosion of trust. At Spreeha Foundation, where we operate tech-enabled primary care in resource-constrained settings, this risk directly shapes our clinical workflows.
AI has already proven its value in healthcare through integration into processes like assisted diagnostics and medical transcription. At Spreeha Foundation, we have integrated AI in meaningful ways on multiple fronts. Given the current doctor-to-patient ratio, a doctor in Bangladesh is on average able to spend only 48 seconds with each patient, so one key motivation for integrating AI capabilities into our processes has been to find ways to give time back to the doctor-patient relationship.
We have an app that leverages AI to assist doctors with critical and time-consuming tasks before and after patient consultation.
We will also develop AI-enabled tools for clinic management and for patients.
AI has evolved to offer remarkable capabilities, but despite its name, artificial intelligence is not intelligent. It is just really good at guessing, at identifying the most relevant answer. An AI model learns from the data presented to it during training, then matches patterns in user input against what it has seen before to calculate the most likely output: the next best action, the most likely next word. Keeping in mind that it will always be imperfect, and understanding how an error in AI output can materialize into real-world consequences, gives you a better chance of mitigating the risks.
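To make that concrete, here is a minimal sketch, not of any production system, of what "picking the most likely next word" amounts to: the model ranks candidate continuations by learned probability, and even its top choice is still a guess. The candidate words and probabilities below are invented for illustration.

```python
# Minimal sketch of next-token selection; the candidates and probabilities
# are invented for illustration, not taken from a real model.
next_word_probs = {
    "pneumonia": 0.46,   # top-ranked guess, but far from certain
    "bronchitis": 0.31,
    "influenza": 0.18,
    "other": 0.05,
}

best_guess = max(next_word_probs, key=next_word_probs.get)
print(f"Model suggests: {best_guess} (p = {next_word_probs[best_guess]:.2f})")
# Even the most likely output carries substantial residual uncertainty,
# which is why it needs human review before it informs care.
```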
It is in our nature to seek cognitive shortcuts, a mostly unconscious process. This serves us well, since it allows us to conserve mental focus for where it matters most. However, in an increasingly digital work environment, what matters most is often opaque or driven by short-term priorities. That, combined with a tendency to position AI as an all-knowing black box, can lead us to trust AI systems more than we should, through authority bias and automation bias.
This cognitive-miser tendency, or human laziness, is to be expected, and it takes conscious effort to counter it. AI output often comes with references to source material, but in practice these are not always verified, especially under time pressure. For healthcare practitioners, the potential cost of accepting AI output uncritically is higher than in most other sectors.
At Spreeha Foundation, we approach AI adoption with the assumption that risks are predictable and therefore manageable. When we can clearly define risk areas, we are better positioned to mitigate them. This begins with building awareness among our teams about how AI makes decisions and mistakes, along with guidance on how to question assumptions and verify outputs.
We have embedded these principles into organizational safeguards, starting with an internal AI policy that specifies which tools are approved for which use cases, which data they may use, and when human oversight is required. Inherent human behavior is harder to change, but the AI system should account for it. This, then, becomes a design challenge: one that must anticipate cognitive shortcuts and build systemic resilience around them.
In practice, this means introducing friction where the cost of error is high. For example, rather than offering a user interface that allows single-click acceptance of AI-generated content, we introduce a manual step that forces providers to actively review the content before it is incorporated into patient care.
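As an illustration only, not Spreeha's actual interface code, the sketch below shows one way such friction can be enforced in software: the AI draft cannot be saved into the record unless an explicit review step, attributed to a named reviewer, has taken place. All names and fields are hypothetical.

```python
# Hypothetical sketch of a "no single-click accept" gate; field names and
# the ReviewError type are illustrative, not from any real system.
from dataclasses import dataclass
from typing import Optional

class ReviewError(Exception):
    """Raised when AI-generated content has not been explicitly reviewed."""

@dataclass
class AiDraft:
    text: str
    reviewed_by: Optional[str] = None   # set only after an active review step

def accept_into_record(draft: AiDraft) -> str:
    # Block acceptance unless a provider has actively reviewed the draft.
    if draft.reviewed_by is None:
        raise ReviewError("AI draft must be reviewed by a provider before acceptance.")
    return f"Accepted (reviewed by {draft.reviewed_by}): {draft.text}"

# Usage: the happy path requires the extra, deliberate review step.
draft = AiDraft(text="Suggested note: patient reports persistent cough ...")
draft.reviewed_by = "Dr. Rahman"   # recorded only when the provider signs off
print(accept_into_record(draft))
```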
For diagnostic vigilance, we have implemented a human-in-the-loop model in which all AI-driven recommendations are reviewed by qualified clinical staff. We train healthcare providers not only to use digital tools, but to interpret them through the lens of patient dignity and empathy. We explicitly position AI as a collaboration tool, one that augments clinical judgment rather than replacing it.
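The review workflow can be thought of as a small state machine: a recommendation starts out pending and can only reach a patient-facing state through a clinician's sign-off. The sketch below is a hypothetical illustration of that gate, not our production code; the statuses and roles are assumptions.

```python
# Hypothetical sketch of a human-in-the-loop gate for AI recommendations.
# Statuses, roles, and function names are illustrative assumptions.
from enum import Enum

class Status(Enum):
    PENDING_REVIEW = "pending_review"
    APPROVED = "approved"
    REJECTED = "rejected"

def review(recommendation: dict, reviewer_role: str, approve: bool) -> dict:
    # Only qualified clinical staff may move a recommendation out of review.
    if reviewer_role not in {"physician", "clinical_officer"}:
        raise PermissionError("Only qualified clinical staff can review AI recommendations.")
    recommendation["status"] = Status.APPROVED if approve else Status.REJECTED
    return recommendation

rec = {"text": "Consider chest X-ray", "status": Status.PENDING_REVIEW}
rec = review(rec, reviewer_role="physician", approve=True)
print(rec["status"])   # Status.APPROVED: only now can it inform patient care
```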
This is still not a perfect approach, and we keep learning. Meanwhile, we maintain that the most powerful part of the equation is the human connection and the trust between healthcare provider and patient.
Author bio
Michael Tjalve brings over two decades of experience with AI, spanning applied science, research, and tech sector AI development, most recently as Chief AI Architect at Microsoft Philanthropies, where he helped humanitarian organizations leverage AI to amplify their impact. In 2024, he left the tech sector to launch Humanitarian AI Advisory, supporting humanitarian organizations in harnessing the potential of AI while navigating its pitfalls.
Tjalve holds a PhD in Artificial Intelligence from University College London and is an Assistant Professor at the University of Washington, where he teaches AI in the humanitarian sector. He serves as Board Chair and technology advisor for Spreeha Foundation, advancing healthcare access in underserved communities in Bangladesh, and co-leads the SAFE AI initiative, which promotes responsible use of AI in humanitarian action. Tjalve is a co-founder of the RootsAI Foundation, a nonprofit focused on expanding access to AI for underrepresented languages and communities.
Referenced links, full URLs
48 seconds with each patient [ref: https://www.spreeha.org/blog/bangladesh-healthcare-challenges]
where it matters most [ref: https://www.tandfonline.com/doi/full/10.1080/13546783.2018.1459314]
bias [ref: https://link.springer.com/article/10.1007/s00146-025-02422-7]