The greatest opportunity of using Artificial Intelligence in the health sector lies at the community level, where it can strengthen health literacy and democratise access to medical information, Prof. Annie Hartley, a medical doctor and director of the Laboratory for Intelligent Global Health and Humanitarian Response Technologies (LiGHT) at the Lausanne-based Swiss public research university EPFL, told Soumyarendra Barik and Anil Sasi. Hartley is attending the AI Impact Summit.
What was the starting point of your project on translating AI to clinical practice?
“Thou shalt not eat insulin on a Tuesday.” That was the answer a chatbot gave when my medical colleague in Ethiopia asked, in her own language, how to manage a diabetic crisis in a child. It likely reflects the reality that, in that language, among the little digitised text available for AI to train on is the Bible.
Frontier AI systems are often impressive. But their accuracy is unevenly distributed. And in low-resource settings, errors carry heavier consequences. There are fewer specialists, fewer safety nets, and less margin for harm.
LiGHT’s initiative MOOVE, which stands for Massive Open Online Evaluation and Validation (a community-driven expert evaluation of specialist AI that nudges large language models toward alignment, ensuring transparency, contextualisation, and ownership), emerged from that need. It is a structured framework to test AI in real clinical settings, measure where it works and where it fails, and feed that evidence back into models to improve contextual relevance.
In your experience so far trying to use AI in under-resourced and under-represented communities, what has been the response to such tech?
The response in under-resourced settings is typically much more pragmatic than ideological. Health workers adopt tools that function reliably in volatile environments: low connectivity, high workload, and limited infrastructure. They value systems that are freely accessible, low-friction, robust offline, and responsive in their own language, especially through voice-enabled interaction. They strongly resist duplicate workflows and any added administrative burden that does not clearly improve patient care.
Increasingly, clinicians are also asking deeper technical and policy questions: where is our data going, who owns it, who benefits from these multimillion-dollar systems, and what are their incentives? Ownership, governance, and public benefit are becoming primary concerns. That shift is fuelling demand for co-design, transparent validation, and benchmarks to support evidence-based choice in an increasingly noisy market.
What parts of the healthcare system in developing economies are ripe for disruption due to AI? Is it tricky in places like India, where many people may not have a high degree of technical proficiency?
I think the greatest opportunity lies at the community level. AI has the potential to strengthen health literacy, improve patient navigation, and democratise access to reliable medical information.
Technical proficiency is no longer the main barrier. Few people were trained to use messaging apps like WhatsApp; they learned intuitively.
The more pressing challenge is not user proficiency but model proficiency: accuracy in low-resource languages, sensitivity to accent, and cultural nuance.
With general-purpose generative AI platforms readily available to a wide audience, including children, what are some risks you foresee?
I worry that we do not yet know how much to worry… We lack systematic evidence about how these tools influence health decisions, delay care, shape mental health, or alter help-seeking behaviour. Without evidence, policy risks swinging between overly restrictive and dangerously permissive.
Global health research bodies and NGOs have often faced criticism for being one-sided or even extractive. Where do you see this debate headed?
The debate has shifted from whether to use AI to who controls it, who benefits, and who is accountable. That is progress.
Criticism of extractive AI models is often justified. Too often, data leaves a country, models are trained elsewhere, and local health systems receive a finished product without ownership, capacity building, or governance authority. This pattern concentrates power while externalising risk. Public data should serve the public good. That requires infrastructure designed for transparency and shared control. This is where fully open models matter.
Initiatives such as Apertus, developed through a collaboration between the Swiss Institutes of Technology (ETH Zürich and EPFL) and the SwissAI initiative, demonstrate that high-quality AI can be built and released as public infrastructure. Fully open means more than publishing results. It includes clear documentation of training data sources and limitations, open model weights, reproducible training pipelines, licensing that permits adaptation, and the ability for independent groups to audit, fine-tune, and redeploy systems locally.