When the United Nations General Assembly (UNGA) meets in New York this month, the spotlight will fall on noncommunicable diseases (NCDs)—the leading killers worldwide, from cancer to diabetes to heart disease. Heads of state will debate how to mobilize financing, strengthen health systems, and expand prevention at scale. Alongside these high-level meetings, another conversation will quietly shape governance and donor discussions: how artificial intelligence (AI) can be responsibly deployed to improve health.
From predicting malaria outbreaks to supporting HIV prevention to speeding tuberculosis (TB) diagnosis, AI no longer belongs to the future—it is already being tested, with mixed results, in some of the toughest environments for health delivery.
Over the past few years, AI has begun reshaping the building blocks of disease response. In Uganda, Makerere University's AI Health Lab has built algorithms that read blood smears and diagnose malaria with high accuracy and speed. Deep-learning systems such as YOLOv5 paired with transformers have reached expert-level performance in parasite detection in West Africa. For TB, a recent federated learning project across eight African countries is enabling hospitals to interpret chest X-rays collaboratively—without sharing raw patient data. At Audere, we are reimagining HIV prevention in Zimbabwe and South Africa—testing empathetic AI-powered chatbots to connect young people to timely, confidential information.
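The federated approach mentioned above can be illustrated with a minimal sketch of federated averaging (FedAvg), the core idea behind such collaborations: each hospital trains on its own data and shares only model weights, never patient records. The site names, gradients, and learning rate below are purely illustrative, not details of the actual eight-country project.

```python
def local_update(weights, local_gradient, lr=0.1):
    """One gradient step computed at a hospital on its private data."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(updates):
    """The coordinator averages the weight vectors from all sites."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_weights = [0.0, 0.0]
# Each site computes gradients on its own chest X-rays (never shared).
site_gradients = {"site_a": [0.2, -0.1], "site_b": [0.4, 0.1]}
updates = [local_update(global_weights, g) for g in site_gradients.values()]
global_weights = federated_average(updates)
# Only the averaged weights circulate between rounds; raw images stay on-site.
```

In a real deployment the updates would be full neural-network parameters, often with secure aggregation layered on top, but the privacy property is the same: the coordinator sees weights, not X-rays.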
AI Buzz Is Coming to Terms with Health Reality
These efforts illustrate AI's potential to make health programs more efficient, reduce costs, and extend the reach of scarce human resources. Automated image recognition can save overburdened lab technicians hours of work. Chatbots can provide first-line answers to health questions, freeing clinicians to focus on complex cases. Predictive models can help ministries of health allocate limited test kits or medicines more strategically. All of these activities promise improved efficiencies and value for money in systems where every dollar and every health worker hour matters.
The proliferation of AI projects across Africa reflects not opportunism but necessity. Where clinician shortages are chronic, lab services are scarce, and outbreaks escalate quickly, AI offers efficiency and the chance to leapfrog traditional bottlenecks. High rates of mobile connectivity create a foundation for digital solutions that might not be possible elsewhere. As speculation about what AI could do gives way to real deployment, the ethical challenge becomes deciding what it should do for health programs.
That decision is becoming more urgent. This year, donor funding has shifted dramatically—from the uncertain reauthorization of the President's Emergency Plan for AIDS Relief (PEPFAR) to proposed cuts in global health budgets. These shortfalls have heightened the stakes for ethical AI deployment in Africa and other regions.
Budget pressure should not drive adoption of flashy tools just to prove innovation. The first questions must always revolve around what health system problems are being solved. Faster TB diagnosis? Stronger HIV prevention outreach? Closing the growing human resource gap? More equitable malaria detection? Without this discipline, AI risks becoming another distraction at a moment when global health systems can least afford it.
Anchoring AI Ethics Through Design
An essential distinction in AI for health care is between tools and products. Tools are the raw engines: large language models such as ChatGPT, Claude, and Gemini, but also computer vision systems that read X-rays, natural-language processing models that scan medical records, and predictive analytics that flag patients at risk. These generic capabilities are powerful but not tailored. Products, by contrast, are built with those tools: carefully designed interventions for specific programs, populations, and health system gaps. The distinction matters. A tool can crunch data. A product should safeguard privacy, align with clinical guidelines, and meet people where they are.
AI is not a silver bullet. It cannot and will not erase funding gaps, substitute for trained health workers, or build the infrastructure that health systems still desperately need. Algorithms are only as good as the data behind them, and without careful oversight they can drift, misclassify, or amplify inequities. For AI to strengthen rather than undermine health systems, ethics need to be built into design from the start.

That begins with privacy as a default. In South Africa, for example, only de-identified data is sent to external models, while identifiable records remain encrypted on local servers in compliance with the country's data privacy law, the Protection of Personal Information (POPI) Act. Users are taught to lock chats or enable disappearing messages, and only language models whose providers explicitly commit not to reuse conversations as training data are deployed.
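The split between what leaves the local server and what stays behind can be sketched as follows. This is a minimal illustration, not the actual system: the field names are hypothetical, and a real POPI-compliant deployment would use a vetted de-identification standard and proper key management rather than a simple salted hash.

```python
import hashlib

# Fields treated as identifiers in this sketch; a real deployment would
# follow the POPI Act's definition of personal information.
IDENTIFIER_FIELDS = {"name", "phone", "national_id"}

def split_record(record: dict, salt: str) -> tuple[dict, dict]:
    """Split a patient record into a de-identified payload (safe to send
    to an external model) and an identifiable part that stays on local,
    encrypted storage. A salted hash links the two halves."""
    link_id = hashlib.sha256((salt + record["national_id"]).encode()).hexdigest()
    identifiable = {k: v for k, v in record.items() if k in IDENTIFIER_FIELDS}
    deidentified = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
    deidentified["link_id"] = link_id
    identifiable["link_id"] = link_id
    return deidentified, identifiable

record = {"name": "T. Moyo", "phone": "+27 00 000 0000",
          "national_id": "0000000000000",
          "symptoms": "persistent cough", "age_band": "25-34"}
external_payload, local_only = split_record(record, salt="per-deployment-secret")
# Nothing identifying leaves the local server.
assert IDENTIFIER_FIELDS.isdisjoint(external_payload)
```

The design choice worth noting is the opaque `link_id`: it lets a clinician reconnect a flagged conversation to a patient locally, without the external model ever holding a name or phone number.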
Ethical AI also requires active monitoring for bias and harm. Human oversight is built directly into the system. This "human in the loop" approach means that AI doesn't operate alone—trained people are always available to step in when the technology flags a concern. AI responses are monitored in real time for tone, empathy, and subgroup outcomes to catch subtle patterns of bias or harm. When a response risks being stigmatizing or harmful, it is automatically flagged. In higher-risk cases—such as indications of self-harm or violence—the system both routes an alert to a clinician who can intervene and protects user privacy by keeping phone numbers and personal identities hidden.
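The triage logic behind such a human-in-the-loop design can be sketched in a few lines. This is an assumption-laden simplification: the trigger terms, function names, and escalation payload below are invented for illustration, and a production system would use trained classifiers rather than keyword lists.

```python
from dataclasses import dataclass

HIGH_RISK_TERMS = {"self-harm", "suicide", "violence"}  # illustrative triggers
STIGMA_TERMS = {"promiscuous", "dirty"}                 # illustrative triggers

@dataclass
class Review:
    flagged: bool      # hold or annotate the response
    escalate: bool     # route an alert to a clinician
    reason: str

def review_exchange(user_msg: str, ai_reply: str) -> Review:
    """Screen each exchange: escalate high-risk content to a human,
    flag possibly stigmatizing language for review, otherwise pass."""
    text = (user_msg + " " + ai_reply).lower()
    if any(t in text for t in HIGH_RISK_TERMS):
        return Review(True, True, "high-risk content: route to clinician")
    if any(t in text for t in STIGMA_TERMS):
        return Review(True, False, "possible stigma: hold for human review")
    return Review(False, False, "ok")

def clinician_alert(session_id: str) -> dict:
    # The clinician receives only an opaque session handle, never the
    # user's phone number or identity.
    return {"session": session_id, "action": "clinician_alert"}
```

The key property is that the AI never acts alone on the hard cases: anything matching the high-risk path produces an alert a trained person must handle, while the privacy of the user is preserved by the opaque session handle.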
Transparency and accountability must be ongoing, not one-off. Outputs should be validated against clinical guidelines, with continuous monitoring for drift. Codesign with ministries, implementers, and communities ensures that the technology reflects local realities rather than external assumptions.
Philanthropy and Donor Momentum
Philanthropy remains catalytic—often funding experiments before public institutions can move. The Gates Foundation has supported AI across malaria, HIV, and health system strengthening, including work exploring how gender and autonomy shape women's health. The Patrick J. McGovern Foundation has invested $73.5 million in no-code platforms that democratize AI model creation, lowering barriers so local organizations without deep technical expertise can develop tailored solutions.
Meanwhile, traditional funders are beginning to catch up. The Global Fund now invests about $150 million annually in digital health tools across more than 90 countries. This effort is focused on improving surveillance, diagnostics, clinical decision support, and health worker efficiency—especially in settings with limited budgets and capacity. The strategies include AI-powered analysis of chest X-rays taken in mobile vans, bringing TB screening directly to underserved communities.