The systems designed to shield the world from biological catastrophe are quietly falling behind. A silent shift is underway as artificial intelligence (AI) transforms pathogen design and synthetic biology expands the toolkit for dual-use technologies with both civilian and military applications. The rules of preparedness are being rewritten by algorithms and accelerated by geopolitical urgency.
AI now sets the tempo of biological risk. Frontier models can generate pathogen blueprints and optimize lab protocols faster than trained virologists. OpenAI and Anthropic have integrated biosafety guardrails into their models, acknowledging what many now consider one of AI's most dangerous applications. The U.S. National Security Commission on Emerging Biotechnology has warned that these tools could be used to design pathogens with enhanced virulence that are technically sophisticated and increasingly accessible.
Yet, as biothreats evolve, the global health community remains largely sidelined, underfunded, outpaced, and absent from the forums shaping next-generation biosecurity.
Budget Realities
Nowhere is the mismatch between risk and readiness clearer than in the global health ledger. The World Health Organization (WHO) faces a $2.5 billion shortfall that is delaying surveillance work and forcing staff cuts. The erosion follows the full withdrawal of U.S. funding, once a fifth of the WHO's total resources. The organization risks being reduced to a symbolic convener or technical consultant.
This decline comes as 124 countries approved the WHO Pandemic Agreement, touted as a breakthrough for global cooperation. Its commitments to data sharing, surge funding, and equitable access are meaningful. Yet the institution tasked with its implementation is being hollowed out in real time.
The U.S. presidential budget for fiscal year 2026 (FY 2026) and the recent $9.4 billion rescissions proposal [PDF] reveal a sweeping pivot in global health and biosecurity priorities. The proposed budget cuts include $18 billion from the National Institutes of Health (NIH), $3.6 billion from the Centers for Disease Control and Prevention (CDC)—including dismantling the Global Health Center—and $6.2 billion from the U.S. Agency for International Development's (USAID's) global health programs. This equates to a 62% reduction in bilateral global health investments and a 44% overall reduction across NIH, CDC, and USAID.
Containment over Cooperation
What fills the vacuum is revealing. Although the FY 2026 budget proposes a $654 million [PDF] allocation for the Biomedical Advanced Research and Development Authority—a $360.6 million decrease from FY 2025—and $725 million [PDF] for Project BioShield—a $95 million decrease—it also selectively maintains or expands targeted investments in detection and surveillance.
A proposed [PDF] "Biothreat Radar" aims to detect novel pathogens within 24 hours. The CDC's Forecasting and Outbreak Analytics Center is maintained at $50 million [PDF]; $328 million is proposed to modernize disease surveillance infrastructure—a $30 million increase—and $750 million is designated to sustain the Strategic National Stockpile—a $215 million decrease [PDF]. The Advanced Research Projects Agency for Health, which supports continued investment in biomedical innovation, is allocated $945 million [PDF]—a $555 million decrease [PDF].
This patchwork of cuts and investments reflects a strategic pivot away from international cooperation and toward a more centralized biodefense posture. That shift was reinforced by the May 2025 executive order on "Improving the Safety and Security of Biological Research," which paused high-risk gain-of-function research in the United States, suspended federal funding for such work abroad, tightened oversight of high-containment labs, and called for new governance of privately funded experiments.
These measures mirror the containment-focused orientation of the FY 2026 budget. The result is a redefinition of preparedness that is increasingly nationalized and defensive, yet still fragmented, reactive, and ultimately ill suited to the transnational scale of modern biological threats.
Governance Misaligned
Governance frameworks for AI and biotechnology, meanwhile, are rapidly being developed by national security agencies and tech regulators. The United Kingdom and the United States have launched AI safety institutes. The French government and philanthropic partners have dedicated $400 million to AI oversight. The United Nations, the Organization for Economic Cooperation and Development (OECD), and the Global Partnership on AI are all drafting new global norms.
In the FY 2026 budget request, AI is named a strategic priority across multiple agencies. The Defense Advanced Research Projects Agency, under the Department of Defense's $148 billion [PDF] research, development, test, and evaluation budget, is one of the few explicitly tasked with advancing AI and biotech, signaling a long-horizon, defense-aligned innovation track over near-term public health integration.
The WHO contributes technical input but holds no decision-making power in these processes. Public health is largely excluded from shaping how dual-use risks are governed or financed.
This omission is a strategic miscalculation. AI and synthetic biology are not just technical risks; they are geopolitical accelerants. As these technologies reshape global power and systems, public health needs to be part of the governance equation.
Fragility Is the Frontline
Fragile states may not originate engineered threats, but they are where such threats can do the most damage. Weak health systems, political instability, and limited surveillance can turn outbreaks into humanitarian catastrophes.
Sudan offers a warning. In 2023, as civil conflict escalated, armed forces seized Khartoum's National Public Health Laboratory, which contained live samples of cholera, polio, and measles. Power outages threatened containment. Social media was flooded with misinformation. Clinics were looted, health workers displaced, and vaccination campaigns suspended.
Although health system collapse in conflict settings is tragically familiar, new threat vectors are gaining speed. In 2017, researchers synthesized horsepox using publicly available data and mail-order DNA. In 2022, a generative AI model produced 40,000 toxic molecules, akin to chemical weapons, in under six hours.
Sudan was not an AI- or synthetic biology-driven crisis, but it shows how biothreats, disinformation, and governance failure can collide. The next outbreak is likely to be digitally augmented. That prospect should raise urgent questions about how warfare and state fragility could compound the threats posed by AI and health system deficits. Integrated response measures need to be in place, and capable of acting, before technology outpaces control.
Accountability in the Age of AI
As fragility rises, funding recedes, and blind spots widen, the consequences will fall hardest on the systems least able to bear them.
A true biothreat will not respect borders. Fortress strategies—investing in domestic containment while defunding global preparedness—abandon the majority. In a hyperconnected world, abandonment is not a strategy; it's a vector. Public safety will be only as strong as the weakest link in the global chain.
The question is not whether global health should have a seat at the table. It is whether it will reclaim its role in defining preparedness amid a rapidly shifting risk landscape, where new norms are moving faster than the ability to govern.
If current trajectories hold, the rules of tomorrow will be written by and for those who hold power, not those with experience responding to crises, nor those most accountable to the people who bear the brunt of these risks.
What's Next
All hands need to be on deck to shape tomorrow's flow of information, political will, and financing—not only to improve safety, but also to enhance cost-efficiency through integrated surveillance, faster response, and reduced reactive spending.
Reclaiming leadership means showing up where risk governance is being defined. Global health actors should engage in AI safety forums, biosafety negotiations, and dual-use policy design. It also means advancing algorithmic governance that is anticipatory, transparent, and globally coordinated. It requires embedding public interest safeguards into AI systems for early warning and bio-surveillance while aligning financing with real-time detection. Fragile settings should be treated not as outliers, but as the frontline of preparedness.
Societies don't just need more vaccines or better code. They need integrated systems that detect early, respond fast, and scale equitably. The next catastrophe won't be stopped by what has been stockpiled, but by whether systems capable of acting before chaos takes hold have been built.
If the global health community does not step into the void, others will define what preparedness means and whom it is designed to protect.