As governments race to harness artificial intelligence for public services, the idea of “AI sovereignty” has moved from abstract principle to urgent policy question. For resource‑constrained states, the challenge is not to own every layer of the AI stack. It is to make deliberate, defensible choices about where dependence is acceptable—and where it threatens public legitimacy, continuity, or national interest.
In 2026, sovereignty is less a destination than a negotiation.
DPI, DPGs, and the AI Turn
Many countries now look to India's Digital Public Infrastructure (DPI) and Digital Public Goods (DPGs) as a template for bottom-of-the-pyramid innovation. The appeal is clear: DPI puts public value first and vendor interests second. Its emphasis on open standards, modularity, and interoperability aligns naturally with AI deployment in identity-linked services, eligibility and targeting, and language accessibility.
But AI introduces new choke points that DPI never had to confront. Compute concentration, cloud dependency, and the geopolitics of chips shape what is realistically deployable. Even open-source models carry external dependencies through upstream updates, tooling ecosystems, and embedded assumptions. DPI and DPGs remain powerful benchmarks—but they are not blueprints. They offer principles, not plug‑and‑play architectures.
Where Sovereignty Actually Lives
The sovereignty debate often fixates on models, but the real story plays out across the entire stack.
Compute is the deepest structural dependency. Cloud reliance exposes governments to jurisdictional risk, pricing volatility, and service discontinuity. Hardware access is shaped by export controls and supply-chain politics far beyond the control of small states.
Models offer an illusion of control. Open weights do not guarantee sovereign capability when fine‑tuning pipelines, evaluation frameworks, and safety updates remain externally shaped. The real risks sit in the hidden design choices that travel with the model.
Deployment is where governments retain meaningful leverage. Use‑case design, workflow integration, human oversight, and redress mechanisms are sovereign domains. Sovereignty usually fails after deployment, not at model selection.
Open Source: Value and Limits
Open-source AI brings real advantages: transparency, local language adaptation, reduced vendor lock‑in, and opportunities for public‑sector capacity building. But it does not eliminate dependency. Compute costs dominate total cost of ownership. Tooling ecosystems become sticky. Liability shifts to the state. Standards and safety norms are set elsewhere.
Open source lowers entry barriers; it does not remove sovereignty risks.
Accountability and Trust
When public AI fails, citizens do not care where the model came from. They see the state. Accountability cannot be outsourced to vendors or upstream communities. Trust is political, not technical. Governments must invest in clear decision boundaries, audit trails, documentation, and meaningful human review. Redress is not an afterthought—it is the backbone of legitimacy.
A Pragmatic Path Forward
For policymakers, the task is to be selective about sovereignty: control the layers that shape public legitimacy, anchor AI in existing DPI, avoid parallel systems, design for failure, and build institutional capability. In the end, the real moat is not model ownership but state capacity.
AI sovereignty in the Global South is an ongoing negotiation between capability, constraint, and legitimacy—and success lies in the ability to make informed, accountable choices that serve the public interest.