In the shadow of the Himalayas, a new frontier is emerging—one not of rock and ice, but of algorithms and data. Artificial Intelligence (AI) is no longer a distant concept from science fiction; it is a present-day reality with the power to reshape Nepal's economy, society, and governance. From optimizing agriculture and revolutionizing healthcare to personalizing education and streamlining public services, the opportunities are immense.
But with great power comes great risk. The global conversation has moved beyond fears of mere technical failure to a more profound concern: "artificial intention." This is the danger that AI systems, driven by technical ambition and corporate profit, may operate without genuine ethical intent, exacerbating inequalities, eroding privacy, and making decisions that affect lives without transparency or accountability.
For a country like Nepal, with its unique social fabric and developing digital infrastructure, these risks are not hypothetical—they are immediate. The time to act is now, before these technologies become deeply embedded without a guiding framework. Nepal must establish a formal architecture for AI governance: a structure designed to provide coordination, scientific grounding, and, most importantly, a moral compass for our technological ascent.
The Policy Blueprint is Ready: Now for the Architecture
The good news is that the foundational policy is in place. Recognizing the stakes, the Council of Ministers approved the National Artificial Intelligence (AI) Policy, 2025, on August 11, 2025 [1.1]. This is a monumental first step. The policy outlines a vision for an AI Supervision Council and a National AI Centre to oversee governance and implementation; the incorporation of AI education into the national curriculum to cultivate a sustainable workforce; and the strategic integration of AI in critical sectors such as agriculture, health, and disaster management.
These are commendable first moves, but they are just that—a blueprint. The current policy landscape must be reinforced with the detailed enforcement mechanisms, funding roadmaps, and robust ethical safeguards needed to navigate the complex challenges ahead. Nepal's low global ranking in AI readiness (150th out of 193 nations as of 2024) serves as a clear wake-up call.
Building the Architecture: A Three-Pillar Approach for Enforcement
To move from approved policy to an effective national defense against these risks, Nepal must build its governance architecture on three core pillars:
- The Evidence Engine: Institutionalizing Impartial Expertise
The newly proposed AI Supervision Council must be empowered to act as the nation’s "evidence engine"—impartial and technically rigorous.
- Mandate for Independence and Accountability: The Council must be structured to include independent academic researchers and provincial representatives, ensuring members serve in their individual, expert capacity, free from political or corporate pressure. This body must be the designated authority for addressing harms and biases, filling the current gap in accountability mechanisms.
- Proactive Risk Mapping: Its primary mandate must be to monitor algorithmic decisions across all seven provinces and guarantee a "right to explanation" for them. This means leveraging AI tools for poverty mapping and disaster risk assessment while rigorously auditing those same tools for bias that could misclassify vulnerable or marginalized populations; a minimal sketch of such an audit follows this list.
- Fostering Local Skills: The Council must aggressively implement the policy's goal to train 5,000 AI professionals within five years. Crucially, this requires specific, competitive financial strategies—beyond simply integrating curricula—to prevent the "brain drain" of skilled experts to other nations.
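What could such an audit look like in practice? The sketch below is illustrative only: it assumes the Council can assemble a model's predictions alongside true outcomes and a group label (province, in this hypothetical), and it compares false negative rates across groups, since a model that disproportionately misses at-risk households is precisely the misclassification risk described above.

```python
from collections import defaultdict

def false_negative_rates(records):
    """Per-group false negative rate: the share of truly at-risk
    cases (y_true == 1) that the model failed to flag (y_pred == 0)."""
    misses, positives = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            if y_pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

def flag_disparity(rates, tolerance=0.05):
    """Groups whose miss rate exceeds the best-performing group's
    by more than the tolerance warrant investigation by the Council."""
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > tolerance}

# Hypothetical audit records: (province, actually_at_risk, model_flagged)
records = [
    ("Karnali", 1, 0), ("Karnali", 1, 1), ("Karnali", 1, 0),
    ("Bagmati", 1, 1), ("Bagmati", 1, 1), ("Bagmati", 1, 0),
]
rates = false_negative_rates(records)
print(flag_disparity(rates))  # here Karnali's miss rate stands out
```

A real audit would also check false positive rates, calibration, and per-group sample sizes, but even this simple disparity check turns "auditing for bias" from a slogan into a repeatable procedure.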
- Anchoring Ethics in Nepal's Constitutional Fabric
The fight against "artificial intention" must be rooted in our own values. Nepal’s governance framework must explicitly translate constitutional rights into algorithmic requirements:
- The Foundational Data Protection Act is Non-Negotiable: While the existing Individual Privacy Act, 2075 (2018) provides a foundation, it lacks the institutional teeth and modern protections needed to manage AI's appetite for data. The National AI Policy acknowledges this need [1.3]. Without a strong, comprehensive, modern Data Protection Act and an independent regulatory authority with clear enforcement powers, the sensitive data required to power national AI systems remains vulnerable to misuse, eroding the public trust on which adoption depends.
- Mandatory Human Oversight and Transparency: For all "high-risk" applications (e.g., in healthcare diagnostics, policing, or judicial recommendations), the law must mandate Meaningful Human Review [3.2]. The AI's output can never be the final decision; a human professional must retain full final authority and accountability, particularly given the risks of deepfakes and AI-driven surveillance [1.3]. One way to make that requirement concrete in software is sketched below.
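The following sketch illustrates that gate under stated assumptions: the risk domains, the Recommendation record, and the sign-off flow are all hypothetical design choices, not anything the policy prescribes. The structural point is that the model's output is stored as a suggestion with an explanation attached, and nothing downstream can act on it until a named human signs off.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

HIGH_RISK_DOMAINS = {"healthcare", "policing", "judicial"}

@dataclass
class Recommendation:
    domain: str
    ai_output: str     # the model's suggestion, never the decision itself
    explanation: str   # plain-language rationale, per the "right to explanation"
    reviewer: str | None = None   # named human who signed off
    approved: bool | None = None  # the human's final call
    signed_at: datetime | None = None

def sign_off(rec: Recommendation, reviewer: str, approved: bool) -> None:
    """Record the human decision; the reviewer, not the model, is accountable."""
    rec.reviewer, rec.approved = reviewer, approved
    rec.signed_at = datetime.now(timezone.utc)

def act_on(rec: Recommendation) -> bool:
    """Refuse to act on any high-risk output that lacks a human sign-off."""
    if rec.domain in HIGH_RISK_DOMAINS and rec.reviewer is None:
        raise PermissionError("high-risk output requires Meaningful Human Review")
    return bool(rec.approved)

# Hypothetical flow: a diagnostic model suggests, a doctor decides.
rec = Recommendation("healthcare", "refer for TB testing",
                     "persistent cough and chest X-ray opacity")
sign_off(rec, reviewer="Dr. Sharma", approved=True)
print(act_on(rec))  # True only because a named clinician approved it
```

Recording the reviewer's name and timestamp also gives the Supervision Council an audit trail when a decision is later challenged.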
- Learning from the World, Building for Nepal
Nepal can leapfrog pitfalls by adapting global lessons to its unique context, moving beyond the exploration phase that most governments are currently in.
- Regulatory Sandboxes for Local Startups: To nurture the domestic AI ecosystem, the government must utilize the policy's planned regulatory sandboxes. These safe, limited testing environments allow small local firms to test high-risk applications (like predictive agriculture models) under expert supervision before general deployment, supporting innovation while maintaining strict control.
- Unifying Standards and Cross-Border Risk: The National AI Centre must proactively align Nepal's standards with international frameworks (like those from UNESCO and the OECD). This makes our AI systems interoperable, facilitates international investment, and ensures ethical baselines are globally respected. Furthermore, Nepal must prioritize regional cooperation in South Asia to address cross-border AI risks like coordinated misinformation and cybersecurity threats.
Integrated AI Governance: Healthcare as a High-Risk Test Case
The integration of AI into Nepal's healthcare system (from remote diagnostics to predictive analytics) offers immense opportunities to overcome infrastructure gaps, but it poses significant risks. The proposed Three-Pillar Architecture is essential for managing these risks, particularly the danger of "artificial intention" impacting patient care.
Managing Algorithmic Bias in Diagnostics: The AI Supervision Council's mandate for Proactive Risk Mapping must prioritize auditing AI models used for diagnosing prevalent diseases (like tuberculosis or diabetic retinopathy). Such audits would ensure these models are not biased against populations with limited healthcare access, a bias that could lead to the misclassification of vulnerable or marginalized patients.
Transparency in Patient Decisions: The Council must ensure that every AI-assisted diagnostic output can be explained to both the doctor and the patient, closing the current accountability gap around technical failure and bias. One lightweight way to meet that obligation is sketched below.
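As a purely hypothetical illustration of an explainable diagnostic output, the sketch below trains an inherently interpretable model (a logistic regression over invented screening features) and reports each feature's contribution to a referral suggestion, so a clinician can relay the reasoning in plain terms. Real systems would need clinically validated features, data, and wording.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented screening features and toy training data, for illustration only.
features = ["cough_weeks", "weight_loss", "night_sweats", "contact_history"]
X_train = np.array([[0, 0, 0, 0], [1, 0, 1, 0], [3, 1, 1, 1], [4, 1, 0, 1],
                    [0, 1, 0, 0], [2, 0, 1, 1], [5, 1, 1, 1], [1, 0, 0, 0]])
y_train = np.array([0, 0, 1, 1, 0, 1, 1, 0])  # 1 = suggest TB referral

model = LogisticRegression().fit(X_train, y_train)

def explain(patient):
    """Rank each feature's contribution (coefficient x value) so the doctor
    can tell the patient which inputs drove the referral suggestion."""
    contributions = model.coef_[0] * patient
    ranked = sorted(zip(features, contributions), key=lambda fc: -abs(fc[1]))
    return [(name, round(c, 2)) for name, c in ranked if c != 0]

patient = np.array([3, 1, 1, 0])
print(model.predict(patient.reshape(1, -1))[0])  # the suggestion, not the decision
print(explain(patient))  # e.g. [('cough_weeks', ...), ('night_sweats', ...)]
```

For black-box models the same obligation could be met with post-hoc attribution tools, but an interpretable baseline keeps the explanation auditable end to end.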
Anchoring Ethics in Constitutional Fabric and Protecting Sensitive Medical Data: The urgent need for a strong, comprehensive Data Protection Act is most acute in healthcare, where the most sensitive categories of data (biometric records, diagnostic images, health conditions) are involved. Without this law, the patient data required to train and run national health AI systems remains vulnerable to misuse, destroying public trust.
Ensuring Final Human Accountability: For high-risk applications like surgical robotics or clinical diagnostic recommendations, the law must mandate that the AI's output is never the final decision. A doctor must retain full, final accountability and authority, safeguarding against risks like deepfakes influencing clinical judgment.
Nurturing Local Health-Tech: The Regulatory Sandboxes allow local health-tech startups to safely test high-risk applications (e.g., predictive disease models or remote screening tools) under expert supervision before deployment. This encourages domestic innovation tailored to Nepal's specific disease profiles and social context.
Interoperable Health Systems: The National AI Centre must align Nepal's standards with international frameworks (like the OECD) to ensure AI systems are interoperable and adhere to global ethical baselines, which is critical for telemedicine and for attracting foreign investment in health-tech.
The Foundational Challenge: Bridging the Digital Divide
The success of AI in healthcare depends entirely on one foundational enabler: bridging the digital divide.
The high potential of AI to support remote diagnostics and telemedicine in rural communities cannot be realized if the current reality of low fixed broadband access (as of 2025) persists. If AI-powered diagnostics are accessible only in connected urban hospitals, they will not bridge the existing urban-rural healthcare gap; they will simply create a more dangerous "AI divide," in which only the digitally connected elite benefit from advanced health technologies.
In summary
Nepal's AI Governance Architecture, if properly implemented through these three pillars, will serve as the necessary moral compass to ensure that the transformative power of AI in healthcare is used to equitably reduce, rather than exacerbate, existing societal inequalities. The journey to governing AI is not a task for the government alone. It is a collective responsibility. The policy has given us the roadmap. Now, we must collectively lay the concrete for the architecture. Let us not be passive witnesses to the AI revolution. Let us be its architects.