Kathmandu, known as the “City of Temples,” stands as a living paradox — a place of profound spiritual heritage shadowed by a widening gap between proclaimed values and practiced integrity. This ethical crisis, marked by religious hypocrisy, has eroded public trust and weakened the moral foundations of society.
There is a need for ethical renewal in both traditional leadership and the emerging domain of Artificial Intelligence (AI) governance, particularly in the context of Global Ethics Day 2025 and the theme “Ethics Re-Envisioned in the AI Era.”
Misconduct among politicians, bureaucrats, and even religious figures has blurred the boundary between devotion and deception. Many exploit public piety as a facade to conceal corruption, power abuse, and personal gain. These contradictions—professing faith while practicing greed, preaching compassion while sowing division—fracture social harmony and corrode the essence of ethical leadership. The failure to embody honesty, humility, and justice not only betrays spiritual ideals but undermines social cohesion and responsible governance.
The New Ethical Challenge: The AI Era
With the approach of Global Ethics Day 2025, the call for moral renewal extends to the challenge of governing Artificial Intelligence. The focus must shift from reactive ethics—responding to harm after it occurs—to anticipatory frameworks that foresee and prevent AI-induced risks. Ensuring that AI, a powerful force, is guided by fairness, transparency, and equity demands vigilance against bias, manipulation, and discrimination, and renewed scrutiny of data governance amid growing digital surveillance.
Responsible AI for Nepal: Ethics, Policy, and Inclusive Governance
Responsible AI is a crucial focus for Nepal, given its unique social, economic, and cultural landscape. The push for responsible AI is a practical approach centred on implementing ethical principles in the development, deployment, and governance of AI systems. Nepal must also improve the quality of its education system to reap the benefits of an emerging technology that, like electricity, is becoming general-purpose infrastructure. Key to this is ensuring accountability, transparency, regulatory compliance, risk management, and operational measures to prevent harms such as bias or misuse. Responsible AI involves concrete steps for fairness, privacy protection, security, and sustainability. The objectives for Nepal include:
• Understanding the importance of Responsible AI in Nepal's context.
• Exploring key ethical considerations in AI development and deployment.
• Discussing Nepal's current AI policy environment and identifying gaps and opportunities.
• Highlighting the need for inclusive governance that ensures equitable participation, especially of marginalized groups.
• Proposing actionable recommendations for fostering ethical, policy-driven, and inclusive AI practices in the country.
AI Ethics and Responsible AI: The Distinction
AI ethics is essential for ensuring that technological advancement serves the public good by promoting human dignity, fairness, and accountability. It highlights AI's potential to strengthen governance, reduce corruption, expand access to healthcare, education, and agriculture, and empower rural populations. However, it also warns of risks such as algorithmic bias, deepfakes, electoral manipulation, and the rural–urban digital divide. Ethical AI is the broader philosophical approach. It focuses on aligning AI systems with moral principles and social values like fairness, privacy, non-discrimination, and avoiding harm. It involves considering the long-term societal impacts of AI, the protection of individual rights, and the ethical implications of AI decisions, addressing questions of right and wrong in AI usage.
Ensuring Responsible AI (RAI) outcomes within organizations is a multifaceted challenge that calls for a sociotechnical, holistic approach spanning people, processes, and tools.
The key points revolve around accountability, AI governance, AI literacy, and dedicated leadership.
The Sociotechnical Challenge of Responsible AI
The central challenge is sociotechnical: a technical problem with significant social dimensions. Organizations must address it holistically by establishing the right organizational culture for RAI, implementing effective AI governance processes, and utilizing appropriate AI frameworks and supporting tools.
The Problem of Accountability
When asked who is accountable for responsible AI outcomes, common but problematic answers include:
1. "No one": Acknowledged as "overly terrifying," indicating a severe lack of oversight.
2. "We don't use AI": A denial, as employees are likely using AI informally.
3. "Everyone": This often means no one is truly accountable, as the responsibility is too diffuse.
The task of those accountable for RAI is vast, encompassing:
• Changing value alignment within the organization.
• Maintaining model inventory.
• Tracking a growing number of regulations globally.
• Ensuring models are not just lawful but also ethical ("lawful but awful" models are a risk).
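The inventory and compliance tasks above can be sketched as a simple model registry. This is a minimal illustration: the class names, fields, and the "ethics review" flag are assumptions for this sketch, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a hypothetical AI model inventory."""
    name: str
    owner: str                    # the accountable person or team
    use_case: str
    jurisdictions: list = field(default_factory=list)  # regulations to track
    reviewed_for_ethics: bool = False  # guards against "lawful but awful" models

class ModelInventory:
    def __init__(self):
        self._records = {}

    def register(self, record: ModelRecord):
        self._records[record.name] = record

    def unreviewed(self):
        """Models that may be lawful but have not passed an ethics review."""
        return [r.name for r in self._records.values() if not r.reviewed_for_ethics]

inv = ModelInventory()
inv.register(ModelRecord("loan-scorer", owner="risk-team",
                         use_case="credit scoring",
                         jurisdictions=["EU AI Act"]))
print(inv.unreviewed())  # → ['loan-scorer']
```

Even a registry this small makes the accountability question concrete: every record names an owner, and the `unreviewed()` query surfaces models that cleared legal checks but still lack an ethics sign-off.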
Championing AI Literacy and Applied Training
A critical component of the RAI solution is AI literacy, implemented through applied training for key audiences:
1. Training for AI Model Governors
This training focuses on operationalizing core RAI principles.
• Key Focus: Teaching how to operationalize principles like fairness, explainability, and transparency.
• Outcome: Defining both functional and non-functional requirements for AI systems and their use.
2. Training for Model Builders and Buyers
This training focuses on the practical application across the AI supply chain.
• Key Focus:
• Selecting appropriate AI model use cases aligned with business strategy.
• Assessing and navigating risk and unintended effects for each use case.
• Utilizing fact sheets: teaching how to build them and, crucially, how to interpret audit results to take appropriate action.
• Methodology: The training benefits greatly from a diverse and multidisciplinary team approach.
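The fact-sheet skill described above—building the sheet and interpreting audit results against it—can be sketched as follows. The field names, thresholds, and audit figures are illustrative assumptions, not any formal fact-sheet standard.

```python
# A minimal, hypothetical "fact sheet" for an AI model, plus a helper that
# interprets audit results against the sheet's declared thresholds.

fact_sheet = {
    "model": "crop-price-forecaster",
    "intended_use": "advisory price forecasts for smallholder farmers",
    "out_of_scope": ["automated lending decisions"],
    "audit_thresholds": {"max_error_rate": 0.10, "max_group_disparity": 0.05},
}

audit_results = {"max_error_rate": 0.08, "max_group_disparity": 0.12}

def interpret_audit(sheet, results):
    """Return the metrics that breach the fact sheet's declared limits."""
    breaches = {}
    for metric, limit in sheet["audit_thresholds"].items():
        if results.get(metric, 0) > limit:
            breaches[metric] = results[metric]
    return breaches

print(interpret_audit(fact_sheet, audit_results))
# → {'max_group_disparity': 0.12}
```

Here the audited error rate is within its limit, but the group disparity exceeds the declared threshold—exactly the kind of result a trained model governor must know to flag for remediation before deployment.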
The Necessity of Dedicated Responsible AI Leadership
There is a strong case for a dedicated leader or team with a "fronted mandate" to oversee RAI.
Why Dedicated Leadership is Crucial
• Preventing Cracks: Without dedicated leadership, AI governance can fall through the cracks, leaving the organization vulnerable to technology-associated risks.
• Ensuring Integration: A successful RAI leader ensures that ethics is woven into the very fabric of the organization, not just an afterthought.
Responsibilities of a Responsible AI Leader
• They must have a seat at the table (within the organization's decision-making structure).
• They must ensure there are seats at the table for others, including a Chief Ethics Officer.
• They must act across the entire AI lifecycle.
• They must make the process transparent and work organization-wide to implement the necessary changes.
• They must champion AI literacy holistically to ensure models reflect organizational values.
By investing in a responsible AI leader with a clear mandate, organizations can unlock the full potential of AI with controlled risk: driving innovation and building a culture of responsible, transparent AI use that leads to better decision-making and sustained business success.
The Rotary Four-Way Test as an Ethical Compass
The Rotary Four-Way Test is an ethical guideline designed to evaluate thoughts, words, and actions with these four questions: Is it the truth? Is it fair to all concerned? Will it build goodwill and better friendships? Will it be beneficial to all concerned?
In the context of AI regulation, this test can serve as a moral compass to guide ethical development and deployment, ensuring AI systems are truthful, fair, promote positive relationships, and benefit all stakeholders. It is advocated that AI regulation should align with such principles to achieve responsible AI innovation and use. Applying this test to AI governance is seen as a tool for decision-makers to ensure AI systems adhere to high ethical standards aligned with human values. A unified, ethical, and inclusive approach, guided by this test, is necessary for Nepal to harness AI for democratic resilience, social equity, and sustainable development.
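The Four-Way Test can be rendered as a simple review checklist for a proposed AI use case. The four questions come directly from the test; the pass/fail mechanics below are an illustrative assumption, not an official Rotary or regulatory tool.

```python
# The Rotary Four-Way Test as a gating checklist for an AI use case.
FOUR_WAY_TEST = [
    "Is it the truth?",
    "Is it fair to all concerned?",
    "Will it build goodwill and better friendships?",
    "Will it be beneficial to all concerned?",
]

def review_use_case(answers):
    """answers maps each question to True/False; all four must pass."""
    failed = [q for q in FOUR_WAY_TEST if not answers.get(q, False)]
    return ("approve", []) if not failed else ("revise", failed)

decision, issues = review_use_case({
    "Is it the truth?": True,
    "Is it fair to all concerned?": False,  # e.g. training data skews urban
    "Will it build goodwill and better friendships?": True,
    "Will it be beneficial to all concerned?": True,
})
print(decision, issues)  # → revise ['Is it fair to all concerned?']
```

The value of treating the test this way is that a "no" on any question blocks approval and names the specific concern, forcing decision-makers to revise the system rather than average away an ethical failure.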
Strategies for Ethical AI Governance
To navigate these complexities, several key strategies are essential:
• Establish clear guidelines and real-world examples to translate ethical principles into practice.
• Embed ethical considerations in both business and public policies to balance innovation with accountability.
• Provide digital skills and AI education, especially for underrepresented groups, and ensure inclusive public participation.
• Encourage the development of energy-efficient AI systems and evaluate their ecological footprints.
• Strengthen collaboration through international bodies such as the UN, ITU, and WEF to create harmonized standards that uphold protection, innovation, and justice.
The government has approved the artificial intelligence (AI) policy, with the goal of creating an enabling environment for AI development, expansion, and safe use. The policy establishes institutional, legal, and regulatory frameworks for AI governance while ensuring its ethical, transparent, and inclusive use across all sectors. Under the policy, the government plans to make laws and standards for a secure and sustainable AI ecosystem and an organisational structure for research, development, regulation, promotion, and use of AI. The government will establish an AI Regulation Council, chaired by the communications minister and including secretaries from various ministries, to develop standards and regulations. A National AI Centre will also be set up under the communication ministry to manage and facilitate AI development in Nepal.
Integrating specialized ethics professionals and mandating ethics within educational curricula, particularly in Nepal, is a critical step that directly addresses the challenges outlined above, especially moral renewal in leadership and responsible AI governance.
The integration of ethics must proceed on two complementary fronts: Professional Practice and Education/Curriculum Reform.
Professional Practice: Integrating Ethics Expertise
There is a compelling need to formalize the role of professionals specializing in ethical practice across all major sectors. This involves:
• Establishing Ethics Advisory Groups: integrating specialized groups of professionals (ethicists, compliance officers, legal experts, and social scientists) within government bodies, private corporations, and public institutions. These groups would serve an anticipatory function, moving beyond reactive ethics by pre-emptively assessing the ethical impact of new policies, technologies (especially AI), and business models.
• Appointing Ethics Champions: as an implementation strategy within the new AI Policy, the AI Regulation Council and the National AI Centre should be mandated to appoint and empower dedicated Ethics Champions. These individuals would be responsible for translating the Council's broad ethical standards (e.g., fairness, transparency, non-discrimination) into actionable technical guidelines and internal compliance protocols for AI developers and public-service deployers.
• Strengthening professional codes: encouraging professional associations (e.g., medical, legal, engineering, financial) to develop and enforce robust, contemporary codes of conduct that address digital-era dilemmas and hold members accountable for breaches of public trust, thereby rebuilding the ethical foundation eroded by deception and corruption among leaders.
Curriculum Reform: Embedding Ethics into Education
To ensure a sustainable cultural shift toward integrity, systematic ethics integration into Nepal's educational curriculum is essential, as also noted in the government's AI policy regarding skill development from school to university levels.
Multi-Stakeholderism for Responsible AI
Responsible AI fundamentally means designing, deploying, and governing systems that align with human rights and democratic values. This requires a fusion of expertise from across the spectrum.
Key mechanisms include:
• Transparency and Explainability (T&E) by design: technologists and HCI experts develop the T&E mechanisms (e.g., model cards, explainability tools), while affected user groups and civil society define what needs to be transparent and how it should be explained (i.e., in plain, accessible language), ensuring T&E is meaningful, not merely technical.
• Robust accountability and oversight: legal experts and policymakers establish regulatory frameworks (e.g., impact assessments, auditing requirements), while NGOs and civil society often act as independent watchdogs, driving accountability by exposing ethical breaches and pushing for mechanisms of redress for those harmed by AI decisions.
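The model-card mechanism mentioned above can be sketched as a record that pairs each technical field with a plain-language counterpart, so transparency is meaningful to affected users rather than merely technical. The structure and field names below are illustrative assumptions loosely inspired by the model-card idea, not a fixed schema.

```python
# A minimal model-card sketch: technical facts for auditors, plain-language
# answers for the public the system affects.
model_card = {
    "name": "scholarship-eligibility-screener",
    "technical": {
        "architecture": "gradient-boosted trees",
        "training_data": "2019-2024 application records",
        "accuracy": 0.91,
    },
    "plain_language": {
        "what_it_does": "Suggests which applications a human officer reviews first.",
        "what_it_does_not_do": "It never makes the final award decision.",
        "how_to_contest": "Contact the review office to request a manual re-check.",
    },
}

def render_for_public(card):
    """Emit only the plain-language section, one line per entry."""
    return [f"{key}: {text}" for key, text in card["plain_language"].items()]

for line in render_for_public(model_card):
    print(line)
```

Keeping the two sections in one artifact lets civil-society reviewers check that every technical claim has an accessible explanation, including how an affected person can contest a decision.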
Adaptive Governance:
AI evolves too quickly for static legislation. The multi-stakeholder model promotes adaptive governance (often called "soft law" or "sandboxes")—non-binding principles, technical standards (like those from ISO/IEC), and voluntary guidelines created by industry, academia, and intergovernmental bodies (e.g., UNESCO, OECD). This ensures ethical frameworks can evolve with the technology.
Inclusivity for Equitable AI
Equitable AI ensures that the benefits of the technology are widely distributed and that its harms do not disproportionately affect already marginalized groups. Inclusivity is the tool for achieving justice and fairness.
Key focus areas include tackling algorithmic bias and data gaps. Academics and ethicists introduce concepts like intersectionality to governance, recognizing that discrimination occurs at the intersection of multiple identities (race, gender, disability). Stakeholders from the Global South and local communities challenge the dominance of Global North values in AI frameworks and provide the context-relevant data and knowledge necessary to build non-discriminatory models. "Nothing for us, without us" is the guiding principle, requiring the direct participation of those most impacted.
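One concrete way algorithmic bias is measured is the demographic parity difference: the gap in favourable-outcome rates between two groups. The metric is standard; the sample data and the review threshold below are made-up illustrations.

```python
# Demographic parity difference: a minimal algorithmic-bias check.
def selection_rate(outcomes):
    """Share of favourable (1) decisions in a group's outcome list."""
    return sum(outcomes) / len(outcomes)

def parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# 1 = favourable decision, 0 = unfavourable (hypothetical sample)
urban = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 6/8 = 0.75
rural = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 2/8 = 0.25

gap = parity_difference(urban, rural)
print(round(gap, 2))  # → 0.5
```

A gap of 0.5 would be far above any plausible review threshold, signalling the rural–urban disparity the text warns about; in practice such a finding should trigger investigation of the training data and decision rules, with affected communities at the table.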
Promoting Access and Literacy:
Governments and NGOs work together to address the digital divide by promoting AI literacy and digital skills training, ensuring that the public, policymakers, and especially underrepresented populations, can engage with and understand AI systems.
Fostering Local Ownership:
In developing regions, an inclusive approach ensures that AI solutions are not simply imported but are tailored to local needs (e.g., using low-resource languages, addressing local infrastructure constraints) and rooted in local ownership and control.
Collaboration for Sustainable AI
Sustainable AI refers to the technology's long-term environmental, economic, and social viability, aligning with global objectives like the UN Sustainable Development Goals (SDGs). A key aspect is ecological responsibility ("green compute"): AI's immense energy consumption is a sustainability concern. Industry and academia must collaborate to develop energy-efficient models and measure the full environmental footprint, while policymakers can establish regulatory incentives and standards, such as promoting Green Compute Coalition initiatives, to ensure AI growth aligns with climate goals.
Inclusive Growth and Labor Market Impacts:
Business leaders, labor unions, and policy-makers must collectively anticipate and manage AI's impact on the future of work. This includes joint efforts to fund reskilling programs, ensure fair labor practices in AI-driven workplaces, and distribute economic benefits equitably.
Global Cooperation and Shared Standards:
AI governance is inherently a global challenge. A multi-stakeholder model facilitates international collaboration through global bodies (UN, ITU, WEF) to create shared ethical guardrails and standards that can be adopted across national borders, preventing a fragmented and unsustainable regulatory landscape.
In summary, the multi-stakeholder, inclusive approach is the only way to harmonize the conflicting objectives of innovation (driven by business/tech) with protection (driven by civil society/law/ethics) and equity (driven by affected groups/NGOs). It replaces unilateral control with shared responsibility, leading to outcomes that are more legitimate, accepted, and durable.
Alignment with AI Governance
The government's commitment (currently at the proposal stage) to setting up the AI Regulation Council and the National AI Centre provides a unique opportunity. Integrating AI ethics into the curriculum directly addresses the policy's goal of developing human resources with the necessary skills, ensuring that future AI professionals are not just technically proficient but also equipped to build fair, transparent, and inclusive AI systems. Public trust must be rebuilt as well: by aligning educational outcomes with the policy's ethical goals, society can move toward restoring the collective trust damaged by professional misconduct, reinforcing that technology will serve the collective good, not individual exploitation.
Conclusion
Ultimately, the convergence of ethical renewal in leadership and responsible AI governance offers a pathway toward restoring trust, strengthening human values, and ensuring technology serves the collective good. Only by aligning moral integrity with technological progress can society—beginning with the City of Temples—reclaim its ethical compass and move toward an enlightened and equitable future.