
Regulating without stifling innovation: the dilemma facing emerging economies amid the rapid expansion of AI

By Dr. Tawhid CHTIOUI, Founding President of aivancity, the leading school for AI and data

Introduction

Artificial intelligence plays a central role in today’s global landscape, offering revolutionary potential in terms of economic innovation, social progress, and improvements to daily life. However, this technological revolution presents major legal and ethical challenges, particularly for emerging economies, which must not only bridge the technological gap but also develop regulatory frameworks tailored to their local realities.

Strict regulations such as the European Union's AI Act, the U.S. Executive Order on AI, and Canada's Artificial Intelligence and Data Act (AIDA) represent advanced, detailed frameworks designed to protect citizens from AI abuses. They establish stringent requirements for transparency, accountability, and oversight. However, applying these models directly to emerging countries raises significant questions: rigid Western rules could stifle innovation and may not be fully applicable given substantial cultural, economic, and institutional differences.

This dilemma requires emerging countries to consider hybrid and innovative approaches that strike a balance between protecting citizens and adopting AI in a flexible and pragmatic manner. But how can emerging countries truly reconcile effective regulation with the promotion of innovation? What specific factors must be taken into account when defining an appropriate legal framework? How can they anticipate and manage the economic, social, and ethical impacts associated with the widespread introduction of AI in these countries?

A legal framework in the making: balancing rigor and flexibility

Currently, most emerging economies are still exploring the best approaches to regulating artificial intelligence, balancing regulatory caution with a drive to encourage innovation.

In Africa, the African Union has taken a significant step forward by adopting, as early as July 2024, a continental strategy focused on ethical and responsible regulation, while emphasizing the need for an approach tailored to the local realities of member countries.

This strategy includes a specific component dedicated to AI regulation and safety, while prioritizing a flexible approach that allows member countries to gradually develop their own regulatory frameworks tailored to their specific contexts.

The document highlights several aspects essential to the successful integration of AI in Africa. It places particular emphasis on developing local human capital in order to build a skilled workforce capable not only of using but also of designing AI technologies that meet African needs and reflect African specificities. It further underscores the importance of improving digital infrastructure, such as internet connectivity and data centers, to ensure sovereign management of local data, and it encourages the revitalization of the AI-related economy, notably through support for innovative startups and the creation of a climate conducive to technological investment. Finally, it advocates for sustainable regional and international partnerships that allow African countries to benefit from shared expertise and technology transfer, while ensuring regular monitoring and evaluation of progress.

This strategy thus constitutes a balanced and ambitious model that integrates ethical, economic, cultural, and social considerations, ensuring that artificial intelligence makes a positive and sustainable contribution to the development of the African continent.

At the national level, legal frameworks governing artificial intelligence are often still in their infancy in many African countries. For example, Côte d’Ivoire is stepping up its efforts to combat digital disinformation ahead of the presidential elections, but does not yet have a comprehensive and structured framework for AI.

In Senegal, the political shift that took place in 2024 with the election of President Bassirou Diomaye Faye marked a new direction in technological development. The country abandoned the approach of the Emerging Senegal Plan (PSE) in favor of the “Senegal Horizon 2050” vision, centered on a structural, inclusive, and sovereign transformation. In February 2025, Senegalese authorities launched a new digital strategy titled “Technological New Deal,” with artificial intelligence as one of its pillars.

This strategy aims to integrate AI into public policy across all sectors, aligning it with national priorities: education, health, agriculture, governance, and entrepreneurship. It also calls for the development of a legal framework specific to AI, as well as a comprehensive reform of data protection, digital law, and cybersecurity. Particular emphasis is placed on technological sovereignty, the development of local expertise, and the promotion of African solutions based on Senegal’s linguistic and social realities.

Although still in the development phase, this strategy reflects a strong commitment not merely to endure the digital transition, but to steer it in an ethical and inclusive manner that is tailored to the country’s needs. Senegal thus aims to become a key player in the regional governance of AI in West Africa.

In Egypt, although initiatives such as the establishment of the National Council for Artificial Intelligence and the adoption of the Egyptian Charter for Responsible AI have been implemented to promote the ethical use of AI, the country does not yet have a specific national legal framework for AI. The Charter, adopted in 2023, aims to ensure the ethical use, deployment, and management of AI systems in Egypt by incorporating principles such as fairness, transparency, a human-centered approach, accountability, and security. However, the lack of specific AI legislation undermines the effective implementation of these principles.

These examples illustrate the efforts made by African countries such as Senegal and Egypt to integrate AI into their development strategies. However, the lack of specific and comprehensive legal frameworks on AI underscores the need for these nations to strengthen their legal infrastructure in order to ensure the ethical and responsible use of artificial intelligence.

Morocco, for its part, is taking a proactive and balanced approach: although there is not yet any legislation specifically dedicated to AI, the country relies on several existing frameworks, such as Law 09-08 on the protection of personal data and Law 05-20 on the cybersecurity of digital infrastructure. In May 2024, the Moroccan Minister of Justice announced the preparation of an ambitious bill aimed at specifically regulating AI and its uses, taking into account the potential challenges and threats associated with these technologies. This bill would include 17 articles covering personal data, governance (with the creation of a national committee dedicated to overseeing AI systems), and compliance (cybersecurity and privacy).

In addition, Morocco has strengthened its international commitment by establishing, in November 2023, an International Center for Artificial Intelligence under the auspices of UNESCO, with the aim of promoting AI in Africa through applied research, training, and local capacity building.

These initiatives clearly demonstrate Morocco’s commitment to developing an integrated public policy on AI that combines technological innovation, responsible governance, and respect for citizens’ fundamental rights.

In Asia, India is taking a flexible and adaptive approach, favoring sector-specific and incremental regulation. To date, India has no single overarching law dedicated exclusively to AI, preferring a pragmatic mix of policies, guidelines, and regulations tailored to local contexts and priority sectors. As early as 2018, the government think tank NITI Aayog set out an ambitious "National Strategy for Artificial Intelligence" aimed at positioning India as a global leader in key areas such as healthcare, agriculture, education, smart cities, and mobility.

Since 2022, this sector-specific strategy has been accompanied by a significant update to the overall regulatory framework through the "Digital India Act" bill, designed to regulate new technologies, including AI. Concurrently, a major law on personal data protection, adopted by the Indian Parliament in August 2023, provides for significant supplementary regulations to follow. In March 2024, the government went a step further by requiring AI providers to obtain prior approval before deploying experimental models, in order to prevent discriminatory biases and protect electoral integrity ahead of the parliamentary elections. Finally, India's active participation in the Global Partnership on Artificial Intelligence (GPAI, OECD, 2025) demonstrates its commitment to international efforts while preserving the national innovation capacity necessary for its technological development.

In December 2024, Malaysia launched a National Artificial Intelligence Office dedicated to AI policy development and regulation. This initiative aims to centralize AI-related efforts, provide strategic planning, promote research and development, and ensure regulatory oversight. Initial objectives include establishing an AI code of ethics, creating a regulatory framework, and implementing a five-year technology action plan through 2030. At the same time, Malaysia has formed strategic partnerships with major companies such as Amazon, Google, and Microsoft, which have invested in the country’s data centers, cloud infrastructure, and AI projects.

Singapore, which plays a leading role in developing governance and ethics guidelines for AI within the Association of Southeast Asian Nations (ASEAN), is actively collaborating with member countries to develop an AI implementation guide for the region’s public and private sectors. In 2023, Singapore updated its National Artificial Intelligence Strategy, originally launched in 2019, to reflect technological advancements and national priorities.

These examples illustrate the diversity of approaches adopted by emerging economies to regulate and promote the use of artificial intelligence, based on their national contexts and strategic priorities—approaches that are distinct yet converge toward a balanced regulation of AI. These emerging countries are actively seeking to strike a balance between the responsible adoption of legal frameworks inspired by international standards and careful respect for local cultural, economic, and institutional specificities. This constantly evolving approach is essential to enabling these countries to fully exploit the opportunities offered by AI while effectively protecting their citizens from potential abuses.

The Major Challenges Facing Emerging Economies in Regulating AI

Emerging economies face several major challenges in their efforts to regulate AI.

First and foremost, limited institutional capacity poses a significant obstacle: judicial and regulatory institutions, often underfunded and lacking technical expertise, struggle to develop and enforce appropriate regulatory frameworks, paving the way for potential abuses such as algorithmic bias, excessive surveillance, or privacy violations. Yet these very constraints can also be an opportunity to create hybrid, pragmatic, and innovative models that move away from sometimes overly rigid Western regulations, as seen in India, which favors sector-specific regulation tailored to the risks of each AI application.

Furthermore, the rapid rise of automation poses major social challenges, particularly in the area of labor law. Key sectors such as services, finance, and manufacturing could undergo profound transformation, threatening the jobs of the most vulnerable populations. Particular attention must be paid to the digital sector, where “click workers”—who are primarily responsible for data labeling—often work under precarious conditions and without adequate legal protection.

Finally, the issue of digital sovereignty is emerging as a fundamental strategic challenge. Emerging economies’ dependence on foreign technological solutions limits their control over critical infrastructure and sensitive data. To address this vulnerability, investing in autonomous local infrastructure—such as Morocco’s project for sovereign data centers and tailored language models—is a promising path forward. These initiatives not only strengthen economic resilience but also ensure better protection for citizens and their data, thereby laying the groundwork for effective digital sovereignty.

Conclusion: What kind of regulation for the future?

A balanced and tailored hybrid regulatory approach is the preferred path for emerging economies, combining protection, innovation, and regulatory flexibility. However, this approach must be accompanied by a thorough examination of the ethical issues inherent in AI. A robust ethical framework, based on universal principles such as transparency, fairness, explainability, and reliability, appears essential to guide innovation in AI. It also requires clear mechanisms for independent oversight and auditing to quickly identify and correct potential abuses. Emerging countries can draw on UNESCO's international recommendations and the practices of the Global Partnership on Artificial Intelligence (GPAI) while adapting them to their specific socio-economic contexts to establish their own ethical charters.

Several critical questions remain unanswered: How can emerging countries ensure that the data used to train AI models truly reflect their cultural and linguistic realities? How can they avoid increasing dependence on technological solutions developed primarily in Western contexts? What institutional and international mechanisms should be considered to promote more equitable and inclusive cooperation in the field of AI?

These questions are shaping the future debates on AI, calling for a more participatory and inclusive global governance framework in which emerging economies play an active role in setting international standards and developing technologies tailored to their specific needs. The goal is clear: to work together to build a digital future that is truly equitable and beneficial for all.

References

African Union (2024). Continental Strategy on Artificial Intelligence.
OECD (2025). GPAI: Toward Responsible Artificial Intelligence.
UNESCO (2023). Recommendation on the Ethics of Artificial Intelligence.
ISED Canada (2024). Artificial Intelligence and Data Act (AIDA).
Africa Cybersecurity Magazine
CNDP Morocco (2024). Draft Law on the Regulation of AI in Morocco.
NITI Aayog (2023). Principles for Responsible AI in India.
ILO (2023). The Future of Work in the Face of Automation in Emerging Countries.
Médias24 (2024). Morocco Considers a Legal Framework for Artificial Intelligence.
Le Matin (2023). UNESCO International Center for Artificial Intelligence in Morocco.
AISigil (2023). National Strategy for Artificial Intelligence in India.
Trésor (2023). Digital India Act and Data Protection Act in India.
Siècle Digital (2024). Regulation of AI models by Indian authorities.
