Dimitris Kollias, Research Fellow at ELIAMEP, argues that Artemis II signals the consolidation of a new space age in which the Moon is becoming a strategic, economic, and geopolitical frontier shaped by rival US- and China-led blocs, expanding commercial power, and growing competition over resources, rules, and orbital infrastructure. He contends that Europe remains relevant but structurally constrained by fragmentation and slow institutional adaptation, even as space is increasingly tied to security, competitiveness, and digital sovereignty. For Greece, he argues, this shift creates an opportunity to build selective strategic relevance through Earth observation, secure communications, maritime awareness, civil protection, and the integration of satellite infrastructure with sovereign AI and data-processing capacity.
Read the ELIAMEP Explainer here.
Traditionally pro-Fidesz, will Romania's ethnic Hungarians holding dual citizenship vote differently in the Hungarian parliamentary elections? A spotlight on the vote of Transylvania's Hungarians in the Hungarian legislative elections of 12 April. Silvia Marton, associate professor in political science at the University of Bucharest, sheds light on the stakes and the trends.
Written by Tristan Marcelin.
Introduction

Some history
The concept of a regulatory sandbox predates the AI Act. According to Arto Lanamäki et al., it first emerged in 2016 with the United Kingdom's financial technology (fintech) regulation. Studies suggest that regulatory sandboxes have reduced legal uncertainty and increased investment in fintech ventures. A 2022 EPRS publication also lists other sectors where regulatory sandboxes have emerged as test beds, including transport, energy, telecommunications and health. It adds that the UK and Norway have already established regulatory sandboxes for AI products, and notes that the European Parliament has called for introducing regulatory sandboxes in several resolutions since 2019.
Definition
AI regulatory sandboxes were first introduced in the proposal for a regulation on artificial intelligence (AI Act) published by the European Commission in April 2021. The final version of the AI Act, adopted in 2024, defines an AI regulatory sandbox as 'a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision'.
Benefits and risks
Regulatory sandboxes offer three main benefits: they can help regulators develop better policies, help innovators develop compliant AI products, and benefit consumers by bringing safer products onto the market. In a 2020 report, the OECD found that they may facilitate dialogue between authorities and new market entrants. A World Bank report confirms these benefits based on its study of the fintech sector. However, the World Bank report also warns of implementation risks: additional administrative burdens and a lack of resources could outweigh the benefits.
AI Act regulatory sandboxes

Obligations on Member States
EU Member States are required to ensure that their national competent authorities establish, or participate in, at least one AI regulatory sandbox, which should be operational by 2 August 2026. AI regulatory sandboxes aim to improve legal certainty and regulatory compliance, support the sharing of best practices by fostering cooperation, innovation and competitiveness, contribute to evidence-based regulatory learning, and speed up access to the single market. They are accessible on a voluntary basis and include specific measures targeted at SMEs and start-ups.
Implementation and coordination
The AI Act establishes a hybrid enforcement system whereby the Commission and the European AI Board assist Member States in setting up their AI regulatory sandboxes. National competent authorities are also obliged to coordinate with and report to EU-level entities, provide guidance, supervision and support within the sandboxes, and facilitate cross-border cooperation. Meanwhile, the Commission is required to adopt secondary legislation specifying how the AI Act is to be implemented, including the detailed terms and conditions for accessing the sandboxes. The European Data Protection Supervisor may also establish an AI regulatory sandbox for EU institutions.
Challenges

Design
Claudio Novelli et al. describe three phases of regulatory sandboxes: pre-testing, testing and post-testing. Designing a sandbox involves defining the variables of each phase, such as the eligibility criteria (pre-testing), the level of realism and replication of oversight (testing), and the exit pathway and streamlined conformity assessments (post-testing). They argue that the right balance must be struck between these variables to attract innovators while ensuring compliance. For instance, eligibility criteria should accommodate different situations and lead to a tailored track within the sandbox, since AI systems in early-stage development do not need the same support as those in late-stage development.
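The three phases and their design variables described by Novelli et al. can be sketched as a small, purely illustrative data model. All class and field names below are hypothetical choices for illustration; they are not drawn from the AI Act or from the cited studies:

```python
from dataclasses import dataclass

@dataclass
class PreTesting:
    # Pre-testing variables: who may enter, and on what track
    eligibility_criteria: list[str]
    tailored_track: str  # e.g. "early-stage" vs "late-stage" support

@dataclass
class Testing:
    # Testing variables: how realistic the trial is, how closely it is overseen
    real_world_conditions: bool
    oversight_level: str  # e.g. "light", "standard", "intensive"

@dataclass
class PostTesting:
    # Post-testing variables: how a participant leaves the sandbox
    exit_pathway: str
    streamlined_conformity_assessment: bool

@dataclass
class SandboxPlan:
    provider: str
    pre: PreTesting
    testing: Testing
    post: PostTesting

# A hypothetical plan for an early-stage SME provider
plan = SandboxPlan(
    provider="ExampleAI",
    pre=PreTesting(eligibility_criteria=["high-risk AI system", "SME"],
                   tailored_track="early-stage"),
    testing=Testing(real_world_conditions=True, oversight_level="standard"),
    post=PostTesting(exit_pathway="conformity assessment",
                     streamlined_conformity_assessment=True),
)
```

The point of the sketch is that each phase carries its own tunable variables, and a regulator's design choices amount to setting them differently per participant (here, an early-stage track).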
Fragmentation
The rules for AI systems are enforced at Member State level through national authorities. While Member States must ensure that these authorities have enough resources to set up and run their sandboxes, fragmented enforcement could result in some authorities receiving more resources than others, leading to uneven capacities. AI providers might therefore intentionally choose less stringent sandboxes, risking inconsistencies in the act's enforcement.
Time
Challenges related to design and fragmented implementation are compounded by time constraints. The AI Act provisions on regulatory sandboxes take effect on 2 August 2026. Since the Commission has not yet adopted implementing acts providing guidance, Member States have to act independently to design their sandboxes, recruit and train staff, and build capacity.
State of play and next steps

National implementation
In August 2025, Deirdre Ahern noted that of the 27 Member States, only one – Spain – has an AI regulatory sandbox that is up and running. Five are actively implementing their sandboxes, four have declared their intention to do so and 16 have not yet communicated their plans. Spain currently appears to be the most advanced Member State: its sandbox opened in 2025 and began hosting 12 high-risk AI systems. This initial experience enabled the Spanish authority, AESIA, to publish guidelines in December 2025 to support the implementation of these systems and their compliance with the AI Act. The act further obliges the Commission to develop a single, dedicated interface containing all relevant information on AI regulatory sandboxes, allowing stakeholders to interact with them.
Secondary legislation and omnibus
Under the AI Act, the Commission must adopt implementing acts specifying how the sandboxes are to be established, developed, implemented, operated and supervised. In December 2025, the Commission published a draft version and requested feedback by January 2026. In the recitals of the draft, the Commission insists on the need to ensure consistent implementation of the rules. In addition to the implementing acts, the Commission has proposed a new regulation, known as the digital omnibus on AI, to amend the AI Act. The proposal would grant the Commission the right to create an EU-level AI regulatory sandbox for AI systems under its supervision and strengthen coordination between national sandboxes. As of March 2026, the relevant European Parliament committees are examining the proposal.
Read this 'at a glance' note on 'AI regulatory sandboxes: State of play and implementation challenges' in the Think Tank pages of the European Parliament.