European Strategic Autonomy In The AI Race


Strategic autonomy has become a central organising concept in debates about the European Union’s future geopolitical orientation, yet the concrete policy areas through which it should be pursued remain contested. Frontier technologies, and artificial intelligence in particular, constitute a critical test case because they underpin both economic competitiveness and national security capabilities. AI especially has captured the attention of policymakers around the world since the release of ChatGPT in late 2022. The global contest for leadership in artificial intelligence is currently led by the United States and China, with a small number of technologically advanced middle powers occupying niche positions. By contrast, the European Union combines regulatory influence and strengths in specific segments of the AI value chain with a structural lag in frontier model development, fragmented investment, and a predominant focus on governance rather than capability-building. Still, the EU retains structural strengths and, in some areas, even crucial bargaining power. Given these dynamics, Europe must adopt a coherent and well-defined strategic posture to avoid being constrained by external competitors.

Why AI matters for Europe

In 2018, the European Commission launched the EU AI strategy to frame all future policies around the emerging technology. But it was not until 2025, when it published the AI Continent Plan outlining the bloc’s geopolitical goals, that the EU began speaking of competitiveness and sovereignty. And only in October 2025, with the Apply AI strategy, did the EU directly mention “external dependencies of the AI stack”.1 This is the core of the current issue with the direction of AI development worldwide and its implications for Europe. If Europe is dependent on external actors for a transformative technology like AI, the consequences would be felt across all political domains, from the economy to national security. On the security side, for example, Europe would risk depending on foreign powers for critical communication and data analysis systems. Furthermore, if advanced AI revolutionises industry while Europe remains dependent on foreign models, the competitiveness of European industry would be jeopardised. This is especially true since the EU is experiencing a productivity crisis relative to other powers, and manufacturing accounts for roughly 20% of EU GDP and up to 80% of its exports.2

Europe’s strategic assets

Europe is often portrayed as a passive actor, subject to the AI breakthroughs of the US and China. What is less known, though increasingly acknowledged, is that the EU is an unmatched power, even a monopolist, in some crucial areas of the AI development cycle.3 The most important is ASML’s monopoly on EUV lithography machines. EUV lithography is the first and most fundamental step in producing the chips on which frontier AI models are trained. These machines use extreme ultraviolet light to imprint ultra-fine circuits onto silicon wafers, enabling the production of the cutting-edge, high-performance chips used in AI systems. Without EUV machines, AI labs cannot build frontier models, since only the most advanced chips can sustain the enormous compute needed to train them. Europe also has a strong educational system that produces an excellent stream of AI experts: it has 30% more AI researchers relative to population than the US, and its university network consistently ranks among the best in the world, including for AI education.4 Lastly, the EU is still the largest trading bloc in the world. This grants it considerable market power, making it an indispensable market for foreign AI companies seeking to monetise their huge investments in AI models. Such bargaining power allows the EU to push through regulations and impose conditions on foreign firms, such as higher degrees of localised data centres and investment. This is the well-known Brussels effect, which has already worked with previous pieces of digital legislation, such as the GDPR, now a trendsetter for data privacy management. A similar effect could be achieved by the full implementation of the EU AI Act, which could in turn indirectly expand Europe’s influence over external competitors.

Europe’s structural weaknesses

Despite these specific areas of strength, the broader structural outlook remains challenging. Capital, computing power, and talent are fragmented across member states and regions, which raises the cost of any AI investment by denying it the economies of scale it needs.5 If individual states fund their AI programmes on their own, they risk duplicating efforts and cutting smaller nations out of the race, pushing the latter towards foreign off-the-shelf products to cover their needs. Capital-market fragmentation is the main reason European start-ups so often relocate, cease operations, or are acquired by US firms or funds. This connects directly to the second structural weakness: talent flight. The EU is currently a net exporter of top-tier AI talent; these are the profiles most likely to develop innovative products with direct economic benefits and new business creation, but they typically need large-scale, risky investment, conditions far better met in the US. At the raw-resource level, the EU is heavily dependent and uncompetitive in computing power availability and energy costs. Computing power is a fundamental metric for developing advanced AI models: simplifying, the more computational power is available, the more data a model can be trained on during development. This metric depends directly on the stock of advanced AI chips, which require massive upfront investment as well as continuous renewal, since they become obsolete quickly. Current estimates suggest that the European Union accounts for approximately 5% of global advanced computing capacity, while policy targets aim to raise this share to around 16%, roughly in line with the EU’s contribution to global GDP.
At the same time, industrial electricity prices in Europe remain significantly above those in the United States and China, often up to three times higher, raising the operational costs of data‑centre infrastructures that underpin large‑scale AI training. These cost differentials weaken Europe’s attractiveness as a location for energy‑intensive AI investment and exacerbate existing scale disadvantages.

Defence as a test case: DAIDS and Sovereign Military Data

European AI sovereignty should not be pursued solely by catching up with competitors; rather, the EU should target its interventions at strategic sectors, securing critical infrastructure. This means sectoral policies in domains such as defence tech and critical infrastructure, while accepting some level of dependence in less sensitive areas like consumer applications. The Defence Artificial Intelligence Data Space (DAIDS) illustrates what such an approach could achieve in the military domain. DAIDS is a proposed EU-wide, federated framework for the trusted and sovereign sharing of defence-related data, aimed at overcoming today’s national fragmentation in command, control, communications, computers, intelligence, surveillance, target acquisition and reconnaissance (C4ISTAR). It would strengthen interoperability among AI-enabled data-sharing platforms used by European armed forces, so that sensors and communication systems can operate across borders and services. It would also set common technical and governance standards and align with the EU AI Act and the EU Data Act. As a result, DAIDS would make the data backbone of European military communications both interoperable and sovereign.6 Furthermore, DAIDS aims to connect with EU industrial policy tools such as the European Defence Industry Programme (EDIP) and Security Action for Europe (SAFE) to create financial incentives for Member States to adopt the framework, for example through preferential loan terms or grant conditions for DAIDS-compliant C4ISTAR projects. Such a sector-specific intervention does not require Europeanising every layer of defence hardware; instead, it prioritises the digital infrastructure that structures how information is collected, processed, and disseminated across national armed forces.
DAIDS therefore exemplifies an open-sovereignty approach, in which the European Union seeks to retain control over a critical node in its security architecture (defence data exchange) while preserving openness to a diversity of technological solutions at other layers of the system.

Conclusion

Europe is rightly trying to enter the AI race and is now deciding on the strategy with which to position itself among the global players. Nevertheless, it suffers key comparative disadvantages, from fragmentation to high energy costs, which will probably not disappear in the short term. It is thus necessary to target policy interventions at securing critical infrastructure while remaining open to the rest of the world’s AI innovation and continuing to nurture homegrown capacities. Europe could even use its monopoly positions in key parts of the AI cycle as bargaining chips to secure its technological independence, fostering European “indispensability”. If a project like DAIDS proves effective, it could serve as a blueprint for other targeted actions in different areas, including the economy. Such open-sovereignty approaches are flexible tools well suited to the EU’s current unpredictable circumstances and needs.