The market for artificial intelligence products, particularly generative artificial intelligence, is dominated by a handful of globally operating entities. This influential cohort, which includes behemoths such as OpenAI, Microsoft, Google, Meta, and a few others, commands significant capital, unparalleled access to data, and a highly skilled technical workforce. These companies are the de facto driving force behind the development and ownership of the leading technologies in the field, spearheading everything from fundamental research to production and international distribution.
Such concentration presents a formidable challenge for Europe, which is struggling to emerge as a leading power in this field. Despite being home to a considerable pool of high-level scientific and technical talent, the EU appears to lack the conditions needed to bridge an ever-widening technological gap. Its challenges are not merely financial, nor restricted to access to the computational power and datasets required for training; they also encompass Europe’s ability to attract and retain talent that can compete on a global scale.
Europe’s Backseat Role in the AI Era: Concerns and Implications
This scenario yields a stark consequence: a significant technological dependence on non-European nations, with attendant strategic and geopolitical disadvantages. In the burgeoning era of artificial intelligence, the European Union risks relegating itself to a defensive stance, with potential costs to its global influence and decision-making autonomy. Even from an established Atlanticist perspective, complete reliance on imported technology and skills in a sector as crucial as AI is a vulnerability that cannot be countenanced. Holding the world’s most advanced regulatory framework for artificial intelligence is worth little if we are mere consumers of the technology we seek to regulate.
This sentiment is echoed by industry leaders. Recently, Sam Altman, the CEO of OpenAI, voiced his scepticism regarding the trajectory of the AI Act, specifically the high-risk classification attributed to foundation models. His remarks hinted that unless Brussels rethinks its stance, Europe risks losing access to pivotal tools such as ChatGPT.
A significant concern linked to the use of “non-sovereign” generative artificial intelligence products is the safety of data, encompassing not just personal information but also strategic data. This concern is relevant in both the training and application phases. When a generative artificial intelligence model creates content based on personal or strategically important data, it introduces tangible risks, including profiling, transfers outside EU jurisdiction, and potential data leaks or breaches.
Diving deeper, the application of generative artificial intelligence (GAI) in cyber warfare is well documented. In this context, GAI can serve as a potent tool to fortify both defensive and offensive capabilities, detect and neutralise cyber threats, bolster the protection of critical infrastructures, and formulate more robust prevention and response strategies. Given these implications, it becomes paramount to maintain absolute control over the entire supply chain behind the GAI models in use.
Protecting Cultural Wealth in the AI Era
Moving on to another significant concern, the preservation of cultural richness is paramount. Generative artificial intelligence systems, rooted in language models, can inadvertently erode cultural and linguistic diversity if they are predominantly developed and trained in English. In order to uphold digital sovereignty, it is crucial to engineer national or pan-European foundational models. These models should reflect and respect the linguistic nuances and cultural specificities inherent in every European country.
Charting a Seven-Step Path towards European Technological Autonomy
In order to redress the balance and foster a greater level of autonomy for Europe in the AI landscape, we must ensure rigorous EU oversight across all echelons of the AI ecosystem. This warrants the design and deployment of a multi-faceted framework, drawing parallels to a value chain assessment tool recently proposed by McKinsey in the field of generative artificial intelligence:
- Fostering Fundamental and Applied Research: The ethos of the Entrepreneurial State and mission-oriented innovation underscores the importance of public support for both basic and applied research. This is seen as crucial for any strategic initiative within the technology sector, with a particular focus on dual-use applications.
- Ensuring Production of Dedicated Hardware: This is a pressing challenge. The global semiconductor race offers a roadmap for Europe to follow. Europe must bridge the gap accrued over the years and cultivate the capability to manufacture vital hardware components, including graphics processing units (GPUs) and tensor processing units (TPUs), indispensable for the training and deployment of AI algorithms.
- Building Foundational Models: It is imperative to develop an indigenous European capability to create foundational artificial intelligence models, akin to GPT-4, BERT or DALL-E, while ensuring compliance with the regulatory framework established by the AI Act and the GDPR.
- Achieving Operationalisation: This step entails mastery of methodologies and paradigms such as MLOps (Machine Learning Operations) and AIOps (Artificial Intelligence Operations), which are vital for the successful implementation, management, and iterative improvement of AI models within an operational context.
- Developing Application Models and Services: Beyond creating foundational models, it is critical to design customised solutions and services that cater to the unique needs of public administrations, businesses, and end-users.
- Nurturing Skills: Expertise must be fostered not only in STEM but also in legal, risk-management, ethics, and sustainability fields. An openness to multidisciplinary collaboration is also crucial.
- Establishing Regulatory Sandboxes and Access to Capital: Last but not least, the EU ecosystem should provide for regulatory sandboxes for start-ups and facilitate access to both public and private capital.
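To ground the operationalisation step above, here is a minimal sketch of an MLOps-style quality gate: a model is retrained, evaluated on held-out data, and promoted only if it clears a threshold. The function names, the synthetic dataset, and the 0.9 threshold are illustrative assumptions, not a standard; a production pipeline would publish to a model registry rather than serialise to disk.

```python
# Minimal MLOps-style "train, evaluate, gate, promote" sketch.
# Dataset, names, and threshold are illustrative assumptions.
import os
import tempfile

import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_and_evaluate(seed=0):
    # Synthetic stand-in for a real training dataset.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model, accuracy_score(y_te, model.predict(X_te))

def promote_if_good(model, accuracy, threshold=0.9):
    # In a real pipeline this would push to a model registry;
    # serialising to a temp directory is a stand-in.
    if accuracy < threshold:
        return None
    path = os.path.join(tempfile.mkdtemp(), "model.joblib")
    joblib.dump(model, path)
    return path

model, acc = train_and_evaluate()
path = promote_if_good(model, acc)
```

The gate is the essential MLOps idea: deployment is an automated, criteria-driven decision rather than a manual hand-off, which is what makes iterative improvement of models manageable at scale.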
Leveraging the Open Source Revolution
The journey towards European technological autonomy in generative artificial intelligence will call for a strong alliance with open-source projects, tools, and culture. Indeed, a substantial portion of today’s most capable models are built on open-source platforms and have markedly fewer parameters than frontier systems, a trait that makes their training attainable even for small and medium-sized enterprises.
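To make the point about reduced parameter complexity concrete, a rough back-of-the-envelope calculation helps: fine-tuning with the Adam optimiser in full precision requires roughly 16 bytes of accelerator memory per parameter (weights, gradients, and two optimiser moments). The figures below are simplifying assumptions that ignore activations and parallelism overheads, but they show the order-of-magnitude gap between a small open model and a frontier-scale one.

```python
# Back-of-the-envelope accelerator memory for full-precision Adam fine-tuning:
# 4 B weights + 4 B gradients + 8 B optimiser state ≈ 16 B per parameter.
# All figures are rough illustrative assumptions.
def finetune_memory_gb(n_params, bytes_per_param=16):
    return n_params * bytes_per_param / 1e9

small_open_model = finetune_memory_gb(7e9)    # a 7-billion-parameter open model
frontier_scale = finetune_memory_gb(175e9)    # a GPT-3-scale model
```

Under these assumptions a 7-billion-parameter model needs on the order of 100 GB, a footprint an SME can rent from a cloud provider, whereas a 175-billion-parameter model needs thousands of gigabytes, which remains hyperscaler territory.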
This favourable alignment with open source technologies promotes democratisation of AI development, allowing for a more inclusive participation from diverse entities – be it startups or established corporations. Moreover, it fosters transparency, accelerates innovation through collective intelligence, and assists in the mitigation of AI biases, thus encouraging ethical AI practices.
However, this path to autonomy in generative artificial intelligence, while promising, is laden with complexities and challenges. It is an endeavour that the European Union must confront head-on if it aspires to carve out and sustain a leading role in the global technological arena.
Yet the challenge transcends mere technological dimensions. It is fundamentally an intersectional issue, deeply entwined with political, economic, cultural, and identity aspects. Policy-making, economic investment, the preservation of cultural heritage and linguistic diversity, and European identity as a whole all play crucial roles in this multifaceted journey.