AI as the “New Air”: Epistemic Asymmetry and the Case for a European Public Foundation Model
Introduction
In modern society, air quality is a public responsibility. No individual citizen can meaningfully inspect atmospheric chemistry or verify the origin of pollutants; instead, the state assumes responsibility for regulating and safeguarding a collective, technically opaque resource.
As foundation models become infrastructural components of digital life, a similar structural condition emerges. Implementers of agentic AI systems often lack meaningful access to the epistemic foundations of the models they deploy. Even where documentation is provided, the internal reasoning processes of large-scale models remain probabilistic, distributed, and non-traceable to specific training inputs.
This creates not merely an information gap, but a structural epistemic asymmetry.
The current regulatory architecture assumes that documentation and compliance obligations are sufficient to rebalance this asymmetry. This paper argues that such measures, while necessary, are insufficient. A shift toward state-guaranteed or publicly certified foundation models as infrastructural utilities deserves serious consideration.
1. The Structural Limits of Traceability
Article 53 of the AI Act requires providers of general-purpose AI models to document their training process and provide technical information to downstream actors.
However, access to documentation does not equal verifiability.
Even if a provider retained the entire training corpus—a practically and legally burdensome proposition—the probabilistic and distributed nature of neural networks prevents causal traceability between a given output and a specific data source.
Thus, the problem is not merely lack of data access. It is the mathematical structure of large-scale learning systems.
An implementer cannot realistically audit the ethical origin of a behavior by inspecting datasets, because:
- The model does not encode decisions in a linear, attributable form.
- Outputs emerge from high-dimensional statistical aggregation.
- Structural biases may be emergent rather than directly traceable.
The epistemic asymmetry is therefore structural, not accidental.
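The non-attributability claim above can be illustrated with a toy sketch (purely illustrative, using synthetic data and a linear model; it makes no claim about any real foundation model): in gradient-based learning, every parameter is an accumulation over the entire training set, so removing a single training example perturbs the whole weight vector rather than deleting one identifiable "memory".

```python
# Toy sketch: each learned parameter aggregates contributions from the
# whole training set, so no output traces back to a single source.
# Synthetic data; linear model trained by full-batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 8
X = rng.normal(size=(n, d))                   # stand-in "corpus"
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def fit(X, y, steps=2000, lr=0.01):
    """Full-batch gradient descent on squared error."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(X)
    return w

w_full = fit(X, y)
w_loo = fit(X[1:], y[1:])                     # drop one training example

# Removing one example shifts every parameter slightly: its influence is
# diffused across the whole weight vector, not localized to one place.
shift = np.abs(w_full - w_loo)
print("parameters changed:", int((shift > 0).sum()), "of", d)
```

Even in this trivially small setting, attributing a prediction to one training example requires retraining or influence estimation; at foundation-model scale, that cost is what makes causal traceability impractical for downstream implementers.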
2. From Market Product to Public Utility
The AI Act currently preserves a separation between:
- Private model providers
- Public regulators and sanctioning authorities
This separation protects market pluralism and avoids state monopoly. However, in the presence of structural epistemic opacity, it leaves downstream implementers exposed to intrinsic model defects they cannot meaningfully audit.
If foundation models function as infrastructural substrates—embedded across economic, administrative, and social systems—it becomes reasonable to consider treating certain classes of models as regulated utilities.
Under such a framework, the state would not replace innovation, but would:
- Provide or certify foundational checkpoints meeting strict safety thresholds
- Assume primary responsibility for structural defects
- Maintain independent audit bodies
- Offer SMEs and independent developers a trusted baseline model
This approach mirrors how states already govern other technically opaque systems, from power grids to environmental safety standards.
3. The “Electric Wiring” Model of Layered Liability
A public utility model does not eliminate individual responsibility.
The analogy is electrical infrastructure:
- The state or regulated provider guarantees safe electricity generation and distribution.
- The homeowner remains responsible for faulty internal wiring.
Applied to AI:
The state (or certified public body) guarantees the structural integrity of the foundation model.
The implementer remains responsible for:
- System integration decisions
- Privilege assignment and autonomy thresholds
- Oversight mechanisms
- Application-specific harms
Responsibility becomes layered according to control and knowledge.
4. Addressing the Objection of State Impartiality
A common objection is that a state cannot simultaneously provide AI systems and adjudicate disputes arising from them without compromising its neutrality.
This concern is legitimate.
However, existing public utility models offer institutional solutions:
- Independent regulatory authorities
- Judicial review independent of executive bodies
- Mandatory insurance schemes
- Transparent public audit processes
The key is not monopolization, but structural accountability.
The question is not whether the state should own all AI systems, but whether certain foundational models should be treated as infrastructural baselines under public guarantee.
5. Leveraging Existing European Frameworks
The AI Act already signals movement toward systemic oversight through:
- Regulatory sandboxes (Article 57)
- Risk-based classification
- Obligations tailored to general-purpose models
Expanding this trajectory toward a certified infrastructural layer would not overturn the Act’s architecture; it would extend its internal logic.
If the Union aims to promote trustworthy, human-centric AI (Article 1), then it must confront the structural limits of private traceability.
6. Institutional Coherence: If the Union Builds One for Itself
A practical objection arises: public-sector AI systems may differ significantly from general-purpose foundation models.
They may rely on:
- Sensitive administrative data
- Clearly delimited institutional purposes
- Narrow regulatory contexts
It is therefore not a given that a public administrative AI system could serve as a general-purpose model.
However, this objection dissolves once we distinguish between two layers:
- The foundation layer (core pretrained model architecture and baseline training corpus)
- The application layer (fine-tuning on sensitive administrative data and domain-specific constraints)
The foundation layer is structurally separable.
If the European Union develops or commissions AI systems for internal administrative use, it will necessarily invest in:
- Robust documentation of training procedures
- High auditability standards
- Traceable model versioning
- Institutional accountability mechanisms
- Long-term error logging and retraining processes
These requirements stem from democratic legitimacy and public accountability.
Once such a foundation model exists, the marginal cost of making its baseline version available to European SMEs, researchers, and developers would likely be minimal.
The Union would already have:
- A documented training pipeline
- A compliance-ready model
- Safety-evaluated checkpoints
- Governance mechanisms for updates
Extending the non-sensitive foundation layer to the broader European ecosystem would therefore not imply duplicating costs, but leveraging an existing certified infrastructure.
This is not the creation of a monopoly.
It is the recognition that:
If the Union must solve the problem of reliable, auditable AI for itself, it may already be building the safest possible baseline for everyone.
7. Economic Rationality and Strategic Autonomy
A European public foundation model would also:
- Reduce structural dependence on non-EU providers
- Lower entry barriers for SMEs
- Provide a trusted baseline for agentic experimentation
- Strengthen digital sovereignty
Rather than replacing private innovation, it would create a stable infrastructural substrate upon which market actors could build differentiated applications.
The proposal is therefore not ideological.
It is infrastructural and economic.
8. Public Neutrality as a Trust Multiplier
An additional consideration concerns the comparative willingness of private actors to contribute to a public infrastructure rather than to a private operator.
While participation incentives cannot be assumed, a public European foundation model may offer a structural advantage in one specific respect: competitive neutrality.
Contributing to a privately operated foundation model often implies strengthening a direct or potential market competitor. Large AI providers are vertically integrated entities whose business models depend on downstream commercialization of the same technological infrastructure.
In contrast, the European Union is not a market competitor in commercial AI deployment.
A properly designed public AI infrastructure would not:
- Monetize downstream applications
- Compete in product markets
- Exploit proprietary data for private gain
Under strict institutional separation between regulatory authority and model governance, a public foundation model could therefore function as a neutral substrate rather than as a strategic rival.
This neutrality may lower perceived competitive risk for contributors, especially SMEs that would otherwise reinforce dominant global actors.
However, this trust advantage is conditional.
It depends on:
- Clear legal firewalls between regulatory and infrastructural functions
- Transparent governance structures
- Binding limitations on data reuse
- Independent auditability
Public status alone does not generate trust. Institutional design does.
If these safeguards are credibly established, a European public foundation model could operate not merely as a technological artifact, but as a stabilizing institutional layer within the AI ecosystem.
Conclusion: The Digital Atmosphere
As AI systems become ambient and embedded, foundation models increasingly resemble environmental substrates rather than discrete consumer products.
When even full dataset disclosure cannot meaningfully guarantee causal traceability, the burden placed on downstream implementers becomes structurally disproportionate.
If the European Union must, for reasons of governance and democratic accountability, develop reliable and auditable AI systems for its own administrative use, it will necessarily construct a compliant foundation model architecture.
Making the non-sensitive foundation layer of such a system available as a certified public baseline would:
- Align responsibility with epistemic capacity
- Reduce asymmetry between large providers and smaller implementers
- Promote innovation within a trusted infrastructure
Just as citizens rely on publicly regulated environmental standards, future digital actors may rely on foundation models whose structural safety is institutionally guaranteed.
Not because the state replaces the market, but because infrastructure precedes competition.

Comments
This contribution proposes a discussion on whether the European Union could consider treating foundation AI models as part of public digital infrastructure.
The article examines the structural asymmetry between large-scale model developers and downstream deployers, particularly in light of the AI Act’s documentation and accountability requirements. It explores whether a publicly guaranteed baseline model could reduce systemic risk, improve auditability, and strengthen competitive neutrality within the European AI ecosystem.
The objective is not to advocate for a predetermined institutional outcome, but to open a debate on governance design, incentive structures for data contribution, and the alignment between legal responsibility and epistemic capacity.
Feedback from technical, legal, economic, and policy perspectives would be very welcome.