As the EU approaches the enforcement phase of the AI Act, multilingual consistency becomes a pressing technical challenge.
Ensuring that key ideas retain stable meaning across 24 official languages and 27 implementation contexts presents a unique challenge that spans legal drafting, linguistics, administration, and AI systems.
Recent progress in Generative AI, especially multilingual models, now makes it possible to support this challenge in ways that were not previously feasible.
These models can assist in comparing expressions, detecting patterns, and highlighting areas where interpretation might drift.
However, for Generative AI to contribute effectively, a technical framework is required: one that can organize meaning, guide comparison, and allow concepts to remain coherent across languages and jurisdictions.
The prototype below is offered purely as one such technical pathway.
1. Input Layer: Multilingual Parallel Expressions
The process begins by generating a broad set of expressions for each legal idea.
Rather than relying on a single translation per language, a 24 × 24 language-pair matrix can be constructed, populated with:
direct translations
reverse translations
cross-language expansions
contextual paraphrases
LLM-generated surface variants
The goal is not to decide which version is superior, but to create a sufficiently rich expression space from which deeper semantic regularities may emerge.
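As a minimal sketch of how such a language-pair matrix could be assembled, the snippet below builds one for a reduced four-language subset. The `translate` function is a placeholder for an LLM-backed translation call (here it merely tags the text so the matrix structure is visible); the language codes and the concept string are illustrative assumptions, not part of the AI Act itself.

```python
from itertools import product

# Illustrative subset of the 24 official EU languages (assumed codes).
LANGUAGES = ["en", "de", "fr", "lt"]

def translate(text: str, source: str, target: str) -> str:
    """Placeholder for an LLM-backed translation call.
    It only tags the text so the resulting matrix structure is visible."""
    if source == target:
        return text
    return f"[{source}->{target}] {text}"

def build_expression_matrix(concept: str, source_lang: str = "en") -> dict:
    """Build an N x N matrix of parallel expressions for one legal concept:
    a direct translation into each language, then a second hop into every
    target language, which yields reverse translations and cross-language
    expansions along the way."""
    matrix = {}
    for src, tgt in product(LANGUAGES, repeat=2):
        direct = translate(concept, source_lang, src)      # direct translation
        matrix[(src, tgt)] = translate(direct, src, tgt)   # cross-language hop
    return matrix

matrix = build_expression_matrix("high-risk AI system")
print(len(matrix))  # 16 cells for a 4-language subset; 576 for all 24
```

With a real translation backend, each cell would hold a genuine surface variant rather than a tagged string, but the shape of the expression space is the same.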
2. Extraction Layer: Identifying Meaning Patterns
With this matrix available, LLM-assisted comparison techniques can help identify:
common semantic anchors
patterns of intention that remain stable
context-dependent qualifiers
shared conditional structures
convergent interpretations across Member States
From these observations, a Meta-Concept Candidate can be proposed: a meaning pattern that holds steady despite linguistic variation.
This step does not require new infrastructure: existing multilingual LLMs and comparison methods can already support it.
3. Verification Layer: Consistency Boundaries
For reliable use, each Meta-Concept can be associated with a Boundary that describes:
what the concept includes
what it excludes
how it behaves across contexts
which interpretations may fall outside the stable range
These Boundaries can serve as technical instruments for observing:
sector-specific guidance
implementing acts
Member State procedures
future amendments or case interpretations
When an interpretation appears outside the Boundary, it can be flagged for optional human review, without being automatically rejected or relabeled.
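The flag-but-do-not-reject behavior described above can be sketched as a simple coverage check: an interpretation whose overlap with the meta-concept's anchor tokens falls below a threshold is merely marked for review. Anchor-token coverage is again a toy stand-in for embedding similarity, and the anchor set and threshold below are hypothetical.

```python
def anchor_coverage(interpretation: str, anchors: set) -> float:
    """Share of the meta-concept's anchor tokens present in an interpretation.
    Token overlap is a toy stand-in for embedding-based similarity."""
    tokens = set(interpretation.lower().split())
    return len(tokens & anchors) / len(anchors)

def check_boundary(interpretation: str, anchors: set,
                   threshold: float = 0.6) -> dict:
    """Flag an interpretation for optional human review when it falls
    outside the Boundary; nothing is rejected or relabeled automatically."""
    score = anchor_coverage(interpretation, anchors)
    return {"score": score, "review": score < threshold}

# Anchors for a hypothetical meta-concept (illustrative only).
ANCHORS = {"high", "risk", "system", "fundamental", "rights"}

inside = check_boundary("a high risk system affecting fundamental rights", ANCHORS)
outside = check_boundary("any software used in public administration", ANCHORS)
print(inside["review"], outside["review"])  # False True
```

Because the check only emits a review flag alongside a score, downstream users remain free to accept an out-of-boundary interpretation, which matches the non-rejecting posture described above.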
4. Concluding Note
This prototype is offered solely as a technical perspective.
Its purpose is to outline one possible way in which Generative AI, when used within an appropriate framework, may help support multilingual consistency as the AI Act enters its enforcement phase.
If useful, elements of this approach may be explored in research settings or regulatory sandboxes.