Closing the Implementation Gap in AI Governance — Why External Frameworks Are Essential for Operationalizing the EU AI Act

Recent findings from the Netskope Cloud and Threat Report: 2026 highlight a sharp increase in data policy violations associated with generative AI usage in enterprises.

According to the report:

  • Organizations now experience an average of 223 generative AI–related data policy violations per month, with the top quartile exceeding 2,100 incidents monthly.
  • These violations frequently involve regulated data, intellectual property, source code, credentials, and other sensitive information being submitted to AI tools.
  • A significant share of the risk originates from shadow AI, with nearly half of AI users relying on personal or unmanaged accounts for professional tasks.

    📌 Primary source:
    Netskope Cloud and Threat Report: 2026
    https://www.netskope.com/resources/cloud-and-threat-reports/cloud-and-threat-report-2026

Why This Matters in the EU Context

The EU AI Act has established a strong regulatory foundation for trustworthy and human-centric AI.
However, the evidence above reveals a growing implementation gap between regulatory principles and operational reality.

Three structural issues stand out:

1. Policies alone do not prevent violations

Even organizations with formal AI usage guidelines continue to record large volumes of data exposure incidents.

2. Traditional technical controls are insufficient

Data loss prevention (DLP) and access-control systems focus primarily on where data moves, not on why humans decide to provide it to AI systems.

3. Human decision-making remains invisible to governance

The intent, rationale, and boundary judgments behind AI interactions are rarely documented, explainable, or auditable.

As a result, generative AI risk is not merely a technical or security issue —
it is fundamentally a decision-level and semantic governance problem.

The Role of External Frameworks

To address this gap, internal controls must be complemented by an external governance framework that operates before data reaches AI systems.

Here, “external framework” refers to a governance layer that structures and records human decisions about AI use, independently of specific AI tools and internal security systems.

An effective external framework can:

  • Externalize human intent behind AI usage
  • Define semantic boundaries for what information may or may not be shared
  • Record decision rationales in an auditable, regulator-ready manner
  • Remain neutral to specific AI tools, including unmanaged or personal AI accounts

Rather than replacing existing governance mechanisms, such a framework makes them enforceable and auditable in real operational environments.

Why an External Framework Approach Is Necessary

What current AI governance lacks is not regulation, but operational structure.

An external framework that systematically captures:

  • Intent — why AI is being used
  • Boundary — what is permissible to share
  • Rationale — how risks and alternatives were considered
  • Conceptual context — what the information represents

allows organizations to govern AI usage at the decision point, not merely at the data-transfer point (a minimal illustrative sketch follows below).

This differs fundamentally from conventional DLP or policy enforcement approaches:

  • It does not only detect violations after the fact
  • It makes risky decisions visible, explainable, and governable before execution
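
As a purely illustrative sketch (not drawn from the Netskope report or the AI Act), the Python fragment below shows one possible shape for such a decision record; all field names and the checking logic are assumptions introduced here for discussion.

```python
# Illustrative sketch only: field names and logic are assumptions for
# discussion, not a reference to any specific product, standard, or provision.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """Auditable record of a human decision to use an AI tool."""
    intent: str                                  # why AI is being used
    boundary: set = field(default_factory=set)   # data categories permitted to be shared
    rationale: str = ""                          # how risks and alternatives were considered
    conceptual_context: str = ""                 # what the information represents
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def permitted(record: DecisionRecord, data_category: str) -> bool:
    """Check, before any transfer, whether a data category falls inside the declared boundary."""
    return data_category in record.boundary


# The record is created and evaluated at the decision point, independently of
# which AI tool (managed or personal) the user eventually opens.
record = DecisionRecord(
    intent="Summarise an anonymised sample of customer feedback",
    boundary={"anonymised_feedback"},
    rationale="No personal data, credentials, or source code is needed for this task",
    conceptual_context="Aggregated product feedback, Q1",
)
print(permitted(record, "source_code"))          # False: flagged before execution
print(permitted(record, "anonymised_feedback"))  # True
```

The only point of the sketch is the placement of the check: the record exists, and is consulted, before data reaches any AI system, rather than after a transfer has been detected.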

Conclusion

The evidence presented in the Netskope Cloud and Threat Report: 2026 makes one point clear:
Generative AI adoption has surpassed the reach of existing governance tools.

To uphold the intent of the EU AI Act in real operational settings, governance must extend beyond internal policies and technical controls toward external frameworks that structure human judgment itself.

Such frameworks provide a practical path to:

  • Reduce unintended data exposure
  • Strengthen auditability and accountability
  • Support trustworthy AI implementation at scale

This is not an alternative to regulation —
it is a necessary complement to make regulation work in practice.

How such external frameworks should be designed, standardized, or validated across sectors remains an open question — and one that merits collective discussion within the EU AI governance community.

Source (for reference and citation)

📌 Netskope Cloud and Threat Report: 2026
https://www.netskope.com/resources/cloud-and-threat-reports/cloud-and-threat-report-2026

Tags
AI Act, AI Governance, Trustworthy AI, Data Leakage, Generative AI Risk, Implementation Gap, Semantic Governance, Auditability, Cross-Border Governance, Discussion

Comments

In reply to JEREMY RUIZ

Submitted by Mototsugu Shiraki on Sat, 07/03/2026 - 08:12

Jeremy, thank you for raising this important concern.

Reading the Netskope report, one impression came to mind from my somewhat traditional experience in enterprise risk management.

Many of the situations described there seem structurally similar to classical asset governance issues rather than entirely new AI-specific governance problems.

For example, accessing external AI services from a corporate device can resemble accessing personal online banking from a corporate network.

If confidential corporate data can be freely accessed and uploaded into a personal AI account, the situation may resemble transferring corporate assets into a personal account.

In financial systems, strict policies, technical safeguards, and logging mechanisms have long been standard practice.

Today these mechanisms are rarely considered surveillance; they are simply recognised as necessary elements of responsible governance.

A similar logic already exists in European regulatory frameworks.

EU legislation such as the GDPR, trade secret protections, and confidentiality obligations already require organisations to protect sensitive information through appropriate technical and organisational measures.

From this perspective, the challenge highlighted in the Netskope report may not be entirely unique to AI.

It may instead represent the extension of familiar enterprise risk management questions into a new technological environment.

Your concern about operational drift is therefore very relevant.

However, addressing drift does not necessarily require surveillance of individuals.

What may be required instead is the establishment of clear governance boundaries around sensitive assets, combined with appropriate logging and auditability mechanisms.

When such boundaries are properly defined, operational drift becomes observable at the system and process level, without turning governance into continuous monitoring of individual behaviour.

A practical solution may therefore lie somewhere between today's still loosely governed AI usage and the much stricter control models historically developed in financial systems.

In this sense, the real challenge may not be AI itself, but how organisations extend existing governance principles for protecting sensitive assets into AI-enabled environments.

If it may be of interest, I also shared a related reflection on this topic earlier:

https://futurium.ec.europa.eu/en/apply-ai-alliance/community-content/sh…


 

In reply to Mototsugu Shiraki

Submitted by JEREMY RUIZ on Thu, 05/03/2026 - 09:36

Mototsugu, thank you for this clarification.

Anchoring intent at the level of the Conceptual Business Process (CBP) is likely the most realistic approach.
It avoids both cognitive friction and individual surveillance, which often emerge when governance attempts to externalise intent at the employee level.

However, an additional operational challenge becomes visible.

As highlighted by the Netskope Cloud and Threat Report 2026, the adoption of generative AI is already occurring at a scale where behaviour evolves faster than governance structures. The reported monthly violations are likely only the visible signal of a broader systemic dynamic.

In practice, several mechanisms tend to unfold simultaneously:

  • users bypass official tools (shadow AI)
  • workflows evolve through automation and digital tooling
  • AI itself reshapes how operational tasks are executed

In this context, the issue may not be limited to semantic drift.

It may also involve structural drift.

Even when intent is correctly anchored at the CBP level, the actual execution of processes can progressively diverge from their conceptual definition, often without triggering any formal redesign event.

This is why your point regarding sandbox environments is particularly relevant.

Such environments should not only serve to validate governance models in theory, but also to observe how governance mechanisms behave when exposed to real operational complexity.

At scale, governance may therefore require two complementary layers:

  • a stable conceptual anchor (such as the CBP)
  • a systemic capability to observe drift over time, without turning governance into continuous monitoring of individual behaviour

The remaining question is therefore quite straightforward:

How can we maintain alignment between conceptual governance structures and operational reality when systems continuously evolve?

 

In reply to JEREMY RUIZ

Submitted by Mototsugu Shiraki on Mon, 09/02/2026 - 07:00

Thank you for your thoughtful note — I largely agree with your observation.

I have also seen many real-world cases where business processes, once operationalized, gradually drift from their original design intent, or where formal models exist but are effectively ignored in daily execution. In that sense, the failure mode you describe is very familiar to me.

For me, however, the core issue is not whether drift occurs — it clearly does — but whether such drift can be recognized when it happens.

In the CBP approach, a Conceptual Business Process is registered and maintained in natural language using a C-I-B-R (Concept, Intent, Boundary, Rationale) structure, while BPMN models are treated as attributes rather than the primary definition. Business processes are therefore not assumed to evolve implicitly. Any material BP change is treated as an explicit review point, where the conceptual definition and its associated intent are re-examined.

In this context, I believe GenAI can play a useful supporting role: not by enforcing correctness, but by raising questions or flagging potential inconsistencies when observed execution patterns appear to diverge from the registered conceptual process, and by returning such cases to humans for judgment.
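
To make this concrete, the sketch below shows one possible way to hold a C-I-B-R record and to frame a drift question for human review; the structure, names, and deferral logic are illustrative assumptions for discussion, not a published specification of the CBP approach.

```python
# Minimal sketch of the C-I-B-R idea; names and structure are illustrative
# assumptions, not a specification.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ConceptualBusinessProcess:
    concept: str    # legitimate business purpose, in natural language
    intent: str     # organisational intent justifying execution
    boundary: str   # data, role, and scope limits, in natural language
    rationale: str  # why execution is allowed under those conditions
    bpmn_ref: Optional[str] = None  # BPMN model kept as an attribute, not the primary definition


def flag_possible_drift(cbp: ConceptualBusinessProcess, observed_pattern: str) -> dict:
    """Frame a question when observed execution appears to diverge from the
    registered conceptual process. A generative model could be used upstream
    to judge semantic divergence; this stub only surfaces the case and hands
    it back to a human reviewer, it never decides the outcome."""
    return {
        "process_concept": cbp.concept,
        "observed_pattern": observed_pattern,
        "question": (
            "Does this execution pattern still fall within the registered intent "
            f"('{cbp.intent}') and boundary ('{cbp.boundary}')?"
        ),
        "decision": "deferred_to_human",
    }
```

Nothing in this sketch enforces correctness; its only output is a question returned to a human, which is the constrained supporting role described above.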

This does not imply perfect detection or automatic enforcement. On the contrary, the way processes, intents, and boundaries are represented must be carefully designed and thoroughly tested in practice. I also believe that developing a domain-specific, usable vocabulary — a controlled or semi-controlled language for each domain — is a necessary prerequisite for making this approach viable.

One reason I am cautiously optimistic here is the flexibility of generative AI in handling linguistic variation and contextual expressions, which makes it suitable for working with natural-language representations of process intent — provided that its role is constrained to surfacing issues rather than deciding outcomes.

Given the multilingual and regulatory complexity of real operational environments, I would very much welcome seeing this kind of approach tested in an EU context, for example in sandbox or pilot settings, where semantic drift, process change, and human oversight can be examined together under realistic conditions.

Without some stable conceptual reference, semantic drift does not disappear — it simply remains unseen.

I appreciate your perspective and see this as a shared problem space rather than a disagreement.

In reply to Mototsugu Shiraki

Submitted by JEREMY RUIZ on Thu, 05/02/2026 - 21:36

Mototsugu, thank you for the clarification.
Anchoring intent at the level of the Conceptual Business Process (CBP) does indeed appear to be the right approach. It is likely the only one compatible with real-world adoption, as it avoids both cognitive friction and individual surveillance.

That said, a structural blind spot remains.

Freezing intent at design time relies on an assumption of stability that does not hold in practice. Business processes are organic and operationally fluid: they evolve continuously under operational pressure, tooling changes, and informal workarounds.
This creates a well-known failure mode: silent semantic drift.

A process may remain formally compliant on paper, while its real-world execution progressively diverges from the original conceptual intent, without triggering any explicit alert.

This mirrors exactly what we observed with static AI guardrails:
correct at design, but blind in operation.

CBP-level intent anchoring is therefore necessary, but not sufficient. Without a capability for dynamic drift detection, governance degrades into a set-and-forget mechanism.
Over time, this risk compounds, especially in the absence of interoperability standards. Without a shared grammar of intent, semantic alignment cannot be compared, audited, or enforced consistently across organizations.

The core question is no longer where intent should be governed, but how semantic alignment can be maintained over time.
This is precisely where sandboxes become critical: not to validate theory, but to test drift detection under real operational conditions.

This is not a disagreement.
It is simply the next physical limit of the model.

 

In reply to JEREMY RUIZ

Submitted by Mototsugu Shiraki on Wed, 04/02/2026 - 14:19

Thank you, Jeremy. Let me clarify one additional but crucial point.

We do not externalize individual intent.
Instead, **intent is registered at design time within the Business Process itself**, as a justification for why that process may legitimately be executed.

More precisely, intent is embedded at the level of a **Conceptual Business Process (CBP)**.

Importantly, **the intent associated with a Business Process is an organizational intent** — it is the responsibility of the enterprise as a legal and operational entity, **not of the individual employees who execute jobs within that process**.

In this model, employees are not asked to assume or declare intent when using AI.
Their actions are simply positioned within a process step where certain **Intent Classes**, already defined and authorized by the organization, apply by design.

A Conceptual Business Process therefore defines:

* the legitimate business purpose (Concept),
* the set of structurally permitted Intent Classes,
* the applicable boundaries (data, role, scope),
* and the rationale explaining why execution is allowed under those conditions.

At runtime, no user is asked to declare intent, justify purpose, or select an option.
Intent is neither inferred from individual behavior nor enforced as an access condition.
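
A minimal sketch of this runtime model, with entirely hypothetical names, might look as follows: the Intent Classes and boundary applicable to a process step are resolved from its registered definition, and the employee is never prompted for anything.

```python
# Hypothetical registry of process steps; names and categories are invented
# purely to illustrate the mapping described above.
PROCESS_STEPS = {
    "claims.triage": {
        "intent_classes": {"summarise_internal_document", "classify_request"},
        "boundary": {"pseudonymised_claims_data"},
    },
}


def resolve_context(step_id: str) -> dict:
    """Return the Intent Classes and boundary that apply by design to a step.
    No runtime declaration or selection is requested from the user."""
    return PROCESS_STEPS[step_id]


context = resolve_context("claims.triage")
print(context["intent_classes"])  # organisational intent applies by design
```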

This is not proposed as a mandated solution.

My position is that such structures should be **tested and examined by the actual stakeholders** — regulators, providers, and deployers — to assess *how* organizational intent, Intent Classes, and boundaries can be positioned and adjusted in practice.

This is precisely why I see **Sandbox environments as essential**:
not to validate a theory, but to enable participants **to examine the *how*** before enforcement is asserted at scale.
 

Submitted by JEREMY RUIZ on Sat, 24/01/2026 - 14:19

With respect, I would like to add a point of operational reality.

The issue is not whether to govern intent, but where that governance is applied.
In practice, frameworks that require employees to declare intent or justification before using AI do not scale beyond very high-security environments. In a standard European enterprise, this friction is quickly perceived as continuous evaluation of individual judgment.

Empirically, this is a constant: cognitive friction produces circumvention.
When governance is experienced as surveillance or additional mental overhead, risk does not disappear; it is displaced into shadow AI, which is often far less auditable.

Individual accountability remains necessary, but downstream (training, culture, usage policies), not as an operational access condition.
At scale, effective governance must be systemic, contextual, and largely invisible to the user; otherwise, compliance itself becomes an operational blind spot.