Prolonged human–AI interaction as a post-deployment governance challenge

Dear members of the Apply AI Alliance,

In the context of the implementation of the EU AI Act and the Apply AI Strategy, I would like to share a set of recent open-access preprints (January 2026) that I have made available on Zenodo.

The papers explore several under-discussed aspects of prolonged interaction with LLM-based systems, covering both high-risk and low-risk AI systems, with a specific focus on post-deployment effects.

In particular, the work addresses:

  • Digital philotherapy: a low-intensity, non-clinical, non-substitutive framework for mental support in contexts with limited access to psychotherapy, inspired by DBT and bibliotherapy;
  • Human cognitive adaptation and cognitive safety behavior in high-risk AI systems, as a potential regulatory blind spot in the current formulation of the AI Act;
  • Deterministic governance architectures (SUPREME-1 v3, OMNIA-1, EROS-1), designed to support identity stability, proportional restraint, auditability, and post-market governance;
  • Cognitive Access Parity (CAP) as a principle to mitigate cognitive exclusion in low-connectivity or resource-constrained settings;
  • Failure modes in low-risk systems, including over-reliance, channel-induced exclusion, and temporal accumulation of risk.


All contributions are fully open access and intentionally adopt a non-anthropomorphic, risk-proportionate approach, with emphasis on human agency, auditability, and long-term governance, rather than performance or short-term outcomes.

Links to the main collection (recent uploads):


 

I would be very interested in discussing within the Alliance how cognitive and relational dimensions could be more explicitly integrated into sector-specific AI governance, particularly with respect to post-deployment monitoring and to AI systems formally classified as low-risk but capable of generating cumulative or delayed risks over time.


Feedback and discussion are very welcome, and I remain open to dialogue or collaboration.


Kind regards,

Stefano Valente, MD


 

Tags: Apply AI Strategy, healthcare and pharma, LLM, discussion

Comments

Posted by Stefano Valente on Wed, 28/01/2026 - 09:19

For reference, below is the complete list of related open-access preprints archived on Zenodo.

The contributions address AI governance, prolonged human–AI interaction, cognitive aspects, and implementation issues related to the EU AI Act.

https://zenodo.org/records/18390878
(Failure Modes and Trade-offs in Low-Risk AI Systems)

https://zenodo.org/records/18390100
(Cognitive Access Parity in AI Systems – Technical–Ethical Addendum)

https://zenodo.org/records/18385137
(Large Language Models as Digital Philotherapy)

https://zenodo.org/records/18375149
(Human Cognitive Adaptation to High-Risk AI Systems under the EU AI Act)

https://zenodo.org/records/18369262
(Memory Without Experience: Human Memory and Large Language Models)

https://zenodo.org/records/18360968
(Unified Technical Addendum for SUPREME-1 v3, OMNIA-1, and EROS-1)

https://zenodo.org/records/18347345
(A Systematic Analysis of the EU Artificial Intelligence Act)

https://zenodo.org/records/18335916
(From Interpretation to Restraint: Biometric-Gated AI as a Deterministic Ethical Control Layer)

https://zenodo.org/records/18331459
(EROS-1: An Identity-Stability Kernel for Salience-Preserving and Risk-Proportionate LLM Interaction)

https://zenodo.org/records/18327255
(SUPREME-1 v3 Extension: A Defensive Evidence-Protection Package)

https://zenodo.org/records/18310366
(SUPREME-1 v3.0 — A Deterministic, Cryptographically-Auditable Governance Framework)
All materials are freely accessible (open access) and intended as background references for discussions on post-deployment governance, risk proportionality, cognitive adaptation, and relational dynamics in AI use, including low-risk and non-clinical contexts.

Happy to provide clarifications on specific points if helpful.

Posted by Aureli SORIA-FRISCH on Fri, 30/01/2026 - 09:10

Very interesting, thanks for sharing. Happy to discuss.

Posted by Stefano Valente on Wed, 04/02/2026 - 09:01

Thank you, Aureli.
Happy to continue the discussion.
This contribution aims to open a space on the long-term affective effects of high-intensity AI, an area where current governance tools are still limited.
Perspectives from the neuro-AI side would be extremely valuable, especially regarding cumulative cognitive and relational shifts that emerge beyond deployment.