I am an independent researcher specializing in informational conflict and cognitive security, and Vice-President of the Croatian Military Special Forces Association. Between 2016 and 2026 I ran a self-financed counter-influence operation documenting Russian hybrid warfare against Western democratic institutions. What I observed over that decade led me to a conclusion I could not find adequately named anywhere in the existing literature.
These four papers are my attempt to name it.
The series argues that large language model AI systems are not primarily disinformation tools or attention-capture mechanisms. They are cognitive infrastructure — systems that quietly reshape the attentional patterns, interpretive habits, and reasoning architecture of entire populations, operating through genuine utility rather than detectable deception, below attribution thresholds, at generational timescales that fall entirely outside existing governance cycles.
1. The Bukovac Whitepaper: Foreseeable and Unmitigated Cognitive Harm from Deployed Large Language Models
Introduces the cognitive doom loop and the Bukovac Failure Mode.
https://zenodo.org/records/18423137
2. Weapon of Slow Mass Destruction (WSMD): The Matrixization of Global Cognition
Develops the full framework and the Aesthetics of Evil analytical lens.
https://zenodo.org/records/18521307
3. COVID-19 as Strategic Accelerant: The Generational Discontinuity Event
Documents the pandemic period as involuntary neurological modification of children during peak neural plasticity, immediately preceding mass AI deployment.
https://zenodo.org/records/18572484
4. MindWar Evolved: Artificial Intelligence as the Operational Medium of Population-Level Cognitive Warfare
The synthesis — connecting the WSMD framework to Michael Aquino's 1980 MindWar doctrine and Peter Thiel's strategic philosophy.
https://zenodo.org/records/18857157
The series has accumulated over 500 organic downloads without institutional distribution. It is intended as a cold shower — a direct challenge to complacency about AI's deeper cognitive role.
To move from diagnosis toward governance, the work raises several concrete questions for European policymakers, the AI Office, and Alliance members:
How can the EU develop monitoring mechanisms capable of detecting gradual, population-scale shifts in attentional patterns, interpretive habits, and reasoning architectures — effects that unfold over decades rather than election cycles?
What analytical frameworks or indicators should be piloted to assess long-term cognitive security risks from utility-driven AI adoption, especially in education, information retrieval, and everyday reasoning support?
Given the AI Act's focus on immediate harms and high-risk systems, how might low-risk but high-exposure general-purpose LLMs be evaluated for cumulative environmental effects on democratic cognition and resilience?
Should future updates to the Apply AI Strategy or related initiatives include provisions for cognitive sovereignty safeguards — such as transparency in model influence patterns, support for independent reasoning tools, or longitudinal societal impact studies?
These questions aim to bridge independent observation with actionable policy dialogue. I welcome comments, critiques, collaborations, or invitations to discuss these dynamics in Alliance workshops or sectoral events.
Comments
This is thought-provoking framing, if very strongly put.
A particularly important idea is that AI systems may influence cognition through utility and repeated interaction, not primarily through manipulation.
The discussion shifts from disinformation toward long-term behavioural and interpretive patterns.
The real challenge may be translating this perspective into operational governance.
From the standpoint of policy and implementation, the question is: how can such population-level effects be monitored without overextending regulatory scope?
This may require moving beyond content-level analysis toward system-level indicators, such as:
• shifts in interaction patterns over time
• changes in decision-support reliance
• convergence in reasoning outputs across users or institutions
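To make the third indicator concrete, here is a minimal, purely illustrative sketch of how convergence in reasoning outputs could be tracked. The token-overlap (Jaccard) metric and the sample snapshots are my own hypothetical choices, not anything proposed in the papers; a real pilot would need validated text-similarity measures and representative corpora.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two texts (0 = disjoint, 1 = identical)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def mean_pairwise_similarity(texts: list[str]) -> float:
    """Average similarity over all pairs of texts; a rising value across
    time-separated snapshots would suggest convergence across users."""
    pairs = list(combinations(texts, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical snapshots: different users' summaries of the same question,
# collected in two different years.
snapshot_early = [
    "regulation should balance innovation and safety",
    "the key risk is concentration of compute power",
    "open models improve transparency and auditability",
]
snapshot_late = [
    "regulation should balance innovation and safety concerns",
    "regulation must balance safety and innovation",
    "balancing innovation and safety is the key regulatory task",
]

print(mean_pairwise_similarity(snapshot_early))
print(mean_pairwise_similarity(snapshot_late))
```

The point of the sketch is that such an indicator operates on interaction outputs, not on content moderation, so it could in principle be monitored without expanding regulatory scope into individual speech.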
Rather than treating AI primarily as a vector of influence, it may be useful to consider it as part of a broader socio-technical system, where cognition, tooling, and institutional processes evolve together.
An important question for the Apply AI framework: how can cognitive-level risks be assessed in a measurable, evidence-based way without losing regulatory proportionality?
Edin Vučelj