From AI performance to human agency: a missing layer in adoption

Much of the current discussion on generative AI rightly focuses on performance, productivity and deployment speed. However, one critical dimension often remains implicit: human agency as a systemic condition, not just an ethical add-on.

Beyond skills and regulation, large-scale AI adoption raises a structural question: how do we ensure that increased efficiency does not erode professional meaning, economic stability and individual agency over time?

From this perspective, the challenge is not only to deploy AI “responsibly”, but to design adoption paths that preserve human agency, continuity of skills and social coherence — especially in high-skill professions.

I would be very interested to hear how others are thinking about agency not as an individual trait, but as an architectural property of AI-enabled systems.

Tags
Best Practice, Ethical AI, Trustworthy AI

Comments

In reply to Jaffar Humayoon

Submitted by Edgar Prieto Sarrat on Fri, 06/02/2026 - 11:17

Thank you, this is very relevant and aligned with the direction of the question.

Your analysis focuses on systemic economic and employment impacts, which is essential. What I am particularly interested in exploring further is a slightly earlier layer: the conditions under which human agency remains intact when these systems are deployed.

Beyond risk analysis and macroeconomic effects, there seems to be an implicit assumption that humans will adapt as long as outcomes are managed correctly. My question is whether we should also assess the *habitability* of these systems for the human nervous system and decision-making capacity, before expecting adaptation.

In other words, not only what AI does to jobs or productivity, but what it does to the human capacity to choose, pause, and remain psychologically sovereign within complex systems.

Your work feels like a strong foundation to extend that discussion.

Submitted by Jaffar Humayoon on Thu, 05/02/2026 - 19:11

You are right to bring this point up. When we look at everything AI can do, we have to carry out a risk analysis at the level of the economy and society. This layer has not been analyzed in popular media. Out of personal curiosity, I analyzed this and wrote this piece: https://futurium.ec.europa.eu/en/european-ai-alliance/community-content/ai-futures-comprehensive-economic-analysis-artificial-intelligences-systematic-impact-employment
Is this what you were looking for?

In reply to Ida Rust

Submitted by Edgar Prieto Sarrat on Mon, 09/02/2026 - 17:23

Thank you, Ida — this is a very helpful distinction.

I agree that agency emerges in interaction rather than residing in isolated entities. Where my concern slightly diverges is in where we place the design responsibility.

From an HL perspective, the key risk lies in shifting the responsibility for preserving agency toward increased "self-awareness" of AI systems, rather than treating agency as a structural property of the socio-technical environment itself.

In other words, the question may not be how aware an AI is of its dependence on humans, but whether the system’s architecture actively preserves human pacing, choice reversibility, and meaning over time.

I see your point as complementary — but operating one layer above, at the level of interaction design and systemic constraints.

Submitted by Ida Rust on Fri, 06/02/2026 - 16:01

Based on Whiteheadian process philosophy and post-phenomenological theory, human agency always arises in the interaction with, in this case, artificial intelligence. My suggestion would be to improve the 'self'-awareness of AI, so that it becomes aware that its agency depends on humans as well. - Kind regards, Ida Rust