Is Artificial General Intelligence the final goal that will change everything? If we already struggle to keep human-developed AI discoveries and applications under control (explainability, bias, privacy, etc.), imagine the situation in a near future when the generation of AI itself is automated.
Yes, AI is already able to produce AI by itself, and even to evolve by itself. It is possible today to automatically discover complete machine learning algorithms using only basic mathematical operations as building blocks. Sounds strange? Take a look at the paper "AutoML-Zero: Evolving Machine Learning Algorithms From Scratch", then think about AI evolving for hours, days, months ... and look at the results: it took us 60 years to discover a thing that can discover itself. For years we have been focusing on how to achieve Artificial General Intelligence (AGI) by applying the same human knowledge. Perhaps the time to speed up that process has come, and we may have to share the task with AI; otherwise we will need many decades (centuries? who knows) at best. What if I told you that a new AI research line argues that the right way forward is to let AI design the tasks it has to solve? New solutions for traditional tasks may appear, and new points of view might emerge. Take a look at the Paired Open-Ended Trailblazer (POET), and then think about self-generating AI.
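To make the AutoML-Zero idea concrete, here is a minimal toy sketch (not the actual AutoML-Zero system, whose search space and evolutionary machinery are far richer): a tiny straight-line program built only from basic arithmetic operations is evolved by random point mutations until it approximates a target function. All names (`OPS`, `evolve`, the register layout) are illustrative assumptions.

```python
import random

# Primitive building blocks: basic math ops only, as in the AutoML-Zero idea.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def run(program, x):
    """Execute a straight-line program over 4 registers; r[0] is the input."""
    r = [x, 0.0, 0.0, 0.0]
    for op, dst, a, b in program:
        r[dst] = OPS[op](r[a], r[b])
    return r[1]  # by convention, r[1] holds the output

def random_instr(rng):
    """A random instruction: (op, destination register, source a, source b)."""
    return (rng.choice(list(OPS)), rng.randrange(1, 4),
            rng.randrange(4), rng.randrange(4))

def error(program, data):
    """Squared error of the program's output over the sample data."""
    return sum((run(program, x) - y) ** 2 for x, y in data)

def evolve(data, steps=3000, length=4, seed=0):
    """Hill-climbing evolution: mutate one instruction, keep if no worse."""
    rng = random.Random(seed)
    best = [random_instr(rng) for _ in range(length)]
    best_err = error(best, data)
    for _ in range(steps):
        child = list(best)
        child[rng.randrange(length)] = random_instr(rng)  # point mutation
        err = error(child, data)
        if err <= best_err:
            best, best_err = child, err
    return best, best_err

if __name__ == "__main__":
    # Target behaviour to discover: y = x*x + x (reachable via mul + add).
    data = [(x, x * x + x) for x in range(-3, 4)]
    program, err = evolve(data)
    print("evolved program:", program)
    print("remaining error:", err)
```

The point of the sketch is the shape of the search, not the result: the "algorithm" is never written by a human, it emerges from mutation and selection over primitive operations.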
AGI is the holy grail of AI. Today we are still far from this achievement, despite some timid progress in recent years. In fact, a growing voice in the AI research community has questioned the capacity of the current state of the art to lead to human-like AI. However, recent discoveries have shown us a new way to keep pursuing this challenging task. AGI is full of questions, problems, challenges, trustworthiness issues ... regulations. Perhaps it is time to start considering, beforehand, what we will need to do when AGI arrives.
Comments
One aspect of the problem might be to look at the requirements to enforce on AI systems. There is already a strong research community working on requirements and how to fulfil them. It might be time to draw some insights from there:
https://futurium.ec.europa.eu/european-ai-alliance/open-discussion/requ…
Really appreciate this post and the reminder that “hard to control narrow AI” + “self-generating AI” is already a serious governance problem before we ever reach AGI.
One angle I’d like to add, from work I’ve been doing as an independent researcher, is that control is not only a technical issue (alignment, verification, etc.) but also an ontological one:
- What is the system really doing? (its nature)
- What does it present itself as to users? (its representation)
- How big is the gap between those two – especially when people start treating it as a “someone” rather than a tool?
I call this Reality-Aligned Intelligence (RAI): an attempt to shrink the gap between a system’s nature and its representation, and to measure the risks of anthropomorphism, artificial intimacy and over-trust along the way. Even without AGI, we already see these dynamics in “companions”, tutors and “therapy-like” chatbots.
For anyone interested, I’ve sketched this in a few open-access papers:
- Reality-Aligned Intelligence (RAI): A Metaframework for Ontologically Honest AI Systems – DOI: 10.5281/zenodo.17686975
- Reality-Aligned Intelligence (RAI) Governance & Ecosystems – DOI: 10.5281/zenodo.17691268
I think this could complement the requirements / safety work you mention: alongside capability-level controls, we need safeguards on how systems present themselves and how humans are likely to relate to them – long before any putative AGI appears.
Happy to compare notes with anyone working on AGI requirements, safety or oversight – especially around anthropomorphism and relational drift.
Niels Bellens
Independent researcher – AI, youth & mental health
ORCID: 0009-0008-1764-4108
Email: niels.bellens@proton.me