Is Artificial General Intelligence the final goal that will change everything? If we already struggle to keep human-developed AI discoveries and applications under control (explainability, bias, privacy, etc.), imagine a near future in which the generation of AI itself is automated.
Yes, AI is already able to produce AI on its own, and even to evolve on its own. Today it is possible to automatically discover complete machine learning algorithms using only basic mathematical operations as building blocks. Sounds strange? Take a look at the paper "AutoML-Zero: Evolving Machine Learning Algorithms From Scratch", and then imagine such a system evolving for hours, days, months... and consider the result: it took us 60 years to discover a thing that can now discover itself. For years we have been trying to reach Artificial General Intelligence (AGI) by applying the same accumulated knowledge. It is possible that the time to speed up this process has come, and perhaps we will have to share the task with AI itself; otherwise we may need many decades (centuries? who knows) at best. What if I told you that a new line of AI research argues that the right approach is to let AI design the tasks it has to solve? New solutions to traditional tasks may appear, and new points of view may emerge for solving problems. Take a look at the Paired Open-Ended Trailblazer (POET), and then think about self-generating AI.
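To make the idea of "discovering algorithms from basic mathematical operations" concrete, here is a minimal toy sketch, not the actual AutoML-Zero system: a candidate program is a list of instructions over scalar registers built only from add/sub/mul, and random point mutations are kept whenever they reduce the error on a small regression task. All names, register conventions, and the target task below are my own illustrative assumptions, not from the paper.

```python
import random

# Toy sketch (NOT the AutoML-Zero implementation): evolve a tiny "predict"
# program from basic math operations via random point mutation.
# Conventions (assumptions for this example): register 0 holds the input x,
# the prediction is read from register 1, all registers start at 0.0.

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}
N_REG = 4  # number of scalar registers

def run(program, x):
    """Execute a program (list of (op, dst, src_a, src_b)) on input x."""
    regs = [0.0] * N_REG
    regs[0] = x
    for op, dst, a, b in program:
        regs[dst] = OPS[op](regs[a], regs[b])
    return regs[1]

def random_instruction(rng):
    # Destination register 0 is reserved for the input, so write to 1..N_REG-1.
    return (rng.choice(list(OPS)), rng.randrange(1, N_REG),
            rng.randrange(N_REG), rng.randrange(N_REG))

def fitness(program, data):
    """Mean squared error on the task; lower is better."""
    return sum((run(program, x) - y) ** 2 for x, y in data) / len(data)

def evolve(data, generations=3000, prog_len=5, seed=0):
    """Simple (1+1) hill climbing: mutate one instruction, keep if not worse."""
    rng = random.Random(seed)
    best = [random_instruction(rng) for _ in range(prog_len)]
    best_err = fitness(best, data)
    for _ in range(generations):
        child = list(best)
        child[rng.randrange(prog_len)] = random_instruction(rng)
        err = fitness(child, data)
        if err <= best_err:
            best, best_err = child, err
    return best, best_err

if __name__ == "__main__":
    # Hypothetical target task: y = x*x + x (expressible with add/mul only).
    data = [(x, x * x + x) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]
    program, err = evolve(data)
    print("best program:", program)
    print("best error:", err)
```

AutoML-Zero itself uses a far richer setup (vector/matrix memory, separate setup/predict/learn functions, regularized population-based evolution), but the core loop is the same shape: mutate programs made of primitive operations and keep what works.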
AGI is the holy grail of AI. We are still far from achieving it, despite some timid progress in recent years. In fact, a growing voice in the AI research community questions whether the current state of the art can lead to human-like AI. However, recent discoveries have shown us a new way to keep pursuing this challenging goal. AGI is full of questions, problems, challenges, trustworthiness issues... and regulations. Perhaps it is time to start considering, beforehand, what we need to do when AGI arrives.