How to address critical success factors for Artificial Intelligence

In their joint statement[1] of 31 May 2023, the EU-US Trade and Technology Council pointed out that "recent developments in generative AI highlight the scale of the opportunities and the need to address the associated risks" [of this transformative technology].

In recent months, we have repeatedly seen major players in the field announce significant investment plans. We have taken note of reports about serious flaws that large language models in their current shape exhibit. And almost weekly, experts have warned us and called for clear regulation of AI.

The time has come, obviously, to focus on critical success factors for AI. Although any attempt to do so in advance must remain constrained, a number of such factors can be identified which have the potential to become triggers, or obstacles.

Data and algorithm quality. When a chatbot generates wrong information, retrieves content without valid reference, or simply invents alleged factual 'knowledge', there may be an issue of algorithmic deficiency in the applied model. However, even if that can be optimized, the issue of the quality of the data used to train the system remains. As long as it is empirical data taken from reality, it may be imprecise, incomplete, or wrong in itself, causing the deficient output. Sourcing and utilization of data thus become critical factors of a principal nature.

To address the substantial value of data sources, the most preferred concept is that of data spaces. Unfortunately, beyond the basic issue of data quality, it bears conflicts of interest such as availability and accessibility for competing players. Favourable workarounds can be established, though, with so-called testing and experimenting facilities, where neutrality and independence can be ensured to overcome such conflicting interests. This requires an appropriate setup as well as organization of these facilities.

Risk governance. Among the risk categories for AI applications in general, the most relevant in the case of generative AI are fraudulent use and disinformation. It is safe to expect that no regulation whatsoever, be it ethical or legal, will be strong enough to put an end to the misuse of generative AI. Just the opposite: given how social media and platform technology have unwillingly boosted new forms of crime, fraud and societal destabilization in the past, we should prepare for another wave of destructive innovation ahead of us, making use of generative AI. The critical question here is whether individuals can be enabled to detect such misuse.

Two approaches appear to be most relevant. One is laid down in the Joint Roadmap of the EU-US Trade and Technology Council as one of the roadmap activities, i.e. monitoring and measuring existing and emerging AI risks[2]. The second originates, among others, from the EU AI ethics guidelines[3], which request transparency and suitable labeling of AI systems. There is good reason to embed both these means in regulatory obligations for AI developers and suppliers, because otherwise the vast majority of citizens will have no chance of self-protection and defence.

Human oversight. The large potential of AI goes hand in hand with profound uncertainty as to the impact it will have on well-established practices in our lives. It is therefore hard to forecast how far it may intrude into our individual or societal sovereignty. As with any other technology, we will have to choose what we want it for, and which implications we will not accept. And this is not a question of choosing once and for all, but of ongoing reflection and adjustment. Critical here are the concepts of responsibility and accountability that govern our societies and are closely linked with human decision making based on human oversight and control.

Europe responds by putting the human at the center of AI application (cf. the ethics guidelines referred to earlier). However, generative AI will inevitably disseminate across borders. Hence there will be a need to establish and harmonize global rules in this field. Smart regulation will be key, along with global co-operation. For the time being, that seems to be wishful thinking. But the EU-US Trade and Technology Council could well become the nucleus of a global body that offers the right place to negotiate smart regulation of overarching relevance.

____________________

Beyond the documents explicitly referred to, this text is based on papers published earlier and referenced in the documents section of this platform.

It was written by the author himself. No piece was generated by a machine based algorithm.

____________________

Norbert JASTROCH

eMail norbert.jastroch@metcommunications.de

____________________

[1] https://ec.europa.eu/commission/presscorner/detail/en/statement_23_2992

[2] https://ec.europa.eu/newsroom/dae/redirection/document/92123

[3] https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

 
