ChatGPT’s rapid entry – Urgent for the government to appoint an authority for responsible AI

The much-discussed AI solution ChatGPT challenges us in a completely new way. With the opportunities that the generative AI system creates, we are at the same time facing a historically important crossroads: should AI control humans, or the other way around? Swedish business and Swedish authorities need an ethical approach to AI, but above all support in the form of a risk assessment framework to guide the development of AI solutions in accordance with our values. In addition, the EU's AI legislation for responsible AI is around the corner. It is urgent that the government appoints a responsible authority for ethical AI.

ChatGPT and other generative AI systems (1) create high expectations of a long-awaited democratization of AI. More people can take part and create new content produced by algorithms trained on historical data. The threshold for organizations to realize the value of the technology is lowered. New, innovative business models arise with a focus on strengthened customer and citizen relations.

At the same time, individuals and society are exposed every day to unintended ethical and legal breaches as a result of unregulated AI solutions. One of the reasons is that AI models learn from historical data that may contain unwanted bias, which can lead to social exclusion and discrimination. Violations also occur, for example, when we as individuals approve that data is used in a certain context without awareness of how algorithms can create insights that violate privacy when that data is combined with other data. This is costly for companies and authorities, for example through withdrawn investments and reputational disasters. The resignation of the government in the Netherlands in 2021 has been traced to an AI solution that falsely accused parents receiving childcare benefits of fraud. Furthermore, a test of a medical chatbot based on OpenAI's GPT-3 resulted in the patient being recommended to kill themselves. (2)
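To make the bias mechanism concrete, below is a minimal sketch of a disparate-impact screen that can be run on historical decision data before a model is trained on it. The records, group labels, and the 0.8 "four-fifths" threshold are illustrative assumptions on our part, not taken from the article.

```python
# Minimal sketch: screening historical decision data for inherited bias.
# All records and the 0.8 threshold below are illustrative assumptions.

historical_decisions = [
    # (group, approved)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rate(records, group):
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(historical_decisions, "group_a")
rate_b = selection_rate(historical_decisions, "group_b")

# Disparate-impact ratio: a common screening heuristic flags values
# below 0.8 as potential discrimination in the data.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: a={rate_a:.2f}, b={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Warning: the historical data shows disparate impact; "
          "a model trained on it is likely to reproduce this bias.")
```

A check like this does not remove the bias, but it makes the risk visible before a model is deployed, which is exactly the kind of step a risk assessment framework would mandate.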

Generative AI systems have a higher degree of ethical risk exposure than other AI solutions. One reason is that the use of generative AI systems often takes place through APIs, which provides a low degree of transparency into the solution and a lack of control over its development. At the same time, ethical and legal breaches occur subtly and suddenly, often too small to detect at first, but with the risk of exponential damage. An example is deepfakes, which in practice are often used to manipulate a face and a voice in a video or photo. The fact that generative AI models are often made available as APIs can make it easier for users to access and use the technology, which can increase the risk of abuse. API-based generative AI models are not inherently risky, but the way they are used and the data they are trained on can make them risky.
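To illustrate how little visibility an API consumer actually has, here is a minimal sketch of API-based use of a generative model. The endpoint and model name reflect OpenAI's public completions API as it looked around the time of the article and should be treated as assumptions; substitute your own provider's equivalents.

```python
# Minimal sketch of consuming a generative AI model through an API.
# Endpoint and model name are assumptions based on OpenAI's public
# completions API circa early 2023; adjust for your provider.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"
api_key = os.environ["OPENAI_API_KEY"]  # assumed to be set in the environment

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "text-davinci-003",  # assumed model name
        "prompt": "Summarize the EU AI Act in one sentence.",
        "max_tokens": 60,
    },
    timeout=30,
)
response.raise_for_status()
text = response.json()["choices"][0]["text"].strip()

# This string is all the consumer ever sees. The training data, the model
# weights, and when the provider updates the model are all opaque, which
# is the transparency gap described above.
print(text)
```

The design point is that nothing in this code, or available to the organization running it, reveals what the model was trained on or when it changes, which is why API-based use shifts the burden onto external risk assessment.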

The EU's AI legislation, the AI Act, comes into force in 2024. The goal is for citizens and consumers to feel trust and for the EU's value base, including human rights, to be maintained. In the AI Act, the EU Commission proposes that member states must appoint or establish at least one supervisory authority responsible for ensuring that the "necessary procedures are followed." Given the enormous potential of generative AI systems, not least in healthcare (e.g. to correctly diagnose and optimize the treatment of diseases), it is crucial that risk assessment frameworks are available to relate to and that a responsible authority is appointed. AI ethical frameworks create a heightened innovation climate in all sectors. It is therefore urgent to appoint this responsible authority and to make risk assessment frameworks available (3), so that Swedish business and Swedish authorities do not respond to the AI Act with the lack of vigilance that characterized their response to the GDPR.



The article was published in Dagens industri on 18 January 2023.

List of authors: Anna Felländer, Josefin Rosén, and Li Felländer-Tsai.

1 Generative AI refers to artificial intelligence that can generate novel content, rather than simply analyzing or acting on existing data. Generative AI models produce text and images: blog posts, program code, poetry, and artwork. The software uses complex machine learning models to predict the next word based on previous word sequences, or the next image based on words describing previous images (a toy illustration of next-word prediction follows after these notes). In the short term, we see generative AI used to create marketing content, generate code, and in conversational applications such as chatbots.

2 https://www.artificialintelligence-news.com/2020/10/28/medical-chatbot-…- themselves/

3 E.g.: https://anch.ai/publications/achieving-a-data-driven-risk-assessment-me…
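To make the next-word prediction mechanism in note 1 concrete, here is a deliberately tiny sketch. The corpus and the bigram (one word of context) simplification are our own illustrative assumptions; real generative models use large neural networks conditioned on much longer sequences.

```python
# Toy illustration of next-word prediction from historical text.
# The corpus is made up for the example; real systems learn from
# vast datasets with deep neural networks, not bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Record, for each word, which words historically followed it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent historical successor of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # prints "cat": it follows "the" twice in the corpus
```

Generating text is then just repeated prediction, feeding the model's own output back in as the next context, which is also why biases in the historical data reappear in what the model produces.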

Tags: Chatbots, AI, NLP, AI Governance

Comments

Submitted by Roberto Magnani on Wed, 08/02/2023 - 21:52

Important and urgent topic. From a geopolitical viewpoint, ChatGPT looks like an attack on Google's business model, and Google's announcement of an imminent generative AI solution, Bard, confirms the ongoing race for supremacy in this field. It is a situation that can only worsen the degree of ethical risk behind generative AI.

The effects and power of these new tools can make them not only tools that reflect common sense in their results but real new producers of public opinion, among other risks.

It is therefore urgent to appoint a responsible authority that sets the "guardrails" as soon as possible.