How far can public service tasks be delegated to AI?

By Franco Accordino, Head of Unit - Knowledge Management and Innovative Systems at DG CONNECT, European Commission

For many years, automation in public administration was synonymous with optimising existing routine tasks, procedures and workflows. Today, with the massive explosion of data and the growing take-up of AI technologies, the scope is much wider than office automation. It encompasses tasks that require high-level cognitive skills and that were for a long time the prerogative of public servants, such as understanding texts, drafting summaries, recognising trends in data or supporting decisions. In the future, we can imagine entire parts of the policy-making lifecycle, from evidence gathering up to impact analysis, being facilitated by AI agents, or even delegated to them.

As a public service, the European Commission is pursuing these developments, in line with the eGovernment Action Plan. Here are a few illustrative examples.

1) Where does this document go?

Every day, public administration departments handle thousands of documents that, for transparency or auditing purposes, may need to be registered, categorised and filed into Document Management Systems. The decision whether or not a document has to be filed depends on the "importance" of the information it contains, i.e. whether it is used to support a policy decision, a legal or financial transaction, etc. Up to now, this time-consuming and error-prone activity has been done manually by employees on the basis of their personal assessment, because the relevance of documents is rather subjective and "undecidable", i.e. it cannot be determined "programmatically". As part of a corporate initiative on data, information and knowledge management, we are exploring the possibility of using machine learning and natural language processing algorithms to predict how a document should be treated and to advise public servants accordingly.
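
A minimal sketch of this kind of advisory classifier is shown below, assuming a labelled history of past filing decisions; it uses a generic TF-IDF and logistic regression pipeline, and all texts, labels and names are invented for illustration rather than taken from the Commission's actual systems.

```python
# Minimal illustrative sketch: learn from past filing decisions and suggest how to
# treat a new document. Training data, labels and names are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical history: document texts and the decisions taken (1 = file, 0 = do not file)
texts = [
    "Financial commitment supporting grant agreement 2018/123.",
    "Lunch menu for the staff canteen, week 45.",
    "Legal opinion on the proposed funding decision.",
    "Reminder to update the office plant watering rota.",
]
decisions = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, decisions)

# The system only advises: the predicted probability is shown to the public servant,
# who keeps the final decision.
new_document = "Draft financial transaction for the regional funding programme."
probability = model.predict_proba([new_document])[0][1]
print(f"Suggested probability that this document should be filed: {probability:.2f}")
```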

2) What is this text talking about?

Public services are more and more confronted with the need to quickly process large amounts of data - evidence - to support their decisions. For instance, when preparing a policy proposal, the European Commission has to gather legal and scientific evidence on the subject matter, e.g. a competition case or a funding decision, which can result in millions of documents and terabytes of data. Furthermore, under the "Better Regulation" agenda, the European Commission has to consult stakeholders through structured surveys that can gather millions of inputs; for instance, the recent 'Summer Time' consultation received 4.7 million replies. Such a wealth of data and documents has to be analysed and summarised in a short timeframe and reflected in the policy proposal prepared by the public servants. Recently, this intellectual effort has been facilitated by the Data Analytics Services (DORIS), a set of tools based on machine learning and natural language processing developed by DG CONNECT to address these challenges. The same tools have been re-used for several other business needs, such as analysing the impact of large public investments in specific policy areas.
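
DORIS itself is an internal toolset, but a rough sketch of the kind of processing involved, grouping consultation replies into themes so that they can be summarised, could look as follows; the replies, the number of themes and the parameters are invented for illustration and do not describe the DORIS implementation.

```python
# Illustrative sketch only: cluster consultation replies into themes and list the most
# characteristic terms of each theme, as a first step towards summarisation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

replies = [
    "Please keep summer time, the long evenings are good for tourism.",
    "Abolish the clock change, it disturbs sleep and health.",
    "The time change is bad for children and for road safety.",
    "Longer summer evenings help outdoor sports and local business.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(replies)

# Two themes are assumed here purely for the example.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, centroid in enumerate(kmeans.cluster_centers_):
    top = centroid.argsort()[::-1][:5]  # five most characteristic terms per theme
    print(f"Theme {i}: {', '.join(terms[j] for j in top)}")
```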

3) Are we ready to delegate more policy making tasks to AI?

A longer-term scenario sees Artificial Intelligence used even more systematically throughout the entire policy-making lifecycle. Our ability to gather accurate data from the real world will allow policy processes to be continuously fed with fresh evidence about ongoing trends and events. For instance, we will be able to more easily evaluate the impact of previous policy measures, design new policies and test their effectiveness "in silico", and discover unforeseen correlations between different policy domains and actions.

As a consequence of such an "extreme" digitisation of policy processes, we can envisage that certain decisions, merely requiring analysis of evidence, can be delegated to AI agents. For instance, an algorithm could decide whether traffic in a city should be limited to green vehicles (e.g. during peak pollution hours), whether a piece of public infrastructure requires maintenance, or whether economic policies need adaptation, based on the variables and equations that describe those systems. In such a context, the quality of data as well as the transparency, robustness and liability of AI algorithms will become crucial.
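
As a purely hypothetical illustration of such an evidence-driven decision, the snippet below encodes a traffic restriction rule as a small, transparent and auditable function; the thresholds, field names and data are invented, and a real deployment would require validated measurements, agreed rules and human oversight.

```python
# Illustrative sketch: an evidence-driven rule that advises whether to restrict city
# traffic to green vehicles. Thresholds and readings are invented for illustration only.
from dataclasses import dataclass

@dataclass
class AirQualityReading:
    pm25: float             # fine particulate matter, micrograms per cubic metre
    no2: float              # nitrogen dioxide, micrograms per cubic metre
    hours_above_limit: int  # consecutive hours above the applicable limit

# Hypothetical thresholds, not the actual legal limit values.
PM25_LIMIT = 25.0
NO2_LIMIT = 40.0

def advise_green_vehicle_restriction(reading: AirQualityReading) -> bool:
    """Return True if the evidence suggests limiting traffic to green vehicles."""
    sustained_exceedance = reading.hours_above_limit >= 3
    return sustained_exceedance and (reading.pm25 > PM25_LIMIT or reading.no2 > NO2_LIMIT)

if advise_green_vehicle_restriction(AirQualityReading(pm25=38.0, no2=55.0, hours_above_limit=4)):
    print("Recommendation: limit traffic to green vehicles during peak hours.")
```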

However, as policy decisions also rely on factors other than factual evidence, there will be a need to build "non-measurable" elements into the policy-making model, for instance the legitimate interests of stakeholders, the influence of ideologies, or the emotional flows that characterise today's public discourse on social networks. In order to capture this type of knowledge, AI must be able to discern the rational, evidence-based and emotional components of policymaking. This can be facilitated by the introduction of semantic models, such as the Policy Making 3.0 model introduced here.
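
A very rough sketch of what "discerning" rational and emotional components could mean in practice is given below; the keyword lists are invented placeholders, and a real system would rely on trained sentiment and argument-mining models rather than simple keyword counts.

```python
# Toy sketch: estimate how much of a stakeholder contribution is evidence-based
# versus emotional. The marker word lists are invented placeholders.
EVIDENCE_MARKERS = {"data", "study", "statistics", "measured", "evidence", "report"}
EMOTION_MARKERS = {"outrageous", "love", "hate", "afraid", "wonderful", "disaster"}

def profile_contribution(text: str) -> dict:
    """Count evidence-like and emotion-like signals in a single contribution."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return {
        "evidence_signals": len(words & EVIDENCE_MARKERS),
        "emotion_signals": len(words & EMOTION_MARKERS),
    }

print(profile_contribution("The study data show a measured effect, but people are afraid of change."))
```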

The greater human ability to interconnect with peers, co-create, co-decide and, in theory, have a say on any policy matter raises the question of whether such processes can be made "scalable" and sustainable in the common public interest.

Our ability to adapt policy-making processes to this new hyper-connected world, particularly the ability of AI to discern emotional and rational components, and to make sense of large conversations on social networks, will determine the accuracy and effectiveness of policy decisions and the robustness of democratic processes.

Finally and most importantly, we can delegate even more policy-making tasks to AI only by ensuring that the underlying technologies are trustworthy and embed the EU’s fundamental values. This requirement applies to AI in general, but it is even more crucial in such a complex application domain as policymaking.

Tags
eGovernment evidence-based policy public services Futurium policy making

Comments

Posted by Kai Salmela, Thu, 01/11/2018 - 09:49

Very interesting.

One remark on this: natural language systems currently treat languages very differently. Indo-European languages have an advantage over other languages, and even within that language group, Anglo-Germanic languages have a great advantage over the rest of the spoken languages.

As my native tongue is Finnish, I'm very worried about this development. AI should get very strong support for all languages, even those that are not simple to encode in a form that AI understands.

This is one of the reasons why Robocoast has made a proposal to the EU AI Alliance HLEG that there should be a European Full Stack standard for AI. An exchangeable language layer would be a great step toward equality in the language issue.

wbr Kai Salmela

Robocoast, EU DIH


Posted by Piotr Mieczkowski, Thu, 01/11/2018 - 22:26

I'm still amazed that the European Union, a union of 28 member states, is NOT a leader in chatbots and NLP.

It's amazing because, compared with the US and China, we Europeans use many languages, and because of that we should be a leader in language translation and processing.

Unfortunately we sold all our apps to the US. Did you know that the real name of Amazon Alexa is Ivona or EVI? Amazon acquired the Polish startup IVONA Software.

https://techcrunch.com/2013/01/24/amazon-gets-into-voice-recognition-bu…

Still, a lot of Alexa features are developed in Poland.

Did you know that Apple Siri was also developed with help from French citizens?

Definitely, language should be a TOP skill for AI startups in Europe. In an alternative world we could all switch to Esperanto - a language created by a Polish-Jewish ophthalmologist.


Posted by Kai Salmela, Fri, 02/11/2018 - 13:26

You're absolutely right.

As far as I know, almost every speech recognition program has a European origin, so that leads me to think that we still have this ability. Where are these language professionals and where can we find them?

I'm afraid that Esperanto doesn't really cut it, since the main benefit of language is for those who cannot use computers in other ways, like elderly people. For them, a speaking chatbot that can alert help if necessary would be a killer apparatus. I think the closest equivalent may be something like Google Home in some areas. Needless to say, it doesn't work that way in most countries and in non-Anglo-Germanic languages.

And then we have the small languages, like the Finno-Ugric ones, which do not yet have proper language recognition programs. They have simply proven to be a tough nut for computer algorithms, with word structure that changes depending on conjugation.

But I'm pleased to know that I'm not alone with these thoughts.

wbr Kai Salmela