Artificial intelligence: a new privacy challenge.

The last decade has witnessed a profound transformation in the personal and professional sphere due to technological progress. This represents a paradigm shift in terms of improved decision-making, productivity and efficiency in society.

In industry, many organisations use digital tools to coordinate tasks between their teams through, among other means, email, video calls or instant messaging. This is where Artificial Intelligence plays a pivotal role.

What is Artificial Intelligence?

Artificial Intelligence (hereinafter, AI) mimics human actions to perform tasks; in other words, it is a discipline that aims to create machines and systems capable of acting and reacting like a human being, simulating human behaviour. Some examples of this intelligence can be found, inter alia, in autonomous cars, chatbots and virtual reality.

According to the latest version of the Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), an AI system is defined as: “a system that is designed to operate with elements of autonomy and that, based on machine and/or human provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic and knowledge based approaches, and produces system-generated outputs such as content, predictions, recommendations or decisions, influencing the environments with which the AI system interacts”.

The emergence of AI and related technologies may lead to major societal changes. The integration of AI into our daily lives can be a revolution in our history: the Fourth Industrial Revolution. As reported in the scientific journal Science: “As with all revolutions, it holds great promise as well as dangers. Outrage at large-scale privacy breaches demonstrates the perils of taking protection of personal data lightly and reminds us that technological progress challenges policies, values, and approaches to ethics”.

What are the latest legal developments regarding AI?

Recently, the European Council published the final version of the Artificial Intelligence Act, which was adopted by the European Commission with the aim of establishing harmonised rules on AI. The following are some relevant points of the Regulation:

  • Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
  • ‘Real-time’ remote biometric identification systems should not be used lightly. Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State.
  • Prohibition of AI systems that are likely to cause physical or psychological harm to citizens through subliminal manipulation, social scoring, or indiscriminate remote surveillance.
  • A classification of artificial intelligence systems around risk levels. The regulation follows a risk-based approach, differentiating between uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk. The list of prohibited practices comprises all those AI systems whose use is considered unacceptable as contravening Union values, for instance by violating fundamental rights.
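The risk-based approach outlined above can be illustrated with a minimal sketch. Note that the tier names and the market-placement rule below are a hypothetical simplification for illustration, not the Regulation's actual taxonomy or legal test:

```python
from enum import Enum

class RiskLevel(Enum):
    """Simplified illustration of the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"      # prohibited practices (e.g. subliminal manipulation)
    HIGH = "high"                      # permitted, but subject to strict requirements
    LOW_OR_MINIMAL = "low_or_minimal"  # largely unregulated beyond transparency duties

def may_be_placed_on_market(level: RiskLevel) -> bool:
    """A system whose use is considered unacceptable may not be marketed at all."""
    return level is not RiskLevel.UNACCEPTABLE

print(may_be_placed_on_market(RiskLevel.HIGH))          # True
print(may_be_placed_on_market(RiskLevel.UNACCEPTABLE))  # False
```

The point of the sketch is that prohibition operates at the level of the practice itself, while high-risk systems remain marketable subject to compliance obligations.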

In addition, further requirements are established for high-risk AI systems, such as:

  • A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating.
  • High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights.

Furthermore, the Commission and national authorities will establish an EU-wide database for stand-alone high-risk AI systems with mainly fundamental rights implications. The database will be operated by the Commission and provided with data by the providers of the AI systems, who will be required to register their systems before placing them on the market or otherwise putting them into service.

How is data protection regulated in AI systems?

AI systems require vast amounts of data that could potentially include information relating to identified or identifiable natural persons and, thus, personal data. Therefore, when personal data is to be processed, such operations will be regulated by the General Data Protection Regulation (hereinafter GDPR), triggering the application of all its provisions including, among others, the principles of fair and transparent data processing.

The analytical powers of Big Data and AI can elaborate profiles and make automated decisions with the potential to affect the fundamental rights of individuals in relation to the processing of their personal data. Hence, the provisions of the GDPR relating to automated individual decision-making, including profiling, play an important role in this regard.

In brief, these GDPR provisions will only apply when a decision is based solely on automated processing, including profiling, which produces legal effects concerning the data subject or similarly significantly affects him or her.

In this case, the data subject shall have the right, in addition to the right to object to such processing, to access the personal data being processed concerning him or her and to be informed about:

  • The applied logic. This can be understood as the information used by the AI system. For example, if a loan application is rejected, Article 22 GDPR may require the controller to provide information on how the system's algorithm has processed the data subject's personal data to make such an automated decision.
  • The significance and expected consequences of the processing for the data subject. For example, the refusal of a loan.
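The two information items above could be bundled into a single record handed to the data subject. The following minimal sketch illustrates this; the class and field names are hypothetical and not prescribed by the GDPR:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionNotice:
    """Hypothetical record bundling the information a controller would
    provide to a data subject about a solely automated decision."""
    decision: str       # the outcome, e.g. "loan refused"
    applied_logic: str  # plain-language summary of how the algorithm used the data
    significance: str   # expected consequences of the processing for the data subject

notice = AutomatedDecisionNotice(
    decision="loan refused",
    applied_logic="income and credit history were scored against a repayment-risk model",
    significance="the applicant cannot obtain the requested credit",
)
print(notice.decision)  # loan refused
```

A plain-language summary, rather than the raw model internals, is what a data subject could realistically act on when exercising the right to object.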

The GDPR has modernized the EU regulatory framework, but international coordination is needed to seek legal interoperability across countries and regions.

Conclusions

The potential of artificial intelligence techniques offers an optimal opportunity to develop new technologies and ways of working that contribute to the development of humanity. At the same time, there is a widespread belief that machines may pose a threat to certain jobs. However, companies and organisations should leverage the opportunities offered by AI. Among others, AI frees people from routine and repetitive tasks and allows them to spend more time on creative functions.

While it seems obvious that AI cannot replace a professional's empathy with their customers, it can help professionals get to know their customers better in order to offer a better service. Therefore, organisations must train their co-workers to maximise the benefits offered by this new technology, but without neglecting legal and ethical compliance, which is pivotal to achieving a responsible and reasonable use.
