AI can predict your personality. Is this dangerous?

We all see AI progress all over the world. There are announcements of advances here and there. Today I noticed an article, "AI can predict your personality simply by scanning your eyes", based on this research paper. I will take it as an example for a discussion about AI regulation.

The research paper analyses eye movement and its relation to personality. The study is based on observation of 42 participants. Their data were analysed using a machine learning approach (a random forest classifier) to assess the level of selected personality traits. For this research, each personality trait was assessed on three levels (low, medium, high), both by a dedicated psychological test and by the AI, i.e. by the random forest classifier. The results from the AI were compared with the results from the psychological tests, and the accuracy was described by an F1 score, a particular measure of classification success. According to the research paper, the AI algorithm used achieves a mean F1 score of 40%-49% at best. Very roughly, this means that the personality trait level was properly classified for slightly less than half of the participants. This is certainly a better correlation than choosing a level at random, which would give an F1 score of about 33%, and it is certainly a relevant result.
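To make these numbers concrete, here is a minimal sketch in Python with scikit-learn (my own illustration, not the paper's actual pipeline or data) of how a three-level trait classification by a random forest can be scored with an F1 measure. The features and labels are random stand-ins for the 42 participants' eye-movement data, so the score should hover near the ~33% chance level rather than the 40%-49% reported in the paper.

```python
# Minimal sketch: three-level (low/medium/high) trait classification with a
# random forest, scored by macro-averaged F1. Features and labels are random
# placeholders, NOT the study's eye-tracking data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(42, 10))     # stand-in for eye-movement features (fixations, saccades, ...)
y = rng.integers(0, 3, size=42)   # 0 = low, 1 = medium, 2 = high trait level

clf = RandomForestClassifier(n_estimators=100, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=5)   # 5-fold cross-validation, purely for illustration

print("mean F1:", f1_score(y, y_pred, average="macro"))
# With random features this lands near the ~0.33 chance baseline; the paper
# reports a mean F1 of roughly 0.40-0.49 for the best-predicted traits.
```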

Now, the other article says "AI can predict your personality simply by scanning your eyes". Nice, catchy title. For sure it attracts readers. Is it correct? Well, it depends on the purpose. With an F1 score below 50% there is certainly some level of prediction, but there is still a high chance of misclassification. As with much research, the results can help and can endanger. Fortunately, the article is correct when it says that "On the more negative side, this discovery also screams of privacy implications." and that "Although these are fun (and freaky) things to think about, keep in mind that eye movement patterns aren't determinants for these character traits. These are simply tendencies that the researchers found had a correlation.". It is only at the end, but it is there.

This is how it works with AI: it is all about probability, about the chance of a correct classification. And this is where we have to be careful. It would not be wise to prohibit such research. The research is helpful. It enriches our knowledge and it can be a base for further research. What we need to be careful about is the interpretation of the results and the prevention of abuse. The research paper is actually sound in disclosing the relevant facts, including the calculation of the number of participants, written consent, compensation, methods used, approval by an ethics committee, etc., so the reader can assess whether the results are appropriate for a specific purpose.

Thoughts about the "responsibility" of Artificial Intelligence seem premature in this light. AI does not classify personality in the human sense. It just correlates data based on the provided sample. It is an algorithm, a calculation with a predictable result once trained. No magic behind it: zeroes, ones, math, an algorithm and a result. The selection and configuration of the algorithm is a kind of magic, but it is under the control of the developer. The trained algorithm can be tested and its reliability can be calculated. No "black-box" excuses. AI is a product like any other, with all the consequences.

The responsibility for outputs is shared between developers and users, as with any other product. They share responsibility for the adequacy of the AI model for a specific purpose, for ensuring its reliability and its compliance, and they should share the consequences of its decisions, same as with any other computer system or product. It needs to comply with all current regulations, including anti-discrimination (no excuse for AI or a black box if it takes a shortcut and judges a client based on race, nationality, religion, political beliefs, etc.), privacy (undisclosed psychological profiles, health conditions), security, etc. We need to protect customers: those who pay with money, those who pay with their personal data or time, as well as those who have no choice but to deal with AI because of the provider's dominance or mandatory use. All this without restricting research and development.
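To illustrate what such a compliance check could look like in practice, here is a hedged sketch with entirely hypothetical data: it compares a model's positive decision rate across a protected attribute and flags a large gap. The 80% threshold mirrors the "four-fifths rule" used in some jurisdictions; it is a red-flag heuristic for further investigation, not a legal test.

```python
# Hypothetical anti-discrimination spot check: compare approval rates across a
# protected attribute. The data below is random and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
decisions = rng.integers(0, 2, size=1000)   # 1 = approved, 0 = rejected
group = rng.integers(0, 2, size=1000)       # e.g. two nationality groups

rate_a = decisions[group == 0].mean()
rate_b = decisions[group == 1].mean()
print(f"approval rate, group A: {rate_a:.2f}; group B: {rate_b:.2f}")

# Four-fifths heuristic: flag if one group's rate falls below 80% of the other's.
if min(rate_a, rate_b) < 0.8 * max(rate_a, rate_b):
    print("Warning: possible disparate impact - investigate before deployment.")
```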

Regulators can help the commercial sphere deal with this responsibility by setting standards for training and testing AI models, where appropriate. The standards can be differentiated by the intended use of the AI model: stricter for healthcare, less strict for shops or social media; stricter when it comes to personality traits, less strict when it comes to weather prediction, same as with other products. Having standards is on the one hand an obligation, but on the other hand it is also a kind of protection: the standards set a baseline for what is considered sufficient care. Without a standard, any amount of care might not be enough.

Statistics, ML and AI are powerful tools and we should take care not to abuse them. It is right to make research comply with applicable ethical and other standards. The real threat is in the inappropriate use or abuse of the results. Humankind already has experience with abusing statistics of observable human properties to judge personality, intelligence, somebody's options, somebody's life. We should take care not to repeat such mistakes.

Tags
data protection regulation Artificial Intelligence GDPR machine learning Ethics psychometric profiling Machine Ethics

Comments

Submitted by Jaroslav BARAN on Fri, 03/08/2018 - 14:55

You have made some very interesting points, Richard. It would be interesting to know your opinion on the limits that regulatory standards might set on innovation. This was one of the topics mentioned earlier here in the Alliance, but also within the discussions of the High-Level Expert Group on AI that the European Commission has set up.

Kind regards,

Jaroslav

In reply to comment by Jaroslav BARAN

Submitted by Richard Krajčoviech on Tue, 07/08/2018 - 10:49

Thank you, Jaroslav. My view is that the same limits apply to AI innovation as to any other innovation, be it medical, food production, psychology, genetic manipulation, etc. There are plenty of examples of how to deal with it, including controversial and boundary cases. In a few words: I think that current measures should be sufficient, we just need to apply them properly. We should be very careful about relaxing them, and only do so after careful assessment of the consequences and benefits. Research should stay liberal, while producers and commercial users of AI products must bear primary responsibility for any damages. We should protect individual users adequately. It is important to prevent any legal subjectivity of AI or autonomous systems, and we need to investigate manipulative techniques in the virtual environment and their effect when used at scale. I go into more detail below. I would distinguish two major areas: 1. research and 2. innovation in commerce.

1a. Research dangerous to the public, because of pollution, explosion, or other direct threats to humans, the environment or property. This is generally regulated by liability requirements. Where the danger is by orders of magnitude higher than what the institutions performing the research can bear, as genetic manipulation was (and maybe still is), some biochemical research, etc., governments apply stricter rules and controls to protect the general public. I think within AI we are not at this point. We might require some general precautions, such as a "kill switch". Catastrophic scenarios, like AI exploiting a vulnerability and consuming all critical resources, are basically not possible without human intention. Anyway, better control over the ownership and use of computing resources and actors might help prevent this in the future: if stealing computing power or network capacity, or unauthorised manipulation of actors, were a crime, and operating systems better protected this property and, e.g., allowed tracing of its use by others based on legislative requirements, then that might be a useful precaution.

1b. Research dangerous to subjects: like testing of medicines, but also some psychological tests. Legally, this is again covered by the responsibility of researchers, with well-developed legislation around it. AI researchers just have to become aware that they might cross boundaries and might be doing things which are unethical in another field, like psychology or medicine. E.g. prediction of personality is psychological research and must comply with the relevant ethical standards.

1c. Research that might be abused: I am for a liberal approach here. Potential abuse should not be a limitation on research, for two reasons: abuse of anything (a gun, electricity, a computer, etc.) is a problem of the user, not the researcher, and, when speaking about arms, for our own safety we need to be on par with other powers. The control here should be on the side of commercial production and usage, as it is with e.g. chemicals.

2. Commercial innovation has a long history as well and is generally governed by responsibility for damage, in combination with legislation that governs areas where damage is hard to assess (marketing rules, data protection, anti-discrimination, anti-trust, sanctions, etc.), which are considered to be in the public benefit and where the damage is replaced by fines or licensing restrictions. For AI, all these regulations should apply as they are. What is new is how to apply them. The novelty is that a system can become e.g. racist or use prohibited marketing techniques without the intention of the developer or user. So basically we need to apply the existing legislation to this area and reinforce the rules in the commercial sector.

For example, the use of AI in healthcare should go through the same rigorous review as any other product, medicine or treatment method. We should distinguish here between the development of an AI model and the usage of an AI model. Development of an AI model is research; it is like the development of a new medicine. The model should go through testing just like a medicine and be approved or not approved for medical use. Once a model is developed, the owner of the model should have the right to license its use, so they can recover the research investment and make a profit. Usage of unapproved AI models should be prohibited, as is the use of any other unapproved medical equipment or medicine.

One real and new danger I see is the use of manipulative techniques in the virtual environment, where developers can build AI models that learn how to manipulate people, how to achieve a desired outcome at scale. These techniques have been used by humans against humans for a long time, and there are measures that limit their usage. We do not know what the effect of their usage in a virtual environment could be. We probably see some effects in the recent scandals related to social networks. We need to analyse what is going on here and maybe create new limits on techniques which are acceptable when applied individually but dangerous when applied at scale.

When it comes to "ethical decisions" by AI, as with autonomous systems, we should first of all require the producers of such systems to prevent those situations. Any autonomous system should prevent ethical and moral dilemmas in advance. How much prevention is enough can be discussed, but the conservative status is "as it is". Ethical and moral dilemmas cannot be solved by statistics, i.e. cannot be solved by artificial intelligence, simply because there is no agreed solution among humans. The rules differ from culture to culture, and have deep history and nuances we do not understand well. If we do not understand them, we cannot test how a system deals with moral dilemmas. When there is a moral dilemma, AI can only be in an advisory role, unless there is common agreement on how an autonomous system should solve some of them. And again, autonomous systems must not be responsible for their actions; it is the producers and users who must bear responsibility for the activities of their systems. A good example is animals: the owner of an animal is responsible for all actions of that animal and is required to take all precautions to prevent any damage. The owner of an AI is responsible for its actions if they do not follow the user guide (and should be careful what they are using), or the producer is responsible if there is damage while the user followed the user guide. And if necessary, legislation can create meaningful exceptions where reasonable and in the public benefit.

What is critical from my point of view is avoiding "black-box" excuses and preventing any legal sovereignty of AI. The comparison with legal entities is not correct, because a legal entity is represented by physical persons, and a crime of a legal entity is committed by a physical person (or a group of them). A legal person cannot commit a crime or wrongdoing without the activity of a physical person. A legal person does nothing on its own; it is a legal abstraction over human activities. The directors, the board or other physical persons are ultimately responsible for the most serious actions of a legal person. Giving any type of rights or citizenship to AI would be different: it would make nobody responsible for its activities, which is a totally new thing, and it would be the act that might lead us to huge problems, such as whether we are even allowed to switch it off.

 

Submitted by Vladimiros Pei… on Sat, 04/08/2018 - 11:36

We are more dangerous to our species than any AI. 

Not only will AI be able to predict your character, but even your next moves, your "personal choices" (as we call movements influenced by an unknown source of frequencies) and more.

AI will be able to predict illegal activity and stop it before it happens. Is this legal or ethical? Well... it's complicated. Most AI philosophers and researchers, being obsolete, would argue that this should never happen.

But let's forget that we are egoists for a moment. Practically, if you see a fire starting in a forest, do you wait for it to burn the whole forest before deploying firefighting services? No. Same with humans. If you know someone is going to harm another individual human unit or mechanism of the big movement, you should stop him immediately. You can't wait for that person to kill someone only to say, "alright, I have all the evidence now, I can stop him".

Yes, we will have a huge issue with this, as the courts will be full of these kinds of cases, where "innocent" killers will swear they would never have killed even if the machinery hadn't stopped them.

Thanks to AI again, we will eventually change the current "justice" system into an AI-powered system that will not care about your tears and lies. If anything, lying to it will only extend your sentence. 

The AI judge will know who is responsible for what. Court proceedings will take place via a smart handheld device or computer, and the criminal will be obliged to answer the call and interact with the machine. Avoiding the call will result in a penalty.

Of course, the concept of "jail time" will also be altered, as in the future someone who is in jail will practically be walking free, only he will lose internet access. It sounds funny nowadays, but give it some time and you'll experience it first hand. In a world where everyone is absorbed by his devices, the one who is not connected to the network will be left outside: no one will talk to him, nor will he be able to pay for his goods. He can't communicate with anyone, not even with family, as everyone communicates using the network he has been cut off from. That will drive the first two generations of criminals "crazy". Then it will be as usual as keeping a person in a 1x1 cage with no light, food or water. I mean, the concept of jail is natural to us, right? Thanks to our adaptability, more changes are around the corner, and again, like every time, they will not benefit the scammers.

While we are discussing AI, the real autonomous intelligence is already working in harmony.

 

In reply to comment by Vladimiros Pei…

Submitted by Richard Krajčoviech on Sun, 05/08/2018 - 21:07

Psychologists already predict mental disorders that might be dangerous to the patients or to the general public, and we do not jail those people; we try to help them and give them the best life we can. It is not their fault. If a properly tested and reviewed AI model helps psychologists discover dangerous behaviour, so much the better. The emphasis is on proper testing and review, to avoid the abuse of science for wrongdoing. As a basic example, if a group of people with some characteristic is more involved in crime, it does not mean all of them were born criminals. And even if the correlation is extremely strong, as with some mental disorders, such individuals need help instead of jail. There are other experts to define what help or protection they need. Humankind has a long history of wrongdoing based on logic, science or beliefs that later proved to be wrong.

In reply to comment by Vladimiros Pei…

Submitted by Benjamin Paaßen on Tue, 07/08/2018 - 14:39

I would like to comment that the objectivity of AI systems should not be overrated. Indeed, AI systems can easily be biased by virtue of wrong model assumptions or flawed data. And AI systems will apply their bias relentlessly and without remorse. These kinds of errors are, at present, more realistic and, I would argue, more dangerous than dystopian scenarios of ubiquitous and all-knowing AI technology.