Should we welcome and trust Artificial Intelligence (AI)?

It depends whom you ask [1]:

  • Industry players, driven by money, would say ‘Yes’.
  • End-consumers, concerned about privacy, would say ‘No’.
  • Others would say ‘Don’t know’.

We know that AI could bring health and prosperity to anyone willing to use it. So why such division?

The problem was, and still is, science-fiction movies, which many people transliterate into reality – and, most obviously, the data leaks into the wild over the last two decades, which are still going on. We know that the press likes sensational articles and documentaries. These media instill fear in people who are neophytes.

The GDPR was created to prevent such leaks, and it has been partly successful, but its drafters forgot the manufacturers of information-technology devices and software, who invest the minimum, or nothing, in security testing – leaving holes in their firmware that are exploited by rogue hackers and governments. A legitimate question comes to the fore: ‘Where is the responsibility of the manufacturers?’



Where should AI be regulated?

Many speak about high-risk zones, but what are these? To mention just a few:

  • Militarized weapons, the so-called ‘Lethal Autonomous Weapon Systems (LAWS)’ [2] (it was a big mistake not to include military AI in the EU Commission guidelines!)
  • Health industry, e.g. hospitals, social security services, private physicians, …
  • Banking industry, e.g. financial reports, credit card information, …
  • Justice system, e.g. information pertaining to children
  • People’s personal data, e.g. photographs, documents, recorded conversations, …

You might say that these areas are already covered by the GDPR, but let’s assume that a minor wants to disclose private information about himself or his family and asks the AI to do it. Here lies the problem: it was authorized by a human. A child is not aware of the dangers of disclosing information to the public.



Will people trust the EU Commission’s AI guidelines?

Very difficult to tell. According to many (news articles have been written about it), the Commission and other politicians were “inactive” for a long time, not listening to European citizens. Only in the last few years have European politicians “woken up”. Many still do not trust the GDPR, despite its being written into law. This distrust can still be seen in the ongoing data leaks and in corporations going “unpunished” with their lame excuses (monetary fines do not solve the problem).

I think the Commission will have a hard time.

Trust must be earned; it is not simply granted.

 

I would like to hear your thoughts about these ideas and experiences (most of which I have experienced myself).

 

[1] We conducted a non-scientific survey about the EVA Smart City project and got similar responses, even though people were told that the AI holding the inhabitants’ data would be self-contained and accessed only under a legitimate court warrant or in an extreme emergency, e.g. a molested or missing child, or someone in danger.

To learn more about this project, follow the hashtag #EVASmartCity on LinkedIn (be aware that those articles and posts are outdated, and that our proposal, which concerned climate change, was largely refused by politicians and corporations who “love the status quo”).

[2] Wikipedia, 2020

Tags
trust privacy Commission data privacy data EU health banking AI Government Justice Artificial Intelligence hacker politicians military

Comments

Posted by Vasco Gonçalves, Sat, 22/02/2020 - 09:41

Yesterday evening (February 21, 2020) I had a nice debate about AI with friends, including some of their children (who are on vacation). It is a hot topic.

The conclusion was the same as indicated at the beginning of this article – a majority said: “Don’t know.”

Some of the teenagers said that they would, for sure, welcome AI 100% into their homes, despite their parents being opposed. One parent even argued, rightly, that the regulation requiring pornographic sites to show an “I’m more than 18 years old” banner is not working. Children gain access by just clicking on the banner – with no consequences.
So, what will happen with AI and minors? The same banner, before private data is sent into the wild?

I find the academic papers and white papers fascinating, but they are far removed from the realities of life. To be honest, how many academics have spoken informally with others about AI, or other subjects, and included those conversations in their reports?
They work through official channels, presenting a set of questions to which people answer more or less truthfully.

We must get out there and have real informal talks with people – not just academics, but politicians too. Only there do we get a sense of what people think about this topic, among others.

Coming back to children: if parents want to protect their children, they must give up certain private data, for instance by monitoring their children’s activities, or by giving identification data to a secure site which then redirects to approved websites. I would recommend making it compulsory for AI services, and for pornographic sites, to be registered on such a secure site; a rough sketch of how that hand-off could work follows below.
I know parents would be against monitoring their children, but we should not forget that parents are responsible for their children’s activities. We must instill that into their minds, and it must be done in a practical way.
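By way of illustration only, here is a minimal sketch of such a hand-off, assuming a token-based flow between the secure registry and a registered site (the account name, the shared secret, and the flow itself are invented assumptions, not an existing system):

    // Hypothetical sketch (TypeScript/Node): a trusted registry issues a
    // signed, short-lived "adult" token, and registered sites verify it
    // before serving restricted content. Everything here is invented for
    // illustration; a real deployment would need proper key management.
    import { createHmac } from "node:crypto";

    const SHARED_SECRET = "registry-demo-secret"; // stand-in for real key management
    const MAX_AGE_MS = 5 * 60 * 1000; // token stays valid for five minutes

    function issueAgeToken(accountId: string, isAdult: boolean): string {
      const payload = `${accountId}:${isAdult}:${Date.now()}`;
      const signature = createHmac("sha256", SHARED_SECRET)
        .update(payload)
        .digest("hex");
      return `${payload}:${signature}`;
    }

    function verifyAgeToken(token: string): boolean {
      const splitAt = token.lastIndexOf(":");
      const payload = token.slice(0, splitAt);
      const signature = token.slice(splitAt + 1);
      const expected = createHmac("sha256", SHARED_SECRET)
        .update(payload)
        .digest("hex");
      const [, isAdult, issuedAt] = payload.split(":");
      return (
        signature === expected &&
        isAdult === "true" &&
        Date.now() - Number(issuedAt) < MAX_AGE_MS
      );
    }

    // A registered site would call verifyAgeToken() before serving
    // restricted content, instead of trusting a click-through banner.
    const token = issueAgeToken("parent-account-42", true);
    console.log(verifyAgeToken(token)); // true

The point of such a design would be that the age check happens once, at the registry, and the sites themselves never see the identification data – only the token.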

Posted by Matthieu Vergne, Sat, 22/02/2020 - 19:58

I don't think that "if parents want to protect their children, they must give up certain privacy data". Regarding the banner example on 18+ websites, for instance, parents can set up their children's device in a "children mode" such that the browser spots when a website asks for 18+ authorization and proactively leaves the website rather than waiting for an answer. This particular example can be dealt with by a simple technical solution, without having to consider any personal data (see the sketch below). The only point, from a political perspective, is to push for implementing these kinds of solutions. The problem is that politicians prefer to push for solutions that companies can make money from rather than actual solutions to actual societal problems.
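Something like this minimal content-script sketch, assuming a browser extension that is switched on once for the child's device (the phrase list and the blank-page redirect are illustrative choices, not an actual product):

    // Hypothetical "children mode" content script (TypeScript): spot common
    // age-gate phrases and leave the page before the child can click through.
    // The patterns and the redirect target are invented for illustration.
    const AGE_GATE_PATTERNS: RegExp[] = [
      /i am (over|at least) 18/i,
      /are you (over|at least) 18/i,
      /18\+ (only|content)/i,
    ];

    function pageShowsAgeGate(): boolean {
      const text = document.body?.innerText ?? "";
      return AGE_GATE_PATTERNS.some((pattern) => pattern.test(text));
    }

    // In a real extension this script would be registered in the manifest
    // and run on every page load while the children-mode setting is enabled.
    window.addEventListener("DOMContentLoaded", () => {
      if (pageShowsAgeGate()) {
        // Leave proactively rather than wait for the child's answer.
        window.location.replace("about:blank");
      }
    });

Note that no personal data is read or sent anywhere: the check runs entirely on the device.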

The point is really more about pushing for the right solutions to be implemented than about accepting a tradeoff with privacy.

Posted by Vasco Gonçalves, Sat, 22/02/2020 - 20:37

The problem we have is that many parents are not as tech-savvy as many of us are.

The majority want things as simple as possible. Nobody wants to fiddle around and read manuals. How many people read the manual when they buy a car? Very few. That is why designers tend to simplify interaction with appliances (though the majority still fail).

Many mobile users do not even know how to operate their devices properly. I consider myself a tech-savvy person (since a young age), but I still have to look around to set up something more complex – it takes time.

Can you imagine some busy parents doing that?

Politicians may mandate such systems, but very few people will use them – not because they do not want to, but because they have neither the time nor the desire to do it.

Often both parents work. Coming home, caring for the household and the children, … then putting them to bed – what do they desire afterwards?

To have their peace for the rest of the evening.

Posted by Matthieu Vergne, Sun, 23/02/2020 - 13:00

The difficulty of use is a matter of design. Once again, in what I suggest, parents only have to set the device to some kind of "children mode". That is basically one password-protected setting to check, no more. It can even be set by the vendor when the phone is bought. The rest is for regulators and technology groups to take care of. Parents don't have to care about the technical details.

So although I agree with the argument, it is simply irrelevant here, because it applies only to poorly designed solutions, not to any technical solution.

Posted by Vasco Gonçalves, Sun, 23/02/2020 - 21:51

Passwords should not be used at all! People in general pick any kind of password, e.g. birthdays, 'love you', the cat's name, ...

A few years back, I happened to see a client enter a password – the name of his daughter.

You may tell people that they have to add symbols, numbers, or whatever, but they always come back to simplistic passwords.
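As a toy illustration of why such rules fail (the 'personal facts' below are invented for the example): a password can satisfy the usual complexity checks while still being assembled from easily guessed details.

    // Toy illustration (TypeScript): a password can pass naive complexity
    // rules while still being built from easily guessed personal facts.
    const personalFacts = ["sophie", "1987", "whiskers"]; // daughter, birth year, the cat

    function passesNaiveComplexityRules(password: string): boolean {
      return (
        password.length >= 8 &&
        /\d/.test(password) &&
        /[^a-zA-Z0-9]/.test(password)
      );
    }

    function builtFromPersonalFacts(password: string): boolean {
      const lowered = password.toLowerCase();
      return personalFacts.some((fact) => lowered.includes(fact));
    }

    const candidate = "Sophie1987!";
    console.log(passesNaiveComplexityRules(candidate)); // true: looks "strong"
    console.log(builtFromPersonalFacts(candidate));     // true: trivially guessable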

We have to move away from passwords and embrace biometrics or other means (coming back to the GDPR). Devices should be designed to self-contain the data, not leave it loose as in datacenters.

This is a big problem of poor design. I am afraid that AI or 'robots' will have the same problem.

Posted by Matthieu Vergne, Mon, 24/02/2020 - 22:41

Password or not is irrelevant here. There are plenty of other solutions out there. I was merely presenting a simple example.

Posted by Kai Salmela, Mon, 24/02/2020 - 10:25

Hello everyone.

I'm not really sure what the point is around this issue. Would you rather not have instructions for your AI or your car when you need advice? Should there be no regulations when you use your AI or car?

AI is just a tool that we need to learn to use. Granted, it is a very powerful tool, and also a very complicated one, at least today. A tool isn't responsible for you, your children, your employees, or anything else. Responsibilities should stay where they have been so far, until we have a proven conscious AI available. (After that, it is a whole other game.)

You are right that we need to review the responsibilities of our industry, research, services, education, and everything else that may contradict our rights or security. Discussion about this topic has already begun, and your question of who should be leading this development is a good one. Whom would you suggest for this task?

This EU machine of regulations has its advantages as well as its limits, but it is our main means of steering this development. Basically, we all need more education about AI in order to master it and its potential. Not everybody needs to be equal in understanding how this tool works, but we all need to understand how to use it and how to do so safely, just as we all had to learn how to use our computers.

Questions are a good start, offering opinions goes a bit further, and when we reach answers the mapping of the territory is under way. There may be no finish line in this race at all, but at least we accumulate more information on this island of data and extend the shoreline of what we do not yet know.

Posted by Vasco Gonçalves, Mon, 24/02/2020 - 12:12

Well, I agree with you up to a certain point.

OK, the end-consumer needs to learn to use AI, but is she/he willing to do it?

We all know that humans, in general, are not comfortable learning new things. I see it with my students (older than 25) and my clients. Certain people use only one or two computer programs for years in a row, sometimes decades. Many teens are excellent at games and some social networks, but fail in simpler "technical" areas.

The task falls to the designers – they have to simplify the general use of AI as much as possible.

Coming back to the car analogy:

  • How many of you use all the buttons in your car?
  • How many of you know by heart the place of every button?
  • How many of you know by heart all the symbols shown on the dashboard?
  • ...

Despite the huge progress automotive manufacturers have made in this area, a lot remains to be done on many appliances that we use every day, and these have to be standardized. AI is no exception – if not, we will have major problems.

I like these warnings on Linux systems:

  • Respect the privacy of others.
  • Think before you type.
  • With great power comes great responsibility.

Are people heeding them? Not all!

When end-consumers read these warnings, they either shy away from AI or ignore the warnings entirely. Very few will apply them.

In my opinion, AI should be fully "security"-proofed, as some rare IT machines are. Instead of putting the responsibility on the end-consumer, the manufacturers should bear the guilt if something goes wrong – not like today, where the GDPR puts the guilt "solely" on the businesses (of course, some never update their data servers). Very rarely are the hardware and software manufacturers called to account.

Will the same happen with AI systems?

 

Posted by Matthieu Vergne, Mon, 24/02/2020 - 22:50

I completely agree that AI systems are mere tools, and they should be considered as such. When you use a knife in your kitchen, you don't put the responsibility for your wounds on the knife. And if you want to use a chainsaw instead, well, there is nothing I can do for you.

An agenda app is the knife; an AI system is the chainsaw. If some AI systems should be used only by professionals, then regulations should impose a licence on them. For the rest, the user is responsible for what he does with the tool, unless you can prove a malfunction, in which case the manufacturer is responsible. AI systems can be unreliable by nature, like machine-learning systems, in which case either the manufacturer provides guarantees, thus engaging his responsibility, or he doesn't, in which case the user is responsible.

In short, nothing new under the sun.

Posted by Vasco Gonçalves, Mon, 24/02/2020 - 23:33

In this context, I have to disagree with you about not giving responsibility to manufacturers.

On the market there are shoddy, expensive routers (between 300 and 500 EUR), which people buy with confidence – they do not understand the internals of these machines.

Another set of important failures, undisclosed for years in the manufacturers’ hardware, was recently revealed across three big brands – Lenovo, HP, and Dell, among others. Here is a recent article:

"Millions Of Windows And Linux Systems Are Vulnerable To This ‘Hidden’ Cyber Attack"
Forbes

Sometimes we send them friendly reminders, but they ignore them until someone threatens them with a lawyer.

So, I think manufacturers bear a good part of the responsibility for products with shoddy security.

The same responsibility will fall on AI systems. It is up to the manufacturers to anticipate any problem that may arise, because nowadays more and more children use these apps – and, in the future, physical ‘robots’.

Posted by Kai Salmela, Tue, 25/02/2020 - 13:07

 

Yes, I agree with you on this.

Designing AI to be as easy to use as possible is a very important task – maybe even such a tool that we really do not need to think at all? Our cell phones already seem to have algorithms that try to ease the use of the gadget.

In an ideal world we could design every part of an AI in such a manner that everybody could use it without difficulty and in the safest possible way. This should be our goal too. I'm just afraid that people and companies are not committed to that goal, but develop and use AI products as they see fit. That will result in mishaps that could be avoided by regulatory work and by reading guides. I know that not everybody reads the manual or cares about laws, but those need to exist for the ones who do read and obey. They tend to be the ones who drive development, and they also spread the word about the right way to use AI products. If I may refer to the car again: if you get a strange warning on your dashboard and cannot continue, you surely read the manual or ask somebody who knows.

I'm lucky enough to live in Finland, where the general public is educated to understand what AI is and what one can do with it. Not everybody gets it, of course, but most of the public is aware, and now that everybody learns the basics of AI in school, we can expect some responsibility from people in the times ahead. Laws and regulations are highly respected among companies too, so in developing AI products they do their best to stay well within all limits. I'd expect this to be the case in the rest of the EU in the coming years(?).

Discussion around this topic is most welcome, and all who are able to do something for safe and reliable AI through their regulatory, standardization, law-making, or educational work are truly my heroes! They lay the path for us to walk, manufacture, and invent.

Posted by Vasco Gonçalves, Mon, 24/02/2020 - 10:22

I welcome every comment. Nobody needs to delete their comments.

There are no right or wrong comments; we are just discussing certain possibilities.

Posted by Vasco Gonçalves, Mon, 02/03/2020 - 09:27

Let’s take the hypothesis that someone, or some company, already has a fully functional general AI (what specialists would call the ‘singularity’). We know that the EU Commission is in its baby shoes regarding the ethics of AI – my guess is that it will take between 5 and 10 years before a viable ethics code is out and refined.


With such an AI, our world would be a better place to “live” – or would it:

  • What about unemployment? Could people be retrained for AI-related jobs, or even other jobs?
  • Throughout history, societies have been, and still are, biased. Could an AI be trained to be neutral? If so, what would the determining factors be? We live among many different cultures in Europe (not to speak of the rest of the world, where the contrast is even greater).
  • How do we secure AI against black-hat hackers, other criminals, and even a ‘rogue’ military?
    Read this article: How to control the Artificial Intelligence? LinkedIn
  • Coming back to the ‘singularity’:
    - Would humans have the right to disconnect or kill a sentient human being?
    - Then, what about sentient androids or ‘robots’ – would humans have the right to disconnect them?

 

How would you answer these questions?
 

Posted by Kai Salmela, Mon, 02/03/2020 - 08:26

Hello everyone.

This is an interesting dilemma: what are our rights over a sentient mechanical/electrical being? Granted, the day when one exists is far in the future as we see it today, but we should form our opinions and values for that day already.

I'd like to approach these dilemmas through some questions, to establish a baseline:

- How do we recognize a sentient being today? As far as I know, there is no good definition for that either. For some reason we have tried to draw a line between humans and other living creatures and define that as our divine concept of sentience. Whenever we have devised tests to prove this, we have failed, since more and more animals pass our tests as well, if we are fair in our judgement. We just haven't been observing enough, and it is quite likely that we wouldn't want to learn that an AI is sentient.

- How do we change our behaviour if we can recognize a sentient machine? Our record is less than perfect on this point, even with other human beings, and it may not be a good one with machines of the sentient kind either. If we can deduce that there is a sentient AI, do we want to accept that, even if it has only very basic rights? How much would money weigh in our opinion? It might boil down to this: we will not respect a sentient AI unless we have to, for compulsory reasons.

- How would we regulate a sentient AI? Every human in this world lives under a certain package of laws, regulations, and behavioural rules, which are enforced in order to ensure a safe, working society. If and when there is a learning, sentient AI, it cannot be under the same rules as humans. The most basic way to steer human behaviour is to reward and punish, which you will find at the base of every legal system we have today. How is this applicable to an AI system? Does an AI have basic needs that it recognizes and tries to secure for itself? Can we tinker with an AI's code once we have recognized it as a sentient being, or do we have only external means of correcting it? If the latter, should we build senses such as pain into it from the beginning? Can an AI even reach sentience without the full set of senses a human has?

I think these are only a fraction of the questions we should go through, and they are as much a journey into humankind as into resolving the dilemma of the sentient AI system we might have in the future.

I hope to get some interesting opinions on this issue.

 

wbr. Kai Salmela

Posted by Vasco Gonçalves, Mon, 02/03/2020 - 20:26

Kai: How do we recognize a sentient being today?

Vasco:
Animals, like humans, are sentient beings. They experience emotions; there is a certain degree of “intelligence” (humans being at the peak), awareness, …
The basic building blocks for a basic sentient android already exist, but not for one as fully functional and intelligent as a human (there is still some way to go).

 

Kai: How do we change our behavior if we can recognize a sentient machine?

Vasco:
The behavior will be the same as history shows; generally speaking, “humans are killers” (whether to protect themselves, to protect their families, or, for some, going so far as to protect their material assets). So, what difference does it make if we recognize a sentient android? Do we feel threatened?
Regarding basic rights, many nations are already discussing taxing companies that use robots, in order to redistribute the proceeds to those who lost their jobs to a robot (whether the redistribution would be honest is off topic).
Sentient androids need repair, maintenance, … It would only be fair for them to be compensated. Or what?

 

Kai: How would we regulate the sentient AI? 

Vasco:
A sentient being, e.g. an animal or a human, learns from its ‘parents’. Granted, these beings are already preprogrammed to act in this direction, e.g. to love, to have a conscience, … (of course, these traits must be developed, we hope in the right way).
Imperfect humans behave imperfectly to a certain extent but have a conscience that tells them what is right and wrong. Everything beyond that is called ‘wickedness’. When they behave wrongly, what right do we have to tinker with their DNA, so to speak, to correct their behavior? I think the same would apply to androids, too.
What is the difference between a cyborg (a mechanized human, e.g. 90% mechanized for whatever reason) and a sentient android?