Let’s talk about AI

by Gry Hasselbalch, co-founder DataEthics.eu, member of the AI High Level Expert Group

The way we talk about AI limits what we think we can do with it. If we want AI that benefits human evolution, we need a way of talking about it that respects our human values. 

AI is everywhere. And nowhere. Because what do we actually mean when we talk about AI? Is it a sophisticated upgrade of our outdated human software? Is it a sci-fi scenario in which a machine beyond human control outcompetes humankind? Or is it a commercial trade secret? 

Words are powerful, and as abstract as they might sometimes seem, they have real consequences. Real laws are implemented based on the particulars of language, real business decisions are made, and real people’s lives are affected by the specific uses of words and the worlds they portray. Evidently, the way we talk about AI defines what we think we can do with it and ask from it. Here are a few musings on AI:

Ray Kurzweil, founder of the singularity movement – humans are machines: “Biology is a software process. Our bodies are made up of trillions of cells, each governed by this process. You and I are walking around with outdated software running in our bodies, which evolved in a very different era.” (2013)

The late scientist Stephen Hawking – AI is a free agent: “The development of full artificial intelligence could spell the end of the human race (...) It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” (2014)

The co-founder of Google, Larry Page – AI is a (Google) service: “Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.” (2000)

These are kaleidoscopic views on AI, and they are representative of common ways of describing AI development today. But they do not address the core issues at stake in this debate: What does it mean to be human? What role should technologies play in human evolution? What do we want AI to do for humanity? 

One might even argue that they describe AI in a way that clouds our judgement and limits us as humans in what we think we can do with AI: 

If humans are just software, then of course we need an update. All software does. Doesn’t it? Say no more.

If machines are our superiors, then it is already too late. We are doomed. Let it go. 

If AI is just one company’s great business adventure (a better search engine, a smarter health care solution etc.), then it is also the greatest trade secret. So keep your nose out of it. 

These ways of describing AI leave us powerless. Before we can move on to a constructive discussion of the ethical implications of AI, we need to choose our words with care. Let’s start from this:

Let’s respect ourselves for what we are – humans with specific qualities (not predictable software). We have will, creativity, unpredictability, intuition, consciousness. (The day science has managed to understand these human qualities fully, we can start talking about replicating them.)

Let’s approach AI as manmade data processing systems that can be managed and directed. Not as an uncontrollable free agent. 

And lastly, AI is a shared good in society. It is not a trade secret, nor one company’s success and property. 

And then there is a myriad of things we might ask of the development of AI, as individual human beings and as human communities. For example:

We could think of innovation as a human endeavor, build technologies that extend human agency, and design built-in means for individuals to influence and determine the values, rules and inputs that guide the system. 

We can think outside the box of profit-oriented AI to the development of non-profit AI, AI built for social good or even AI designed, owned and controlled by citizens. 

We can think of a plethora of approaches to the governance of AI that respond to many interrelated components and human and non-human factors – hardware, software, laws, standards, people, education – representing a complexity of interests. 

But we can also create laws that address and support the distinction between human and non-human actors – for example, that an AI system should always make itself known as an AI agent. 

And we can even identify areas of human evolution where AI should play no role (the “red lines”). 

Tags
AI, data ethics, AI ethics

Comments

Submitted by Norbert JASTROCH, Tue, 09/10/2018 - 17:42

Dear Madam,

thank you very much for this contribution.

It is necessary to reiterate, again and again, that AI is not the physicalisation of intelligent life.

The materialist (or: physicalist) way of thinking and talking about the world is a reductionist program, putting out of view everything that is non-materialist, and building models of the world that are strictly physical. By that, it reduces our possibilities to understand (or to understand that we do not understand) essential phenomena in the world, and our capabilities to adapt to the world that constitutes our living environment. 

When joining this AI Alliance, I was hoping to be able to contribute to a more adequate way of dealing with the opportunities AI technology offers, and not to the – imagined – building of the perfect machine.

I still do so.

Norbert Jastroch

In reply to Norbert JASTROCH

Submitted by Gry Hasselbalch, Tue, 09/10/2018 - 21:37

As Henri Bergson put it in Creative Evolution, 1911: "In vain we force the living into this or that one of our molds. All the molds crack. They are too narrow, above all too rigid, for what we try to put into them." 

Submitted by Kai Salmela, Wed, 10/10/2018 - 09:57

Thank You for this article.

I think we're still far too distant from creating a sentient being with AI. Maybe when quantum technology has been around long enough, but who knows. Until then, AI is just another tool that we need to govern and use rightly.

We can and should form a set of rules for how we use this technology, even though war machinery is already using AI for its own purposes.

On the civilian side, I'd like to suggest that we seek standardization of the full AI stack in order to make AI more affordable for everyone and get development rolling at a faster pace. The AI Alliance can be the perfect platform to seek out and bring together the best minds of Europe (and why not globally) in order to form better, leading AI.

wbr Kai Salmela
