The concept of time for AI

Artificial intelligence relies on historical data to function properly, much as we humans do. Therefore, individual AI programmes need a shared concept of time, internationally agreed between machinery, IoT-powered devices, AI systems, and other event-based technologies.

Until now, each machine has carried its own log, with no access to other logs and no way to compare them. How, then, can a machine, or even a human, be sure that the information the machine carries is accurate?

While studying distributed ledger technologies (DLT), commonly referred to as blockchain technology, I came up with the idea of constructing a ledger that every "smart" device, computer, AI system, and so on could access and use simultaneously.

The general public often mistakes blockchain for Bitcoin, but if you study what this innovative field brings with it, you can see how it could be of help.

Each block on a blockchain carries a timestamp and a specific amount of intelligence. Once a block is created, its contents are irreversible and undeniable, a fixed reference point in the "past". Anything that happens from that point on requires the creation of further blocks, which behave just like the first: whatever we store in the second block will be timestamped and remain there forever. It cannot be altered or deleted, manipulated or ignored. Does that sound familiar?
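To make the mechanism concrete, here is a minimal toy sketch of that chaining idea. The block structure and field names are my own illustration, not any particular blockchain's format: each block's hash covers its timestamp, its data, and the hash of the previous block, so rewriting the "past" breaks every link after it.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's timestamp, data, and link to the previous block."""
    payload = json.dumps(
        {k: block[k] for k in ("timestamp", "data", "prev_hash")},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    """Create a timestamped block chained to its predecessor."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

# Build a tiny chain: a genesis block, then a successor pointing at it.
genesis = make_block("genesis", prev_hash="0" * 64)
second = make_block("AI observation #1", prev_hash=genesis["hash"])

# Tampering with the first block breaks the link stored in the second.
tampered = dict(genesis, data="rewritten history")
assert block_hash(tampered) != second["prev_hash"]
```

The point of the sketch is the last assertion: the "past" block can be copied and edited, but the edit no longer matches the hash the next block committed to.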

This is exactly the concept we have long followed for what we call "Time". What we refer to as the "past" is a series of events we can rely upon when recalling historical data, but we cannot alter, manipulate, or permanently delete what is stored there (except in the case of serious injury to the organic processor). And if we want to make changes or new choices, we must create new blocks; in short, plan for the future, which will in turn eventually become a block of the past.

I strongly believe that we should introduce the concept of blockchain to AI: not the way we see blockchain, but the way we see time.

If we manage to create a single blockchain that stores every memory and every move of every AI, we will soon be able to monitor and observe the history of our own evolution.

Example: say we have ten different AIs on the same blockchain. Each has its own intelligence and operational capabilities, yet they all recognize one another, since they share a common network that could be regarded as their version of "Time".

From the very first moments we could see which AIs create links with which others, and why. AI1 might share its intel because it somehow profits from AI2's intel, and vice versa; AI3 would not share its intel with AI4, as it gets nothing in return, while AI4's intel is profitable for AI5, and so on.

Within minutes, we would see the first alliances, territories, and wars between different AI systems, something it took us millions of years to begin to understand, and we are still working on it. We could analyze and understand individual choices we made in the past, and even predict the future if the machine surpasses our timeline in its own analogous measure.
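The alliance logic of the thought experiment above can be sketched as a toy model. Everything here is invented for illustration — the agent names AI1 to AI5 and the profit values are hypothetical — the only rule, taken from the scenario, is that a link forms when both sides profit from the exchange.

```python
# Hypothetical pairwise profit from receiving the other agent's intel.
# (AI3 -> AI4 yields nothing, so that link should not form.)
PROFIT = {
    ("AI1", "AI2"): 3, ("AI2", "AI1"): 2,   # mutual benefit
    ("AI3", "AI4"): 0, ("AI4", "AI3"): 1,   # one-sided only
    ("AI4", "AI5"): 2, ("AI5", "AI4"): 2,   # mutual benefit
}

def links(profit):
    """Return the unordered pairs of agents where BOTH sides profit."""
    agents = {a for pair in profit for a in pair}
    return {
        (a, b)
        for a in agents
        for b in agents
        if a < b and profit.get((a, b), 0) > 0 and profit.get((b, a), 0) > 0
    }

print(sorted(links(PROFIT)))  # -> [('AI1', 'AI2'), ('AI4', 'AI5')]
```

Even this tiny rule already produces the "first alliances" the scenario describes; observing how such links form and dissolve over time on a shared ledger is the monitoring idea of the post.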

 

I would be happy to hear your thoughts on this. How do you think machinery could perceive time?

Thanks in advance.

V

Tags
blockchain AI time concept futurism dlt

Comments

Submitted by Elio PENNISI on Tue, 06/19/2018 - 15:45

Interesting, but that would imply that several AI systems pool their knowledge. Since AI machines run on proprietary algorithms whose information is an asset, do you still think your blockchain idea could be viable?

In reply to Elio PENNISI

Submitted by Vladimiros Pei… on Tue, 06/19/2018 - 15:53

I believe that AI, like IoT, will eventually be regulated and matured under international standards. The EU-C already has several working groups on the matter. Take MIDI (1982), for example: major musical instrument manufacturers were eventually forced to work together to create a basic "language" understood and used by every kind of musical instrument carrying some sort of digital interface. AI and IoT systems may seem "private" and "monopolistic" at the moment, but for the sake of the macroeconomy, which depends on faster, more accurate, and more reliable exchanges of information and micro-payments, global industry leaders will be "forced" to comply for their own good.

Submitted by Norbert JASTROCH on Tue, 06/19/2018 - 18:23

With a group of children we did the following exercise.

They prepared a wooden board so that identical coloured wooden sticks could be applied to it to form the digit figures 0 to 9. They were advised to exchange the wooden sticks every minute to show the actual time (hour and minute).

Now the question: Where is time in this setting, and does the setting, as a whole or in its parts, perceive time?

Now imagine that a picture of the whole setting is recorded and stored every minute. The series of pictures obviously forms something like the past. Can it be analyzed to understand the past, so that the future can be predicted with respect to "which of the kids will apply the next stick to the board?"

 

In reply to Norbert JASTROCH

Submitted by Vladimiros Pei… on Tue, 06/19/2018 - 19:21

A very interesting experiment, sir; thank you for posing it.

Not only am I certain about the predictive possibilities underlying the complex processing mechanisms we are currently building, but I am also convinced this was the initial reason why hundreds of billions are 'thrown' each year into analytics and hi-tech R&D programmes.

Unlike us, a machine can recall billions of memories or timestamps at once, which would allow sophisticated pattern recognition to take shape. Of course, machines won't be fed all our historical data (raw or processed) on the first attempt, but small experiments, like monitoring the life cycle of a plant, could be a good start.

As the plant grows, it will first spread a leaf on the left side, then two leaves on the right, and it will eventually reach a point from which we could confidently predict its future behaviour, thanks to the concept of fractals (which, by the way, was the main reason we even have the chance to build supercomputers, let alone use them) and to Fibonacci or Fibonacci-like patterns.

After a specific period, the plant repeats the process we saw at the beginning, and from the first confirmed loop onward, we can predict, or even manipulate, how the plant will keep growing.
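The "first confirmed loop" idea can be sketched as a tiny piece of pattern recognition. This is a minimal illustration under my own assumptions: growth events are encoded as symbols (here a hypothetical "L" for a leaf on the left, "R" for a leaf on the right), and we look for the shortest cycle the observed sequence keeps repeating.

```python
def first_loop(events):
    """Return the shortest prefix the sequence keeps repeating,
    or None if no full repetition has been observed yet."""
    n = len(events)
    for period in range(1, n // 2 + 1):
        if all(events[i] == events[i % period] for i in range(n)):
            return events[:period]
    return None

# Growth log encoded as symbols: L = leaf on the left, R = leaf on the right.
growth = ["L", "R", "R", "L", "R", "R", "L", "R", "R"]
print(first_loop(growth))  # -> ['L', 'R', 'R']
```

Once the loop is confirmed, predicting the plant's "future" is just replaying the cycle; this is the same move the comment imagines an AI making over our own, much longer, timeline.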

We may not understand all the data we carry, from ancient manuscripts to texts that make absolutely no sense to us. (That is why we baptized them "mythology": not because they are fiction, but simply because we cannot compare or confirm any of the writings against the current state of what we call "real life".) Sophisticated AI systems, however, could easily make their own sense of it, finding patterns, loops, and other checkpoints in our timeline that we may have skipped. Ancient Greek philosophers may be difficult to understand, but what if understanding them is not the point?

I believe that future AI systems will manage to read, analyze, and organize data so quickly and so efficiently that we will start seeing the first loops in our own tiny universe (if it hasn't happened already and we are simply not yet aware of it), as the respective AI experiences the full process the same way we experience the growth of a plant.

Millions of years of movement, thousands of years of monitoring and recording, hundreds of years of preparation for the next level of self-sustaining processors, and finally a couple of decades of top-grade analysis will give us the answers we have been seeking for thousands of years already.

Giambattista Vico, in his book La Scienza Nuova, states that history is based on words we invented and used while attempting to explain history, words that would make absolutely no sense to us nowadays, and vice versa. Some might call him too much of a constructivist, but logically he makes far more sense than those certain that a bearded man behind the clouds invented everything.

From ancient Greece we have concepts and words like "oracles", which we consider "mythological" creations, or concepts that don't even exist, even though for some unknown reason we have kept carrying this information for thousands, in some cases tens of thousands, of years.

An oracle doesn't make sense to us, but what if it makes sense to a machine?

In ancient Greece there was Delphi, a spiritual place visited by the kings and leaders of the era, seeking advice and even accurate predictions of what was to come. We find it silly that a man would talk to a wall seeking assistance, while we find it completely normal to talk to IBM's Watson seeking assistance. That double standard is funny at best, if not a shame, since it is the most powerful limiter of our capacity to comprehend.

I am looking forward to seeing next-generation AI systems, decentralized intelligence, and autonomous e-governments at work, and I have high hopes for possible breakthroughs carried out by AI systems, some of which we could never have conceived to begin with.

In reply to Vladimiros Pei…

Submitted by Norbert JASTROCH on Tue, 06/19/2018 - 20:18

In any case, we first need to develop a sound concept of, for instance, "pattern recognition", and then an agreed understanding of the terminology used (e.g. prediction vs. necessity vs. probability, physical time vs. modular time, etc.).

In ancient Greece there were also people who, for example, not only discovered the pattern of the right-angled triangle but were able to prove that the sum of the squares of the two legs equals the square of the hypotenuse. I wonder how a machine would manage to do something like that.

Being enthusiastic about the potential of AI is one side of the coin; being critical (in the scientific sense) of its limitations is the other. Sustaining AI should build on both.