Can Regulations Ensure Trustworthiness & Keep AI Loyal?

Artificial Intelligence (AI) has become one of the most widely discussed technologies of the last half-decade, and these days it seems to be everywhere in the technology press. Why? Because it offers practical solutions to many long-standing technical problems across application fields ranging from computer vision and speech processing to medical imaging and financial services. Across these fields, AI has established the basis for significant recent innovations, including autonomous driving, connected healthcare services, and many other emerging markets.

But no one can deny that AI has its darker sides. It is well known that certain kinds of AI, machine learning systems for example, are susceptible to adversarial attacks and can sometimes be easily fooled; deep-fake technology means we can no longer trust our eyes; Tay, a Microsoft chatbot designed to post on Twitter, learned in less than a day to post tweets so inflammatory and offensive that it had to be taken offline. The list goes on.

Most of today’s high-profile AI success stories are based on machine learning, and most notably on deep neural networks – a technology originally explored in the 1980s that only became practical when modern GPUs (Graphical Processing Units) were applied to training these networks. The availability of greater computational power and large datasets enabled a rapid evolution of neural networks – a technology inspired by the way neurons are inter-linked in the visual cortex of the brain.

Typical machine learning algorithms are built and tested in two stages: first, a learning or training stage, in which the model is tuned to respond to a large, representative set of input data samples; and second, an execution or inference stage, in which the model is deployed in a system or product to make decisions on previously unseen data samples. When training is done offline (e.g. in the lab), we refer to this as ‘offline’ AI. When training and inference happen at the same time (during normal operation), as in reinforcement learning, we refer to this as ‘online’ AI. It is here that many of the issues surrounding the trustworthiness of AI arise.
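
The two stages can be sketched in a few lines of code. The example below is a deliberately tiny, hypothetical 1-D threshold classifier – not a real neural network – chosen only to make the train/deploy split concrete:

```python
# Minimal sketch of the two stages, using a toy 1-D threshold classifier
# (illustrative only; production systems use neural-network frameworks).

def train(samples):
    """Training stage: tune the model (here, a single threshold) offline."""
    positives = [x for x, label in samples if label == 1]
    negatives = [x for x, label in samples if label == 0]
    # Place the decision threshold halfway between the two class means.
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

def infer(model, x):
    """Inference stage: the frozen model decides on previously unseen data."""
    return 1 if x >= model else 0

training_data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
model = train(training_data)   # offline: done in the lab
print(infer(model, 0.75))      # deployed: prints 1 for an unseen sample
```

In offline AI the model is frozen after `train()`; in online AI, updates of the same kind would continue to run during deployment.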

In both types of AI, the ‘real-world’ data that the algorithm interprets is often highly variable, so the algorithm can be exposed to data samples that trigger an incorrect response – false positives or false negatives, if one is working in a classification setting. Of course, any machine learning solution is susceptible to such errors. More conventional solutions, however, are built from understandable building blocks designed according to well-established and robust engineering techniques and practices, which makes them easier to understand, control, and subject to regulatory guidance. When such a system produces an ‘error’, it is possible to examine each of its building blocks, determine why that error occurred, and respond appropriately.
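
As a quick illustration of the two error types, the sketch below tallies them from a hypothetical list of predictions and ground-truth labels:

```python
# Hedged sketch: counting false positives and false negatives by comparing
# a classifier's binary predictions against ground-truth labels.

def confusion_counts(predictions, labels):
    """Return (false positives, false negatives) for 0/1 predictions."""
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    return fp, fn

preds = [1, 0, 1, 1, 0]
truth = [1, 0, 0, 1, 1]
fp, fn = confusion_counts(preds, truth)
print(fp, fn)  # prints: 1 1 (one false positive, one false negative)
```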

With AI based on neural networks this is not so easy, for two reasons. First, these neural structures are so complex that they cannot be broken down into easily understandable building blocks that can be individually regulated; hence they are often referred to as ‘black boxes’. Second, these networks are highly dependent on the data used to train them: if that data is not sufficiently representative of the real-world data the algorithm is asked to interpret, it will not perform correctly. Online AI aims to solve these problems by taking feedback from operational results (e.g. indications of incorrect decisions) and learning not to repeat operational errors. However, such online training relies on feedback that may itself be unreliable, which can lead to unexpected operational behavior (e.g. learning to replicate poor human judgements and operator errors) or, worse, to overtly bad behavior learned from a malicious environment (as happened to Microsoft’s Tay bot on social media).
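
The failure mode behind the Tay incident can be caricatured in a few lines. The toy ‘bot’ below (a hypothetical illustration, not any real chatbot) simply learns to repeat whatever it has seen most often – so a coordinated flood of hostile input immediately dominates its behavior:

```python
from collections import Counter

class ParrotBot:
    """Toy online learner: replies with the phrase it has seen most often."""

    def __init__(self):
        self.seen = Counter()

    def learn(self, phrase):
        self.seen[phrase] += 1  # online update from every interaction

    def reply(self):
        return self.seen.most_common(1)[0][0]

bot = ParrotBot()
for phrase in ["hello", "hello", "nice day"]:
    bot.learn(phrase)          # benign interactions
for _ in range(5):
    bot.learn("hostile phrase")  # malicious flood outweighs them
print(bot.reply())             # prints: hostile phrase
```

Real systems are far more sophisticated, but the structural risk is the same: an online learner has no built-in notion of which feedback deserves to shape its behavior.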

In this respect, AI systems are very much like young children – they show an incredible capacity for learning, but if exposed to the wrong environments and stimuli they will quickly learn improper and unethical responses. The increasing availability of powerful software tools for AI training and deployment increases the risk of AI being exposed to, and learning from, such environments.

At the Imaging Division of Xperi (formerly FotoNation Ltd.), neural-network-based Artificial Intelligence is very much at the heart of our current and future core products and services. Our engineering teams have a long tradition of disruptive innovation in the field of computational imaging. The company pioneered computational imaging solutions for the digital camera industry, partnering with Nikon to introduce the first ‘in-device’ solutions for instant red-eye removal, face detection & tracking, sweep panorama, image stabilization, smile shutter, and blink prevention – core features that created today’s consumer media experiences on smartphones.

Today our teams are focused on building the enabling technologies for the next generation of consumer devices – technologies that will enable extraordinary experiences in everyone’s lives. Our engineers work daily to develop and evolve advanced neural-network-based offline AI solutions that deliver safer vehicles (Driver Monitoring Systems), better device-level data privacy, more sophisticated and accessible connected home security, and end-to-end control of personal media and content. Better safety, enhanced data privacy, more immersive home entertainment experiences, and connected wellness solutions are at the heart of our future technology roadmap and part of our corporate philosophy. Artificial Intelligence is central to this vision.

Our engineers are very conscious of the challenges of AI – the potential for one-in-10-million or one-in-100-million (‘black swan’) events that cause an AI to make an incorrect decision. But it is important to realize that today’s digital imaging applies hundreds of filters to individual frames of a video stream acquired at 60 frames per second; at that scale, we see ‘black swans’ daily. In fact, best practice in the testing and validation of production-ready neural networks revolves around stringent quality assurance processes, which are central to delivering reliable and robust AI for broad consumer deployments.
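
The back-of-envelope arithmetic behind ‘black swans daily’ is easy to check. The figures below are assumptions for illustration (60 fps, roughly 100 filter applications per frame, a full day of video):

```python
# Assumed figures: 60 fps video, ~100 filter applications per frame, 24 h.
frames_per_day = 60 * 60 * 60 * 24      # 60 fps * seconds in a day
events_per_day = frames_per_day * 100   # filter applications per day
print(events_per_day)                   # prints: 518400000
```

At over 5 × 10^8 filter applications per day, a one-in-100-million failure mode is expected to surface several times daily – which is why stringent quality assurance is unavoidable.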

A second core theme of best practice is the importance of data. To customize a neural inference architecture for optimal performance, it is important to be able to tailor both training and validation datasets through advanced data augmentation, the generation of complementary synthetic data samples, and at times the building of 3D digital twins that enable multi-view 2D perspectives on reality. Such advanced data tools make our solutions robust, though many aspects of these technologies are kept as trade secrets. Data acquisition is expensive, and building training datasets with accurate ground truths even more so, but both are critical to our success, measured in terms of the resilience and robustness of our AI solutions in practical use cases.
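
The general idea of data augmentation can be sketched generically – the example below perturbs 1-D samples with random gain and noise to enlarge a training set. These are standard, public techniques chosen for illustration, not a description of any proprietary pipeline:

```python
import random

# Generic data-augmentation sketch: enlarge a training set by creating
# randomly perturbed copies of each sample (random gain + additive noise).

def augment(sample, rng, noise=0.01, scale_range=(0.9, 1.1)):
    """Return a perturbed copy of one sample."""
    gain = rng.uniform(*scale_range)
    return [gain * v + rng.gauss(0.0, noise) for v in sample]

def build_augmented_set(samples, copies=3, seed=42):
    """Keep the originals and add `copies` perturbed variants of each."""
    rng = random.Random(seed)
    out = list(samples)
    for s in samples:
        out.extend(augment(s, rng) for _ in range(copies))
    return out

base = [[0.2, 0.4, 0.6]]
dataset = build_augmented_set(base)
print(len(dataset))  # prints: 4 (1 original + 3 augmented copies)
```

For images the perturbations would instead be crops, flips, lighting changes and the like, but the principle – many cheap variants per expensive ground-truth sample – is the same.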

The corporate reputation of Xperi relies on this combination of advanced test and validation methodologies with state-of-the-art data curation, augmentation, and generation toolkits. These tools represent many hundreds of engineer-years and many millions of euros of investment. Most importantly, they stand for our corporate commitment to quality, reliability, and trustworthiness.

In a nutshell, and in common with generations of innovative technologies, it is the engineering culture from which an AI originates that must be trustworthy if consumers are to have confidence in AI and its many applications. Offline AI is already regulated sufficiently via the quality assurance requirements of the markets in which it is deployed, and companies that prove themselves to support a strong culture of trustworthiness will succeed in those markets.

For online AI the situation is somewhat different: as discussed, it is more difficult to constrain how an AI learns in unsupervised or semi-supervised environments such as social media. In such cases there can be huge benefits, but there are also significant risks that incorrect responses and patterns of behavior will be learned. Online learning therefore requires carefully considered codes of practice and guidance to help evaluate and mitigate the associated risks. Deployments of online-learning AIs should be carefully considered to determine how best to guide and regulate their use without hindering innovation and progress in either the online or the offline world.