AI for security and privacy

Yes, that’s right: Artificial Intelligence can be your friend when it comes to security and privacy. There is an intensifying debate about AI as a technology that enables privacy intrusion at an unprecedented scale. At the same time, AI-enabled solutions also create new exploitation challenges for system security. A third concern is the amount of training data that AI solutions require: in many cases they are built on large volumes of personal data, posing even more challenges for people’s privacy.

Much has been said about how we all need to let go of our privacy to harvest the benefits of AI. It has even been argued that Europe’s new and strict data protection rules undermine AI innovation. But a new trend is emerging: using AI to improve security and to maintain people’s privacy.

A range of innovative and often young companies use machine learning or intelligent pattern-recognition technologies to detect security threats in computer systems or to help keep personal data private. As an example, the Austrian start-up mostly.ai uses deep-learning neural networks to anonymize data: it creates synthetic, non-personal data sets whose statistical properties are similar to those of the original, personal data. The synthetic data thus retains the valuable information that most applications would otherwise extract from personal data.
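To make the idea concrete, here is a minimal sketch of synthetic-data anonymization in Python. It is not mostly.ai’s deep-learning approach; it only fits a simple statistical model (a multivariate Gaussian) to an invented “personal” data set and samples artificial records from it, and all column names and numbers are hypothetical and chosen purely for illustration.

```python
# Toy illustration of the synthetic-data idea (not mostly.ai's method):
# fit a simple statistical model to a hypothetical personal data set and
# sample new, artificial records that preserve its aggregate statistics.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "personal" data: age, income, weekly purchases for 1,000 people.
original = np.column_stack([
    rng.normal(45, 12, 1000),         # age
    rng.normal(38_000, 9_000, 1000),  # income
    rng.poisson(3, 1000),             # purchases per week
])

# Estimate the joint statistics of the original data ...
mean = original.mean(axis=0)
cov = np.cov(original, rowvar=False)

# ... and sample brand-new records from that model. No row corresponds to a
# real person, but means and correlations are approximately preserved.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("original means: ", np.round(mean, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
print("max correlation gap:", np.round(
    np.abs(np.corrcoef(original, rowvar=False)
           - np.corrcoef(synthetic, rowvar=False)).max(), 3))
```

Real synthetic-data generators rely on deep generative models to capture far richer, non-linear structure than this toy Gaussian fit, but the principle is the same: release artificial records that keep the statistics while no individual record maps to a real person.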

Companies like mostly.ai even benefit from new, strict privacy regulation. Several jurisdictions around the globe are taking inspiration from Europe’s privacy rules, thereby also pushing innovation in privacy-preserving technologies. In parallel, increasing concerns about AI security have created new research and innovation challenges for researchers in the AI and security fields. These developments demonstrate how innovation and policy can mutually stimulate each other. Most importantly, they show that we should not see ourselves as victims of technology development, but rather take the opportunity to influence the design of new technologies.

We discussed these topics at a conference on the role of AI for privacy and security in Singapore on April 9: https://www.epicproject.eu/index.php?id=106. The event was organized by the EPIC project to further EU-Singapore IT cooperation. You can find an event summary on CORDIS: https://cordis.europa.eu/event/rcn/147022/en

Tags
Artificial Intelligence privacy security AI international collaboration Singapore New Zealand

Comments

Submitted by Constantinos T… on Fri, 24/05/2019 - 13:35

This is an open debate, Erich, one that is going to last for years to come.

I wonder whether AI can actually be trusted (today) as a catalyst for security and privacy.

In fact, I wonder if AI can ever be trusted. Let's not forget that AI is a machine created by humans. The algorithms can be manipulated by humans, for good and for bad.

 

It's something of an oxymoron to discuss the trustworthiness of AI. We are not after "regulating" artificial intelligence itself, but its creators.