AI: why corporate resistance is a blessing in disguise

By Loubna Bouarfa, Member of the High-Level Expert Group on AI

 

As a machine learning scientist and CEO of a leading AI company, I have devoted my working life to building systems in which artificial intelligence can improve human lives. AI's ability to process large quantities of data goes far beyond that of any human expert. From enabling personalised, evidence-based healthcare for patients and better foster care for children to supporting complex medical research, AI has the potential to transform our society for the better.

 

Worldwide discussions on the ethical principles of AI are now taking flight; most are driven by moral and social resistance to the impact of this technology. I find that this resistance has great potential. We must embrace it as a wake-up call, treating it not as a direct cause for alarm but as a valuable societal warning sign.

 

In a series of blog posts, I am sharing my perspectives on the different resisting groups. In the first blog I discussed the cultural resisting group; in this blog I address resistance to AI in the corporate landscape.

Corporate resistance

In the corporate landscape, only 25% of companies in Europe are adopting AI, which means 75% of companies are facing barriers to adoption. As a vendor in this space, I have witnessed corporate resistance to AI mainly among middle and top management. So, what are their main barriers?

 

Intelligibility

The corporate world lacks a thorough understanding of AI and how it differs from traditional software. AI shifts the software development workflow and requires wider access to corporate and third-party data, which is a major barrier to adoption.

 

Traditionally, developers have written software as a sequence of hard-coded rules, such as: if (condition) {Action 1} else {Action 2}, or for (value in sequence) {Action}. The human instructs the machine, line by line. Once the software is developed, it can run on any computer. This type of software does not learn from data, nor does it update itself or adapt to changing environments. By contrast, AI software updates itself automatically: it learns from corporate data and cannot simply be installed on a computer. For an AI system to work, company leaders must share their data to train a model in the first place, and keep feeding it data for continuous updates.
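The contrast above can be sketched in a few lines of code. This is a deliberately minimal illustration, not any vendor's actual system: the scenario (approving expense claims) and all names and data are invented for the example.

```python
# 1) Traditional software: the human hard-codes the decision rule.
def approve_claim_rule_based(amount):
    # The logic is written explicitly, line by line, and never changes.
    if amount <= 1000:
        return "approve"
    return "review"

# 2) AI-style software: the rule is *learned* from corporate data
#    and can be re-trained whenever new data arrives.
def train_threshold(history):
    """Learn a cutoff from past (amount, decision) pairs.

    A deliberately simple 'model': the midpoint between the highest
    approved amount and the lowest reviewed amount seen so far.
    """
    approved = [a for a, d in history if d == "approve"]
    reviewed = [a for a, d in history if d == "review"]
    return (max(approved) + min(reviewed)) / 2

def approve_claim_learned(amount, threshold):
    return "approve" if amount <= threshold else "review"

# The learned rule cannot exist without data to train on...
history = [(200, "approve"), (800, "approve"),
           (1500, "review"), (3000, "review")]
threshold = train_threshold(history)

# ...and keeping it current is just re-training on more data.
history.append((1200, "approve"))
threshold = train_threshold(history)  # the learned cutoff shifts
```

The point of the sketch is the dependency, not the model: the rule-based function works anywhere as-is, while the learned one requires a continuous supply of the company's own data.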

 

This is a big shift in mindset for corporate business. Having to open your books to AI vendors and share your data to build this software can be a key deterrent, especially in the uptake stage.

 

Risk aversion

Heavily regulated industries are generally more reluctant to innovate. Such industries are quick to identify risks, such as a potential loss of control when sharing data in the cloud, or the risk of analytical mistakes resulting from AI output.

 

However, while this deters them from early adoption, they tend to follow quickly once they see success stories around them. Corporate resistance is far from absolute. In my work as CEO of OKRA Technologies, an enabler of AI in businesses, I see early adopters from traditionally risk-averse groups who decide to invest in AI innovation. These individuals are aware of the benefits and risks surrounding AI, but nonetheless choose to take a calculated risk to meet business needs. They often have a personal preference for innovation and act as industry leaders.

 

Organisational structure

 

For middle managers, AI signals a threat to the organisational structure. AI promises to shrink the number of direct reports, which may flatten the hierarchy and affect a manager's sense of pride and importance. We should not underestimate the extent to which resistance can be emotional, or emotional-political.

 

Vested interests

It cannot be ignored that many corporates stand to lose money, influence or relevance as the use of AI increases. For example, certain organisations may obstruct the use of AI by preventing the spread of their own large proprietary data sets. Conventional consultancies may emphasise the traditional values of bespoke consulting, stressing the human personalisation of their delivery. Traditional software businesses will underline the importance of rule-based software that needs no massive data sharing, and of rules that can be designed by humans.

How to reduce the risk of adoption:

  • The groups above highlight that AI is currently difficult to adopt in the B2B (business-to-business) context. To increase trust in AI, we need to create sandboxing frameworks for testing new AI-driven technology, and for the regulatory testing of new business models, as is the case in the fintech industry. This requires collaboration at the ecosystem level.

  • These groups also highlight the need to explain our technology in simpler terms, and to be transparent about how our methodologies achieve outcomes and support decision-making at different levels of the organisation. The key performance indicators for AI adoption should not only be accuracy and direct results, but also adherence to AI-driven insights throughout an organisation.

  • These groups remind us that technological change can be emotionally difficult and potentially risky for business outcomes. Business owners are right to remember the strengths of their original processes and, to a certain extent, to protect them.

  • However, contrary to public debate, AI does not necessarily have to be disruptive. Instead, it can augment current processes to be more effective and efficient. There is no need to overhaul a functioning system, but there is almost always a need for improvement. With sufficient attention to design, AI promises to slowly but surely enable that improvement.

  • Certain groups are decidedly threatened by AI and will form a fierce resistance. Regulators must consider the balance between data ownership and accessibility in order to allow AI to be properly and comprehensively adopted.

  • As policy-makers, we should consider how increasingly irrelevant skills may be re-applied in different ways - or if not, what the role of regulation could be in ensuring the development of social security and new working skills.

Tags

impact, Business, AI, Ethics, Structure, resistance, corporate

Comments

Submitted by Pawel RZESZUCINSKI on Thu, 01/11/2018 - 14:55

Interesting breakdown. Thank you.

When it comes to the reluctance to share data, there is great hope in recent developments in performing AI-related computations on encrypted data (e.g. homomorphic encryption), so that the content being analyzed is never revealed to the owner of the AI model (the model itself can also be encrypted, to protect the know-how of the model owner). This is likely to solve a lot of data-privacy headaches in the near future.
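To make the commenter's point concrete, here is a toy sketch of the additive homomorphism behind Paillier-style encryption: multiplying two ciphertexts yields a ciphertext of the *sum* of the plaintexts, so a party can aggregate values it never sees in the clear. The tiny primes here are purely illustrative; this is not a secure implementation, and real deployments use primes thousands of bits long.

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

# Toy key material -- illustration only, never use primes this small.
p, q = 17, 19
n = p * q                    # public modulus
n2 = n * n
g = n + 1                    # standard choice of generator
lam = lcm(p - 1, q - 1)      # private key

def L(x):
    return (x - 1) // n

# Precomputed decryption helper (modular inverse; Python 3.8+).
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    """Encrypt m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: the product of ciphertexts decrypts
# to the sum of the plaintexts -- computed without ever decrypting.
c_sum = (encrypt(12) * encrypt(30)) % n2
```

In a data-sharing setting, a company could encrypt its figures, let an outside model owner combine them this way, and decrypt only the aggregate result.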