Building trust in AI

First thing in the morning, I drink my favourite coffee, ordered via my virtual assistant, while quickly reading a recent statement by Elon Musk about a potential AI apocalypse (recommended to me by Medium). Checking my mailbox, I delete, with some annoyance, several emails recommending that I try this hotel or that restaurant. Finally, I receive a voice message from my online bank warning me that, given my payment pattern, my account appears to have been recently hacked. 

In the first 30 minutes of my day, I have already interacted with several different AI-powered systems. And I will keep doing so all day long, as AI has evolved to touch our lives in many ways. It empowers us, but it can occasionally annoy us, or even scare us if we listen to some Silicon Valley gurus. Since we are going to live with it, we do need to understand how to build trust in AI. 

 

Why explainable AI matters

How do we build this trust? One way is to promote AI systems that provide explanations for their outputs. First, for legal reasons: a banking program is obliged to explain why a loan application has been rejected. Second, for accountability: would you accept a medical program recommending treatment X without any explanation? Who is responsible if the treatment goes wrong? Last, for ethical reasons: in Spain, an insurance company used a “black box” algorithm (deep learning) to compute premiums. After a while, it realised that the more foreigners lived in a neighbourhood, the more the locals paid for their insurance, which is not morally acceptable. This ethical dimension also depends on people’s attitudes towards AI, which are shaped by history and culture. For instance, the Chinese facial recognition program used to assign social scores would be inconceivable in Europe. 

Bear in mind that not all AIs need to be explainable: if I use software to help me invest, I may not care whether it provides explanations, as long as its recommendations are accurate and make me rich (although it might be different if I run a hedge fund and manage money for which I am accountable to my clients). Proof of stable and correct behaviour (the certification aspect) also increases trust in automated systems. We do not need to understand mechanics to trust our car’s driving system, but we do have to be certain that it almost never fails. In the long term, however, explanations are useful in a co-learning process: the program learns from examples provided by an expert, while human expertise improves if the program provides explicit models. 

Beware: explainability is not the same as trust in the accuracy of the outputs. An AI appeared to have been effectively trained to distinguish between pictures of wolves and dogs, but it later emerged that it had in fact learned to distinguish between snowy and non-snowy backgrounds. This program will go wrong if asked to recognise a wolf outside its natural habitat; programs, like humans, can have bad reasons for giving good answers. On more technical grounds, explainability is not verifiability either: understanding outputs does not mean that those outputs comply with regulations. 
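
To make this “good answer for a bad reason” failure concrete, here is a minimal toy sketch in Python (using NumPy and scikit-learn; the feature names and data are invented for illustration, not taken from the actual wolf/dog study). A classifier that exploits a spurious “snowy background” feature looks accurate in training, then collapses once the shortcut disappears, and a simple look at its coefficients already reveals what it was really relying on.

```python
# Illustrative toy sketch only: a classifier that scores well by exploiting a
# spurious "snowy background" feature, then fails once that correlation breaks.
# Feature names and data are invented, not from the original study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

def make_data(snow_corr):
    """Label: 1 = wolf, 0 = dog. 'animal_shape' is a weak, noisy real signal;
    'snowy_background' matches the label with probability snow_corr."""
    y = rng.integers(0, 2, n)
    animal_shape = y + rng.normal(0, 2.0, n)
    snowy_background = np.where(rng.random(n) < snow_corr, y, 1 - y)
    X = np.column_stack([animal_shape, snowy_background])
    return X, y

# Training data: wolves are almost always photographed in snow.
X_train, y_train = make_data(snow_corr=0.95)
# Test data: the shortcut no longer holds.
X_test, y_test = make_data(snow_corr=0.5)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))   # looks impressive
print("test accuracy :", clf.score(X_test, y_test))     # collapses
print("coefficients  :", dict(zip(["animal_shape", "snowy_background"],
                                  clf.coef_[0].round(2))))
# Inspecting the coefficients (a crude form of explanation) shows the model
# leans on the background feature rather than the animal itself.
```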

 

What is explainable AI?

An AI is explainable if its decision-making process can be understood by humans; such a program has to use some kind of semantics, at a level that depends on whether it interacts with a developer, an expert, a user, etc. This is easier said than done, as AI is far from monolithic. Even if they all go under the AI banner, the different algorithms that interacted with me this morning belong to very different paradigms. The methods that have led the field in recent years are based on deep neural networks. Effective at recognising patterns, for instance in images, they are very data-greedy and their main flaw is being “black boxes”: their decision-making process is hard for the average human to understand. 
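
One common way to peek inside such a black box, sketched below under assumptions of my own (the dataset and model choices are purely illustrative), is to train a small, human-readable surrogate model to imitate the black box’s predictions and then read off its rules.

```python
# A minimal sketch of one post-hoc explanation technique: fit a shallow,
# human-readable surrogate to imitate a "black box" model. The dataset and
# model choices are illustrative assumptions, not a reference implementation.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The opaque model: a neural network whose internal reasoning is hard to read.
black_box = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
).fit(X, y)

# The surrogate: a shallow decision tree trained to mimic the black box's
# *predictions*, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("fidelity to the black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
# The printed rules give a domain expert something concrete to inspect and
# challenge, which is the point of explanation at this level of interaction.
```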

Building explainable AI means making these numerical algorithms interoperate with ontologies and knowledge graphs. Such hybrid systems would combine the power of numerical methods with the expressivity of symbolic methods, which allows them to interact with humans. This may be the greatest challenge for AI in the years to come: building explainable AI means bringing more transparency into the models, more compliance, better model performance and less ethical bias. In short, it is all about building trustworthy AI-powered systems. 
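
As a deliberately simplified illustration of this hybrid idea (every rule, threshold and field name below is invented), the sketch lets a numerical model propose a decision while a small symbolic layer, standing in for an ontology or knowledge graph, validates it and produces a human-readable justification.

```python
# Simplified hybrid sketch: a numerical model's score is combined with an
# explicit, auditable rule base that an expert can read and amend.
# All rules, thresholds and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class LoanApplication:
    income: float          # annual income in euros
    debt_ratio: float      # existing debt divided by income
    model_score: float     # repayment probability from a black-box model

# Symbolic layer: explicit rules, each with a readable label.
RULES = [
    ("debt ratio above 0.45", lambda a: a.debt_ratio > 0.45, "reject"),
    ("income below 12000",    lambda a: a.income < 12_000,   "reject"),
    ("model score above 0.7", lambda a: a.model_score > 0.7, "accept"),
]

def decide(app: LoanApplication):
    """Return a decision plus the explicit rule that justifies it."""
    for label, predicate, outcome in RULES:
        if predicate(app):
            return outcome, f"rule fired: {label}"
    return "refer to human", "no rule fired; the numerical score alone is not enough"

decision, explanation = decide(
    LoanApplication(income=35_000, debt_ratio=0.52, model_score=0.81)
)
print(decision, "-", explanation)   # reject - rule fired: debt ratio above 0.45
```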

 

The global push to next-generation AI

Public decision-makers have understood what is at stake and are now acting to enact regulations that will support this necessary evolution. The European Union made the first move, adopting in 2016 the General Data Protection Regulation (GDPR), in effect since May 2018. Among other things, it introduced a right to explanation: “The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention.” Although some would see this type of regulation as vague, and therefore useless or even potentially harmful, it showed a credible willingness to build trust between Europeans and AI-powered programs. 

Laws provide a development framework, but funding brings tangible progress. In May 2018, DARPA launched an Explainable AI (XAI) program whose final deliverable is a “toolkit library consisting of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems.” 

Following the recommendations of an insightful report on AI by MP Cédric Villani (Fields Medal winner and advisor to Emmanuel Macron on AI matters), the French government has committed substantial public funding (€30 million over three to four years) to projects led by consortia of industrial groups, start-ups and AI labs working on explainable AI. The call for projects should be released during the summer of 2019. 

 

Europe is a potential leader in the race for trustworthy AI

In the context of fierce competition over AI, Europe is often seen as lagging behind the two tech giants, the US and China. It is often argued that, while the US explores the future and China addresses huge vertical markets, Europe loses valuable time imagining and implementing complex regulations. 

While there is some truth to this, Europe might well lead the race to explainable AI thanks to a favourable legal framework and, hopefully, substantial funding to support the effort. France, the motherland of humanist values and a natural advocate of “AI for humanity,” is particularly well placed. Prominent researchers such as J. Pitrat and D. Kayser have trained a host of French researchers in symbolic AI who have not yet been targeted by the GAFAs. Much is at stake: we are building the next generation of AI and need the support of all political and economic forces. We must make this thrilling challenge a priority! 

Jean-Baptiste Fantun – NukkAI
