
23. Jun

Opening the black box: Explainable AI

Our increasing dependence on complicated models to guide us through everyday business decisions raises interesting dilemmas. Complex AI models can perform impressive tasks, but in machine learning a trade-off is often made between how well the model performs and how well we understand why it makes the predictions it does. This is why artificial intelligence is often referred to as a black box. However, a succession of explainable AI techniques has been developed recently to help tackle this issue. So why do data scientists care so much about the why?

Figure 1: Some black boxes are better opened up

I see three interconnected issues.

Firstly, explainable AI gives you confidence. When you send your model out into the wild, you want to be sure it makes sensible decisions when it encounters new data points. This is especially vital if, despite your best efforts, your training set doesn’t fully reflect the distribution of data you will see after deployment. A famous example is the dog/wolf classifier, where the training data was deliberately skewed to prove a point. Explainable AI was used to locate which areas of the images the model based its predictions on, with the expectation that it would highlight features that differ between dogs and wolves. But because most of the pictures of dogs had grass in the background and most of the pictures of wolves had snow in the background, the model differentiated on the backgrounds rather than the animals. So the model was not really a wolf classifier; it was a snow classifier!
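One way to run this kind of check is with LIME, which comes up again below. As a rough, hedged sketch (not the original study's code), here is how you might use the lime package with a pretrained torchvision classifier to see which image regions a prediction actually rests on; "husky_or_wolf.jpg" is a hypothetical file, and the off-the-shelf ResNet-18 is just a stand-in for a purpose-built dog/wolf model:

    import numpy as np
    import torch
    from PIL import Image
    from torchvision import transforms
    from torchvision.models import resnet18, ResNet18_Weights
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    normalize = transforms.Compose([
        transforms.ToTensor(),  # HxWx3 uint8 array -> 3xHxW float tensor in [0, 1]
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    def classify_fn(images):
        # LIME passes a batch of perturbed HxWx3 arrays; return class probabilities.
        batch = torch.stack([normalize(img) for img in images])
        with torch.no_grad():
            return torch.softmax(model(batch), dim=1).numpy()

    # Hypothetical photo we want an explanation for.
    image = np.array(Image.open("husky_or_wolf.jpg").convert("RGB").resize((224, 224)))

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(image, classify_fn, top_labels=2, num_samples=1000)

    # Overlay the superpixels that push the top prediction. If the highlighted
    # regions are snow rather than the animal, the model has learned the wrong thing.
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
    )
    overlay = mark_boundaries(img / 255.0, mask)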

Secondly, it reduces bias. Ethically, and thankfully often legally, we want to make sure technology does not discriminate. Checking whether your model is picking up (sometimes subtle) signals about people’s gender, race or religion can have real-world consequences. A classic example is an Amazon algorithm designed to assess applicant CVs and give them an overall rating. Among the factors that decreased the overall score were words such as “women’s” (as in belonging to a women’s soccer team). This was a consequence of the model being trained on historical CVs and their acceptance rates, which skewed towards men. Although this particular bias was corrected, the model was ultimately scrapped due to concerns over other possible unknown biases. So ultimately, you need to know why your model is making the decisions it does, for both ethical and legal reasons.

Finally, it adds value. A model that tells you which customers are most likely to churn will certainly help a business with its problem solving. But it helps even more if you can see why those customers are likely to churn. Is one segment of the population churning more than others? Does certain behaviour indicate a likelihood of churning? Is the reason a particular customer is leaving something you can affect? The answers to these questions strengthen your ability to do follow-up investigations or to initiate an intervention. In short, the why will typically give you more actionable insights.

In terms of techniques, there are too many to mention them all here, but they broadly fall into separate categories based on two distinctions: are they model-dependent or model-independent, and do they provide global or local explanations?

Regarding the model-dependent/independent distinction, it is worth noting that when model-specific methods exist for a given model, they are often more efficient and/or more informative than a model-agnostic method. The downside is that they also limit your choice of model. Examples of model-dependent techniques include saliency maps and feature visualizations for CNNs.
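To make the model-dependent case concrete, here is a minimal sketch of a plain gradient saliency map, assuming a PyTorch/torchvision setup; "dog.jpg" is a hypothetical file:

    import torch
    from PIL import Image
    from torchvision import transforms
    from torchvision.models import resnet18, ResNet18_Weights

    model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    preprocess = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    x = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    # Gradient of the winning class score with respect to the input pixels:
    # large gradients mark pixels that most affect the prediction.
    scores = model(x)
    scores[0, scores.argmax()].backward()
    saliency = x.grad.abs().max(dim=1).values.squeeze()  # 224x224 importance map

This is model-dependent in the sense that it needs access to the network’s gradients; a model served behind a black-box API would need a model-agnostic approach instead.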

When it comes to global versus local explanations, you must decide what you are trying to achieve. A global explanation tells you the why of the model, whereas a local explanation tells you the why of a single prediction. Both can be useful. For instance, you may have a model that decides whether a patient should be given a certain drug. You might want to be able to tell the patient why that treatment is right for them, or wrong under different circumstances, and for this you need a local explanation. Local techniques include LIME and SHAP. If, instead, you want to ask whether gender is affecting the probability of someone being granted a loan, you should concentrate on a global explanation. Examples of global techniques include partial dependence plots and permutation feature importance. A recommended free overview of a large range of explainability methods is Christoph Molnar’s "Interpretable Machine Learning".
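As a hedged illustration of the global, model-agnostic side, here is a minimal permutation feature importance sketch using scikit-learn; the built-in breast cancer dataset and the gradient boosting model are just stand-ins for your own data and model:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much held-out accuracy drops:
    # a large drop means the model relies heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean, result.importances_std),
                    key=lambda t: -t[1])
    for name, mean, std in ranked[:5]:
        print(f"{name}: {mean:.3f} +/- {std:.3f}")

The local counterpart would be to run SHAP or LIME on a single row to see which feature values pushed that one prediction up or down.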

Figure 2: It’s important to know whether you need information about the whole globe, or a few locales.

Ultimately, no technique is perfect: a model will never be able to explain its reasoning the way a human can, and results from explainable AI can sometimes be ambiguous or even confusing. Before embarking on an explainable AI project, it is worth considering whether the project will generate more information than you can process (especially when it comes to local explanations) and whether that is a risk you’re willing to take.

We often refer to advanced techniques, such as deep neural networks, as black boxes. While it’s true that for more complicated models these explanations demand more effort to extract, the value you can gain from opening that black box is well worth it.


Bethan Cropp, Senior Data Scientist, PhD