Why the term Causability is important.

Explainability, or understanding as used in explainable AI, is a property of the machine learning model. This is a good first step towards understanding the model. However, understanding in the human sense is not only recognizing, perceiving and reproducing, and not only comprehending content and merely re-presenting facts, but intellectually grasping the context in which these facts appear. Understanding can be seen as a bridge between perceiving and reasoning. From capturing context, without doubt an important indicator of intelligence, the current state-of-the-art AI is still many miles away – most AI systems today are simply classifiers. People, on the other hand, are very well able to capture context instantaneously and to make very good generalizations from very few data points.

Causability, in contrast, is a property of the human: it describes how good explanations are with regard to answering the “why” in the context of a problem. It can be seen as a measure of the “quality of explanations”. I am working on the mapping between Explainability (AI) and Causability (Human), i.e. the mapping between the “machine explanation” and the “human explanation”. This mapping is essential for mutual understanding. If it is a linear mapping, the results are transformed in a way that is understandable to the human without losing meaning – this goes back to the idea of a homomorphism, a structure-preserving map. The big issue here is that our real world is most of the time non-linear and non-stationary.
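
To make the homomorphism remark slightly more concrete, the standard textbook condition for a structure-preserving map is shown below; the symbols f, ⊕ and ⊗ are generic placeholders of mine and not notation from [1]:

```latex
% A map f between two structures (A, \oplus) and (B, \otimes) is a
% homomorphism if combining first and mapping afterwards gives the same
% result as mapping first and combining afterwards:
\[
  f(a_1 \oplus a_2) \;=\; f(a_1) \otimes f(a_2) \qquad \forall\, a_1, a_2 \in A
\]
% Read in this context: if the mapping from machine explanation to human
% explanation preserved the relevant structure, nothing essential would be
% lost in translation; the worry stated above is that non-linear and
% non-stationary real-world settings rarely admit such a clean map.
```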

The above becomes clearer when we look at the Three-Layer Causal Hierarchy that Judea Pearl has developed over the last three decades; a small, executable sketch contrasting the first two levels follows the list below.

Level 1 Association, P(y | x), with the typical activity of “seeing” and questions including “How would seeing X change my belief in Y?”; in the use case in [1] this was the question “what does a feature in a histology slide tell the pathologist about a disease?”

Level 2 Intervention, P(y | do(x), z), with the typical activity of “doing” and questions including “What if I do X?”; in the use case in [1] this was the question “what if the medical professional recommends treatment X – will the patient be cured?”

Level 3 Counterfactuals, P(y_x | x′, y′), with the typical activity of “understanding” and questions including “Why? Was it X that caused Y?”; in the use case in [1] this was the question “was it the treatment that cured the patient?”
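
To illustrate the gap between Level 1 and Level 2 with something executable, here is a minimal simulation sketch in Python. It uses a toy structural causal model with a confounder Z (think: disease severity) that influences both the treatment X and the outcome Y; all variable names and probabilities are made up for illustration and are not taken from [1]. The observational (“seeing”) estimate P(Y=1 | X=1) mixes the effect of the treatment with the effect of the confounder, while the interventional (“doing”) estimate P(Y=1 | do(X=1)) is obtained by forcing X=1 regardless of Z.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

def simulate(do_x=None):
    """Simulate a toy SCM: Z -> X, Z -> Y, X -> Y.

    Z ... confounder (e.g. disease severity), Bernoulli(0.5)
    X ... treatment; severe cases are treated more often, unless we
          intervene and set X by hand (the do-operator)
    Y ... recovery; helped by the treatment, hurt by severity
    """
    z = rng.random(n) < 0.5
    if do_x is None:
        # Observational regime: X depends on Z ("seeing")
        x = rng.random(n) < np.where(z, 0.8, 0.2)
    else:
        # Interventional regime: X is forced to do_x ("doing")
        x = np.full(n, do_x, dtype=bool)
    # Outcome depends on both treatment and confounder
    p_y = 0.3 + 0.4 * x - 0.2 * z
    y = rng.random(n) < p_y
    return x, y

# Level 1, association: P(Y=1 | X=1) estimated from observational data
x_obs, y_obs = simulate()
p_assoc = y_obs[x_obs].mean()

# Level 2, intervention: P(Y=1 | do(X=1)) estimated by forcing the treatment
_, y_do1 = simulate(do_x=True)
p_do = y_do1.mean()

print(f"P(Y=1 | X=1)     = {p_assoc:.3f}")  # biased by the confounder Z
print(f"P(Y=1 | do(X=1)) = {p_do:.3f}")     # the actual causal effect
```

With the numbers chosen here the association estimate comes out noticeably lower than the interventional one, because the treated patients are, on average, the more severe cases; this is exactly the difference between “seeing” and “doing” that Level 2 questions address.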

For each of these levels we are currently working on developing methods to measure effectiveness (does the explanation describe the statement with an adequate level of detail?), efficiency (is this achieved with a minimum of time and effort?) and user satisfaction (how satisfying was the explanation for the decision-making process?). Again, we should mention that there are three types of explanations: 1) a peer-to-peer explanation, as carried out among physicians during medical reporting; 2) an educational explanation, as carried out between teachers and students; 3) a scientific explanation in the strict sense of the theory of science. I should clearly emphasize that in our work we always refer to the first type of explanation.
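
As a purely hypothetical illustration of how such measurements could be brought together, the following sketch aggregates Likert-style ratings for the three dimensions into a single explanation-quality score; the field names, the 1–5 scale and the simple unweighted mean are assumptions of mine, not the instrument described in [1]:

```python
from dataclasses import dataclass

@dataclass
class ExplanationRating:
    """One physician's rating of a single machine explanation (1 = worst, 5 = best).

    effectiveness ... does the explanation describe the statement in adequate detail?
    efficiency    ... was it understood with a minimum of time and effort?
    satisfaction  ... how satisfying was it for the decision-making process?
    """
    effectiveness: int
    efficiency: int
    satisfaction: int

    def score(self) -> float:
        """Unweighted mean, normalized to [0, 1]; the weighting is an open design choice."""
        return (self.effectiveness + self.efficiency + self.satisfaction) / (3 * 5)

# Example: ratings collected from three physicians for one explanation
ratings = [
    ExplanationRating(4, 3, 4),
    ExplanationRating(5, 4, 4),
    ExplanationRating(3, 3, 2),
]
quality = sum(r.score() for r in ratings) / len(ratings)
print(f"explanation quality = {quality:.2f}")  # 0.0 (useless) .. 1.0 (ideal)
```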

[1] Andreas Holzinger, Georg Langs, Helmut Denk, Kurt Zatloukal & Heimo Mueller 2019. Causability and Explainability of AI in Medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, doi:10.1002/widm.1312.

https://onlinelibrary.wiley.com/doi/full/10.1002/widm.1312

A very good introduction to the work of Judea Pearl can be found in this recent PyData conference keynote:

https://www.youtube.com/watch?v=ZaPV1OSEpHw