The FWF project P-32554 “A reference model of explainable Artificial Intelligence for the Medical Domain” has been granted a total volume of EUR 392,773.50. Progress in statistical machine learning has made AI increasingly successful; deep learning even exceeds human performance on some tasks in the medical domain. However, the full potential of these methods is limited by their difficulty in generating underlying explanatory structures: they lack an explicit declarative knowledge representation. A further motivation for this project is rising legal and privacy concerns, which require that machine decision processes can be understood and retraced. Transparent algorithms could enhance the trust of medical professionals and thereby raise the acceptance of AI solutions generally. This project will provide important contributions to the international research community in the following ways: 1) evidence on various methods of explainability, patterns of explainability, and explainability measurements; based on empirical studies we will develop a library of explanatory patterns and a novel grammar for how these can be combined, and we will define criteria and benchmarks for explainability to answer the question “What is a good explanation?”; 2) principles for measuring the effectiveness of explainability, together with explainability guidelines; and 3) a mapping of human understanding to machine explanations, deployed as an open explanatory framework along with a set of benchmarks and open data to stimulate and inspire further research in the international machine learning community. All outcomes of this project will be made openly available to the international research community.

  • Project period

    2019 – 2023

  • Keywords

    explainable AI, transparent machine learning, causality