Please take part in our “Causabilometer” Survey

The Human-Centered AI Lab invites you to take part in a causability measurement study to test the new Causabilometer.

Please take part in our “human explanation survey”

The Human-Centered AI Lab invites the international research community to take part in a human explanation survey.

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Our paper “Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI” was published on 27 January 2021 in the journal Information Fusion (Q1, IF = 13.669, rank 2/137 in the field Computer Science, Artificial Intelligence):

https://doi.org/10.1016/j.inffus.2021.01.008

We are grateful for the valuable comments of the anonymous reviewers. Parts of this work have received funding from the EU project FeatureCloud. The FeatureCloud project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 826078. This publication reflects only the author’s view and the European Commission is not responsible for any use that may be made of the information it contains. Parts of this work have been funded by the Austrian Science Fund (FWF), Project P-32554 “explainable Artificial Intelligence”.

Causability is important. Why?

Effective (future) Human-AI interaction must take into account a context-specific mapping between explainable AI and human understanding.

The need for deep understanding of algorithms

There are many different machine learning algorithms for any given problem, but which one should you choose to solve a practical problem? Comparing learning algorithms is very difficult and depends heavily on the quality of the data!
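To make this concrete, here is a minimal sketch of such a comparison, assuming scikit-learn is available; the breast-cancer toy dataset and the three candidate models are purely illustrative stand-ins for a real problem. The same evaluation on different data, or after different preprocessing, can easily rank the algorithms differently, which is exactly the point made above.

# Minimal sketch (assumptions: scikit-learn installed; dataset and models are illustrative).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Toy data standing in for a real, domain-specific dataset
X, y = load_breast_cancer(return_X_y=True)

# Three common candidate algorithms; the list is not exhaustive
candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC()),
}

# Same data, same 5-fold splits: even so, the ranking may not carry over
# to another dataset or to differently preprocessed data
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:20s} mean accuracy = {scores.mean():.3f} (std {scores.std():.3f})")

Note that this only compares predictive accuracy under one fixed split scheme; data quality issues (noise, missing values, class imbalance) and the need for explainability would further complicate the choice.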