
Bridging the Gap Between Explainable AI and Uncertainty Quantification to Enhance Trustability

2021-05-25

Dominik Seuß


Abstract

After the tremendous advances of deep learning and other AI methods, more attention is turning to other properties of modern approaches, such as interpretability and fairness, combined in frameworks like Responsible AI. Two research directions, namely Explainable AI and Uncertainty Quantification, are becoming more and more important, but have so far never been combined and jointly explored. In this paper, I show how both research areas provide potential for combination, why more research should be done in this direction, and how this would lead to an increase in trustability of AI systems.
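The combination the abstract proposes can be illustrated with a minimal sketch: pair a standard uncertainty-quantification technique (Monte Carlo dropout, averaging stochastic forward passes) with a standard XAI technique (input-gradient attribution). The tiny logistic model, the dropout rate, and the number of passes below are hypothetical stand-ins, not the paper's method; they only show how a prediction, its uncertainty, and its explanation can be reported together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in model: logistic regression in place of a deep net.
w = rng.normal(size=5)   # "trained" weights
x = rng.normal(size=5)   # one input example

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mc_dropout_predict(x, w, T=200, p_drop=0.2):
    """Uncertainty quantification via MC dropout: keep dropout active at
    inference and aggregate T stochastic forward passes; the spread of the
    predictions serves as an uncertainty estimate."""
    preds = []
    for _ in range(T):
        mask = rng.random(w.shape) > p_drop            # randomly drop weights
        preds.append(sigmoid(x @ (w * mask) / (1.0 - p_drop)))
    preds = np.asarray(preds)
    return preds.mean(), preds.std()                   # predictive mean, uncertainty

def input_gradient_saliency(x, w):
    """Simple XAI attribution: gradient of the output w.r.t. the input
    (analytic here, since the model is a single sigmoid unit)."""
    y = sigmoid(x @ w)
    return y * (1.0 - y) * w                           # d sigmoid(x·w) / dx

mean, std = mc_dropout_predict(x, w)
saliency = input_gradient_saliency(x, w)
print(f"prediction = {mean:.3f} +/- {std:.3f}")
print("attribution per input feature:", np.round(saliency, 3))
```

Reporting the attribution alongside the predictive spread is one concrete way an explanation and an uncertainty estimate could be surfaced jointly to a user, which is the kind of pairing the abstract argues for.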
