Defining and Quantifying the Emergence of Sparse Concepts in DNNs

2021-11-11 · CVPR 2023

Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, Quanshi Zhang

Abstract

This paper aims to illustrate the concept-emerging phenomenon in a trained DNN. Specifically, we find that the inference score of a DNN can be disentangled into the effects of a few interactive concepts. These concepts can be understood as causal patterns in a sparse, symbolic causal graph that explains the DNN. The faithfulness of using such a causal graph to explain the DNN is theoretically guaranteed, because we prove that the causal graph can accurately mimic the DNN's outputs on an exponential number of different masked samples. Moreover, such a causal graph can be further simplified and rewritten as an And-Or graph (AOG) without losing much explanation accuracy.
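The disentanglement described above can be illustrated with a small sketch. Assuming the paper's interaction effects follow the standard Harsanyi-dividend form I(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T), where v(T) is the network output on a sample with only the variables in T unmasked, the decomposition exactly reconstructs v(T) on all 2^n masked samples, and a few nonzero I(S) terms play the role of the sparse concepts (the function `harsanyi_interactions` and the toy output `v` below are illustrative names, not from the paper's code):

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of the tuple s, from the empty set up to s itself."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def harsanyi_interactions(v, n):
    """Interaction effect I(S) = sum_{T subseteq S} (-1)^{|S|-|T|} v(T),
    computed for every subset S of the n input variables."""
    players = tuple(range(n))
    return {
        S: sum((-1) ** (len(S) - len(T)) * v(T) for T in subsets(S))
        for S in subsets(players)
    }

# Toy stand-in for a network's output on a masked sample:
# only the variables in T are unmasked (set to 1), the rest are masked (0).
def v(T):
    x = [1.0 if i in T else 0.0 for i in range(3)]
    return 2.0 * x[0] + 3.0 * x[0] * x[1] - x[2] + 0.5 * x[0] * x[1] * x[2]

I = harsanyi_interactions(v, 3)

# Faithfulness check: the interaction effects reconstruct v(T) exactly
# on every one of the 2^3 masked samples.
for T in subsets(tuple(range(3))):
    assert abs(sum(I[S] for S in subsets(T)) - v(T)) < 1e-9

# Sparsity: only a handful of the 8 possible I(S) are nonzero here.
nonzero = {S for S, val in I.items() if abs(val) > 1e-9}
print(sorted(nonzero, key=len))
```

On this toy function only four of the eight interaction effects are nonzero, mirroring the paper's claim that a trained DNN's output decomposes into a few salient interactive concepts rather than all 2^n terms.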
