
Realization of Causal Representation Learning to Adjust Confounding Bias in Latent Space

2022-11-15 · Code Available

Jia Li, Xiang Li, Xiaowei Jia, Michael Steinbach, Vipin Kumar


Abstract

Causal DAGs (directed acyclic graphs) are usually drawn in a 2D plane: edges indicate the direction of causal effects and implicitly encode the passage of time. Because of the inherent limitations of statistical models, effect estimation is usually approximated by averaging correlations across individuals, i.e., observational changes over a specific time window. In machine learning on large-scale problems with complex DAGs, however, such slight biases can snowball and distort global models; more importantly, they have practically impeded the development of AI, for instance through the weak generalizability of causal models. In this paper, we redefine the causal DAG as a do-DAG, in which variables' values are no longer time-stamp-dependent and timelines can be treated as axes. Through a geometric interpretation of the multi-dimensional do-DAG, we identify Causal Representation Bias and its necessary factors, distinguishing it from common confounding biases. Accordingly, we propose a deep-learning (DL)-based framework as a general solution, along with a realization method and experiments verifying its feasibility.
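The confounding bias the abstract contrasts against can be made concrete with a standard textbook illustration (this is not the paper's method, just a minimal sketch of how a backdoor path biases naive effect estimation on a three-node DAG):

```python
# Illustrative sketch (not the paper's framework): confounding bias on the
# DAG Z -> X, Z -> Y, X -> Y, and its correction via backdoor adjustment.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                       # confounder Z
x = z + rng.normal(size=n)                   # treatment X, partly caused by Z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # outcome Y; true effect of X is 2

# Naive estimate: regress Y on X alone -> biased by the backdoor path X <- Z -> Y.
naive = np.polyfit(x, y, 1)[0]

# Adjusted estimate: regress Y on X and Z jointly (backdoor adjustment).
design = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(f"naive slope:    {naive:.2f}")     # ~3.5, inflated by the confounder
print(f"adjusted slope: {adjusted:.2f}")  # ~2.0, the true causal effect
```

With the chosen coefficients the naive slope converges to cov(x, y)/var(x) = 7/2 = 3.5, while conditioning on Z recovers the true coefficient of 2; the paper's Causal Representation Bias is argued to be a distinct phenomenon that such adjustment alone does not remove.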
