E(n) Equivariant Graph Neural Networks
Victor Garcia Satorras, Emiel Hoogeboom, Max Welling
Code
- github.com/vgsatorras/egnn (official, PyTorch, ★ 526)
- github.com/lucidrains/egnn-pytorch (PyTorch, ★ 520)
- github.com/gerkone/egnn-jax (JAX, ★ 14)
- github.com/stdereka/egnn (PyTorch, ★ 3)
- github.com/oumarkaba/channels_egnn (PyTorch, ★ 2)
Abstract
This paper introduces E(n)-Equivariant Graph Neural Networks (EGNNs), a new model for learning graph neural networks that are equivariant to rotations, translations, reflections, and permutations. In contrast with existing methods, our approach does not require computationally expensive higher-order representations in intermediate layers, yet still achieves competitive or better performance. In addition, whereas existing methods are limited to equivariance in 3-dimensional spaces, our model scales easily to higher-dimensional spaces. We demonstrate the effectiveness of our method on dynamical systems modelling, representation learning in graph autoencoders, and predicting molecular properties.
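The core idea from the abstract — equivariance without higher-order representations — comes from updating coordinates only via weighted sums of relative position vectors, with weights computed from invariant quantities (squared distances and node features). The sketch below is a hypothetical, minimal PyTorch rendition of one such layer for a fully connected graph; the class and variable names are illustrative, not the authors' official implementation (see the repos listed above for that).

```python
import torch
import torch.nn as nn


class EGNNLayer(nn.Module):
    """Minimal sketch of one E(n)-equivariant layer (illustrative, not the
    official implementation). Messages depend only on invariants; coordinates
    are updated along relative position vectors, preserving equivariance."""

    def __init__(self, h_dim, m_dim=32):
        super().__init__()
        # edge MLP: maps (h_i, h_j, ||x_i - x_j||^2) -> message m_ij
        self.phi_e = nn.Sequential(nn.Linear(2 * h_dim + 1, m_dim), nn.SiLU(),
                                   nn.Linear(m_dim, m_dim), nn.SiLU())
        # coordinate MLP: scalar weight per message
        self.phi_x = nn.Sequential(nn.Linear(m_dim, m_dim), nn.SiLU(),
                                   nn.Linear(m_dim, 1))
        # node MLP: updates invariant features from aggregated messages
        self.phi_h = nn.Sequential(nn.Linear(h_dim + m_dim, m_dim), nn.SiLU(),
                                   nn.Linear(m_dim, h_dim))

    def forward(self, h, x):
        # h: (n, h_dim) invariant node features; x: (n, d) coordinates, any d
        n = h.shape[0]
        diff = x[:, None, :] - x[None, :, :]           # (n, n, d) relative positions
        dist2 = (diff ** 2).sum(-1, keepdim=True)      # (n, n, 1) E(n)-invariant
        hi = h[:, None, :].expand(n, n, -1)
        hj = h[None, :, :].expand(n, n, -1)
        m = self.phi_e(torch.cat([hi, hj, dist2], dim=-1))  # messages m_ij
        mask = 1.0 - torch.eye(n).unsqueeze(-1)             # drop self-messages
        m = m * mask
        # equivariant coordinate update: weighted sum of relative positions
        x = x + (diff * self.phi_x(m) * mask).sum(1) / (n - 1)
        # invariant feature update from summed messages
        h = h + self.phi_h(torch.cat([h, m.sum(1)], dim=-1))
        return h, x
```

Because `dist2` is invariant and the coordinate update is a linear combination of `x_i - x_j` vectors, rotating, reflecting, or translating the input coordinates rotates/translates the output coordinates identically while leaving `h` unchanged — and nothing in the layer assumes `d == 3`.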
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| QM9 | EGNN | Standardized MAE | 1.23 | — | Unverified |