SOTAVerified

Cell Attention Networks

2022-09-16 · Code Available

Lorenzo Giusti, Claudio Battiloro, Lucia Testa, Paolo Di Lorenzo, Stefania Sardellitti, Sergio Barbarossa


Abstract

Since their introduction, graph attention networks have achieved outstanding results in graph representation learning tasks. However, these networks consider only pairwise relationships among nodes and are therefore unable to fully exploit the higher-order interactions present in many real-world datasets. In this paper, we introduce Cell Attention Networks (CANs), a neural architecture operating on data defined over the vertices of a graph, representing the graph as the 1-skeleton of a cell complex introduced to capture higher-order interactions. In particular, we exploit the lower and upper neighborhoods, as encoded in the cell complex, to design two independent masked self-attention mechanisms, thus generalizing the conventional graph attention strategy. The approach used in CANs is hierarchical and incorporates the following steps: i) a lifting algorithm that learns edge features from node features; ii) a cell attention mechanism to find the optimal combination of edge features over both lower and upper neighbors; iii) a hierarchical edge pooling mechanism to extract a compact, meaningful set of features. The experimental results show that CAN is a low-complexity strategy that compares favorably with state-of-the-art results on graph-based learning tasks.
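The pipeline described in the abstract can be sketched in a toy form: lift node features to edge features, then run a masked self-attention pass over an edge's lower neighborhood (edges sharing a node). This is only an illustrative NumPy sketch, not the authors' implementation; the random matrices `W_lift` and `a_low` stand in for learned parameters, and the upper-neighborhood attention and pooling steps are only indicated in comments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, the four edges of a square (its interior is a 2-cell).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
X = rng.standard_normal((4, 3))  # node features

# i) Lifting: edge features from the incident node features. Here the pair of
#    node features is concatenated and projected by a stand-in random matrix.
W_lift = rng.standard_normal((6, 3))
E = np.stack([np.concatenate([X[u], X[v]]) @ W_lift for u, v in edges])

def lower_neighbors(i):
    """Lower neighborhood of edge i: the other edges that share a node with it."""
    return [j for j in range(len(edges))
            if j != i and set(edges[i]) & set(edges[j])]

def masked_attention(E, neigh, a):
    """One masked self-attention pass: softmax-weighted sum over neighbors."""
    out = np.zeros_like(E)
    for i in range(len(E)):
        idx = neigh(i)
        scores = np.array([a @ np.concatenate([E[i], E[j]]) for j in idx])
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[i] = sum(wj * E[j] for wj, j in zip(w, idx))
    return out

# ii) Lower-neighborhood attention over edge features. A full CAN layer would
#     run a second, independent attention over upper neighbors (edges bounding
#     the same 2-cell), combine the two, and iii) apply hierarchical edge
#     pooling to shrink the complex.
a_low = rng.standard_normal(6)
H_low = masked_attention(E, lower_neighbors, a_low)
print(H_low.shape)  # (4, 3): one updated feature vector per edge
```

Each edge of the square has exactly two lower neighbors, so every attention step here averages over two edges; on a general complex the mask varies per edge, which is what distinguishes the two cell-attention mechanisms from ordinary node-level graph attention.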

Tasks

Benchmark Results

| Dataset  | Model | Metric   | Claimed | Verified | Status     |
|----------|-------|----------|---------|----------|------------|
| MUTAG    | CAN   | Accuracy | 94.1    |          | Unverified |
| NCI1     | CAN   | Accuracy | 84.5    |          | Unverified |
| NCI109   | CAN   | Accuracy | 83.6    |          | Unverified |
| PROTEINS | CAN   | Accuracy | 78.2    |          | Unverified |
| PTC      | CAN   | Accuracy | 72.8    |          | Unverified |

Reproductions