SOTAVerified

Graph Representation Learning via Hard and Channel-Wise Attention Networks

2019-07-05 · Code Available

Hongyang Gao, Shuiwang Ji


Abstract

Attention operators have been widely applied in various fields, including computer vision, natural language processing, and network embedding learning. Attention operators on graph data enable learnable weights when aggregating information from neighboring nodes. However, graph attention operators (GAOs) consume excessive computational resources, preventing their application to large graphs. In addition, GAOs belong to the family of soft attention rather than hard attention, which has been shown to yield better performance. In this work, we propose a novel hard graph attention operator (hGAO) and a channel-wise graph attention operator (cGAO). hGAO uses the hard attention mechanism by attending only to important nodes; compared to GAO, it improves performance and saves computational cost. To further reduce the requirements on computational resources, we propose cGAO, which performs attention operations along channels. cGAO avoids any dependency on the adjacency matrix, leading to dramatic reductions in computational resource requirements. Experimental results demonstrate that our proposed deep models with the new operators achieve consistently better performance. Comparison results also indicate that hGAO achieves significantly better performance than GAO on both node and graph embedding tasks. Efficiency comparisons show that cGAO leads to dramatic savings in computational resources, making it applicable to large graphs.
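The two operators described in the abstract can be sketched roughly as follows. This is a hedged reconstruction, not the authors' implementation: the projection vector `p` used to score node importance, the `tanh` gate on selected features, and plain dot-product similarity for the attention weights are all assumptions made for illustration.

```python
import numpy as np

def hgao_sketch(X, p, k):
    """Hard graph attention sketch: score nodes with a (learnable)
    projection vector p, keep only the top-k highest-scoring nodes,
    and run soft attention restricted to that subset."""
    scores = X @ p / np.linalg.norm(p)             # importance score per node
    topk = np.argsort(scores)[-k:]                 # indices of the k most important nodes
    Xk = X[topk] * np.tanh(scores[topk])[:, None]  # gate the selected features
    logits = X @ Xk.T                              # (N, k) similarity logits
    logits = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn = logits / logits.sum(axis=1, keepdims=True)
    return attn @ Xk                               # (N, C) attended output

def cgao_sketch(X):
    """Channel-wise graph attention sketch: attention is computed
    between feature channels, so no adjacency matrix is needed."""
    logits = X.T @ X                               # (C, C) channel similarity
    logits = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn = logits / logits.sum(axis=1, keepdims=True)
    return X @ attn.T                              # (N, C) reweighted channels

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))                    # 6 nodes, 4 channels
p = rng.standard_normal(4)
print(hgao_sketch(X, p, k=3).shape)                # (6, 4)
print(cgao_sketch(X).shape)                        # (6, 4)
```

Note the cost asymmetry the abstract points to: `hgao_sketch` shrinks the attention map from N×N to N×k, while `cgao_sketch` makes it C×C, independent of the number of nodes entirely.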

Benchmark Results

| Dataset  | Model  | Metric   | Claimed | Verified | Status     |
|----------|--------|----------|---------|----------|------------|
| COLLAB   | hGANet | Accuracy | 77.48   |          | Unverified |
| D&D      | hGANet | Accuracy | 81.71   |          | Unverified |
| IMDb-M   | hGANet | Accuracy | 49.06   |          | Unverified |
| MUTAG    | hGANet | Accuracy | 90.00   |          | Unverified |
| PROTEINS | hGANet | Accuracy | 78.65   |          | Unverified |
| PROTEINS | cGANet | Accuracy | 78.23   |          | Unverified |
| PROTEINS | GANet  | Accuracy | 77.92   |          | Unverified |
| PTC      | hGANet | Accuracy | 65.02   |          | Unverified |
| PTC      | cGANet | Accuracy | 63.53   |          | Unverified |
| PTC      | GANet  | Accuracy | 62.94   |          | Unverified |
