Integrating Structural and Semantic Signals in Text-Attributed Graphs with BiGTex

2025-04-16

Azadeh Beiranvand, Seyed Mehdi Vahidipour


Abstract

Text-attributed graphs (TAGs) present unique challenges in representation learning by requiring models to capture both the semantic richness of node-associated texts and the structural dependencies of the graph. While graph neural networks (GNNs) excel at modeling topological information, they lack the capacity to process unstructured text. Conversely, large language models (LLMs) are proficient in text understanding but are typically unaware of graph structure. In this work, we propose BiGTex (Bidirectional Graph Text), a novel architecture that tightly integrates GNNs and LLMs through stacked Graph-Text Fusion Units. Each unit allows mutual attention between textual and structural representations, enabling information to flow in both directions: text influences structure, and structure guides textual interpretation. The proposed architecture is trained using parameter-efficient fine-tuning (LoRA), keeping the LLM frozen while adapting to task-specific signals. Extensive experiments on five benchmark datasets demonstrate that BiGTex achieves state-of-the-art performance in node classification and generalizes effectively to link prediction. An ablation study further highlights the importance of soft prompting and bidirectional attention to the model's success.
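The abstract describes stacked Graph-Text Fusion Units in which text and graph representations attend to each other in both directions. Since the paper's code is not reproduced on this page, the following is only a minimal PyTorch sketch of what one such bidirectional unit could look like; the class name `GraphTextFusionUnit`, the use of `nn.MultiheadAttention`, the residual-plus-LayerNorm wiring, and all dimensions are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraphTextFusionUnit(nn.Module):
    """Hypothetical sketch of one Graph-Text Fusion Unit: cross-attention
    in both directions between LLM token states and GNN node states."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Text attends to structure: queries come from text tokens,
        # keys/values from graph node embeddings.
        self.text_to_graph = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Structure attends to text: queries come from graph nodes,
        # keys/values from text tokens.
        self.graph_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_text = nn.LayerNorm(dim)
        self.norm_graph = nn.LayerNorm(dim)

    def forward(self, text_states: torch.Tensor, graph_states: torch.Tensor):
        # text_states:  (batch, seq_len, dim)   token states from the LLM
        # graph_states: (batch, num_nodes, dim) node states from the GNN
        t_attn, _ = self.text_to_graph(text_states, graph_states, graph_states)
        g_attn, _ = self.graph_to_text(graph_states, text_states, text_states)
        # Residual connections preserve each stream's own signal while
        # mixing in information from the other modality.
        return (self.norm_text(text_states + t_attn),
                self.norm_graph(graph_states + g_attn))

# Toy usage with random tensors standing in for real LLM/GNN outputs.
unit = GraphTextFusionUnit(dim=768)
text = torch.randn(2, 128, 768)
nodes = torch.randn(2, 32, 768)
text, nodes = unit(text, nodes)  # both streams updated, shapes unchanged
```

For the LoRA training setup the abstract mentions, the frozen LLM would typically be wrapped with low-rank adapters, for example via Hugging Face `peft`'s `LoraConfig` and `get_peft_model`, so that only the adapter weights (plus the GNN and fusion layers) receive gradients; the specific rank and target modules are not stated in the abstract.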

Tasks

Node Classification, Link Prediction

Benchmark Results

| Dataset    | Model  | Metric           | Claimed   | Verified | Status     |
|------------|--------|------------------|-----------|----------|------------|
| ogbn-arxiv | BiGTex | Number of params | 5,332,968 |          | Unverified |
