rLLM: Relational Table Learning with LLMs
Weichen Li, Xiaotong Huang, Jianwu Zheng, Zheng Wang, Chaokun Wang, Li Pan, Jianhua Li
Code Available
- github.com/rllm-project/rllm (in paper, PyTorch, ★ 440)
- github.com/rllm-project/rllm_datasets (official, ★ 8)
Abstract
We introduce rLLM (relationLLM), a PyTorch library designed for Relational Table Learning (RTL) with Large Language Models (LLMs). The core idea is to decompose state-of-the-art Graph Neural Networks, LLMs, and Table Neural Networks into standardized modules, to enable the fast construction of novel RTL-type models in a simple "combine, align, and co-train" manner. To illustrate the usage of rLLM, we introduce a simple RTL method named BRIDGE. Additionally, we present three novel relational tabular datasets (TML1M, TLF2K, and TACM12K) by enhancing classic datasets. We hope rLLM can serve as a useful and easy-to-use development framework for RTL-related tasks. Our code is available at: https://github.com/rllm-project/rllm.
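The "combine, align, and co-train" idea can be sketched in plain PyTorch: a table encoder embeds each row of a relational table, and a graph layer propagates those embeddings along foreign-key links. This is a minimal illustrative sketch, not rLLM's actual API — all class and parameter names below (`TableEncoder`, `GraphConv`, `Bridge`) are hypothetical stand-ins for the standardized modules the library provides.

```python
# Hypothetical sketch of a BRIDGE-style RTL model: module names are
# illustrative stand-ins, not rLLM's real API.
import torch
import torch.nn as nn


class TableEncoder(nn.Module):
    """Stand-in table neural network: embeds each table row."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.proj(x))


class GraphConv(nn.Module):
    """Stand-in GNN layer: mean-aggregates neighbor embeddings
    using an adjacency matrix built from relational links."""
    def __init__(self, hid_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(hid_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return self.lin((adj @ h) / deg)


class Bridge(nn.Module):
    """Combine: table encoder -> graph conv, co-trained end to end."""
    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.table = TableEncoder(in_dim, hid_dim)
        self.graph = GraphConv(hid_dim, num_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return self.graph(self.table(x), adj)


# Toy usage: 5 rows with 8 features each; self-loop-only adjacency.
x = torch.randn(5, 8)
adj = torch.eye(5)
model = Bridge(in_dim=8, hid_dim=16, num_classes=3)
logits = model(x, adj)
print(tuple(logits.shape))  # (5, 3)
```

In the actual library, the table encoder, graph layer, and an LLM-based feature extractor are decomposed into swappable modules, so a new RTL model is built by recombining them rather than writing each from scratch.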
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| TACM12K | BRIDGE | Accuracy (%) | 25.6 | — | Unverified |
| TLF2K | BRIDGE | Accuracy (%) | 42.2 | — | Unverified |
| TML1M | BRIDGE | Accuracy (%) | 36.2 | — | Unverified |