SOTAVerified

A Split-then-Join Approach to Abstractive Summarization for Very Long Documents in a Low Resource Setting

2025-05-11

Lhuqita Fazry


Abstract

The BIGBIRD-PEGASUS model achieves state-of-the-art results on abstractive summarization of long documents. However, its capacity is still limited to a maximum of 4,096 tokens, which degrades performance when summarizing very long documents. A common way to deal with this limit is to truncate the document. In this research, we take a different approach: we fine-tune the pretrained BIGBIRD-PEGASUS model on a dataset from another domain. First, we filter out all documents shorter than 20,000 tokens to focus on very long documents. To prevent domain shift and overfitting during transfer learning on the small dataset, we augment it by splitting each document-summary training pair into parts, so that each part fits within 4,096 tokens. The source code is available at https://github.com/lhfazry/SPIN-summ.
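The splitting step described above can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the function name, the use of plain token lists, and the chunking strategy (consecutive, non-overlapping windows) are assumptions; the 4,096-token limit comes from the abstract.

```python
def split_into_chunks(tokens, max_len=4096):
    """Split a token sequence into consecutive chunks of at most max_len tokens,
    so each chunk fits the BIGBIRD-PEGASUS input limit."""
    return [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]

# Example: a 10,000-token document yields three chunks.
doc_tokens = list(range(10_000))
chunks = split_into_chunks(doc_tokens)
print([len(c) for c in chunks])  # [4096, 4096, 1808]
```

In practice the corresponding summary would also need to be divided and paired with each chunk to form new training examples; how that pairing is done is left to the paper and repository.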
