
SplaXBERT: Leveraging Mixed Precision Training and Context Splitting for Question Answering

2024-12-07

Zhu Yufan, Hao Zeyu, Li Siqi, Niu Boqian

Abstract

SplaXBERT, built on ALBERT-xlarge with context-splitting and mixed precision training, achieves high efficiency in question-answering tasks on lengthy texts. Tested on SQuAD v1.1, it attains an Exact Match of 85.95% and an F1 Score of 92.97%, outperforming traditional BERT-based models in both accuracy and resource efficiency.
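The context-splitting idea described in the abstract can be sketched as a sliding window over a tokenized passage, so that each window fits the model's input limit while overlapping with its neighbors to avoid cutting an answer span in half. The window size and stride below are illustrative assumptions, not the paper's reported settings; mixed precision would apply separately at training time (e.g. via an autocast context in the training loop).

```python
def split_context(tokens, max_len=384, stride=128):
    """Split a long token sequence into overlapping windows (context splitting).

    max_len and stride are hypothetical values chosen for illustration;
    consecutive windows overlap by `stride` tokens so an answer span near a
    window boundary still appears whole in at least one window.
    """
    if len(tokens) <= max_len:
        return [tokens]
    windows = []
    start = 0
    while start < len(tokens):
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # last window reaches the end of the passage
        start += max_len - stride  # advance, keeping `stride` tokens of overlap
    return windows


# Example: a 1000-token passage yields 4 overlapping windows.
windows = split_context(list(range(1000)))
print(len(windows))        # 4
print(windows[1][0])       # 256 (second window starts max_len - stride in)
```

At inference time, each window would be scored independently and the highest-confidence answer span across windows selected, which is the usual way sliding-window QA systems aggregate predictions.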
