SOTAVerified

Continual Pretraining

Papers

Showing 51–60 of 70 papers

Title | Status | Hype
Understanding the RoPE Extensions of Long-Context LLMs: An Attention Perspective | – | 0
Cross-sensor self-supervised training and alignment for remote sensing | – | 0
Aurora-M: Open Source Continual Pre-training for Multilingual Language and Code | – | 0
DD-TIG at Constraint@ACL2022: Multimodal Understanding and Reasoning for Role Labeling of Entities in Hateful Memes | – | 0
DoPAMine: Domain-specific Pre-training Adaptation from seed-guided data Mining | – | 0
Efficient Domain-adaptive Continual Pretraining for the Process Industry in the German Language | – | 0
Enhance Mobile Agents Thinking Process Via Iterative Preference Learning | – | 0
Enhancing Domain-Specific Encoder Models with LLM-Generated Data: How to Leverage Ontologies, and How to Do Without Them | – | 0
Investigating Continual Pretraining in Large Language Models: Insights and Implications | – | 0
Is Domain Adaptation Worth Your Investment? Comparing BERT and FinBERT on Financial Tasks | – | 0
Page 6 of 7

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DAS | F1 (macro) | 0.69 | – | Unverified
1 | CPT | F1 (macro) | 63.77 | – | Unverified
1 | DAS | F1 (macro) | 0.71 | – | Unverified