SOTAVerified

Continual Pretraining

Papers

Showing 11–20 of 70 papers

Title | Status | Hype
AfroXLMR-Social: Adapting Pre-trained Language Models for African Languages Social Media Text | | 0
Robust Data Watermarking in Language Models by Injecting Fictitious Knowledge | Code | 0
Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study | | 0
Demystifying Domain-adaptive Post-training for Financial LLMs | Code | 1
NyayaAnumana & INLegalLlama: The Largest Indian Legal Judgment Prediction Dataset and Specialized Language Model for Enhanced Decision Analysis | Code | 1
Breaking the Stage Barrier: A Novel Single-Stage Approach to Long Context Extension for Large Language Models | | 0
Alchemy: Amplifying Theorem-Proving Capability through Symbolic Mutation | Code | 0
DoPAMine: Domain-specific Pre-training Adaptation from seed-guided data Mining | | 0
The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging | | 0
AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs for Astronomy | | 0
Page 2 of 7

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DAS | F1 (macro) | 0.69 | | Unverified
1 | CPT | F1 - macro | 63.77 | | Unverified
1 | DAS | F1 (macro) | 0.71 | | Unverified