SOTAVerified

Continual Pretraining

Papers

Showing 61–70 of 70 papers

Title | Status | Hype
CEM: A Data-Efficient Method for Large Language Models to Continue Evolving From Mistakes | | 0
Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora | | 0
LLaVA-c: Continual Improved Visual Instruction Tuning | | 0
LongSkywork: A Training Recipe for Efficiently Extending Context Length in Large Language Models | | 0
Mining Hidden Thoughts from Texts: Evaluating Continual Pretraining with Synthetic Data for LLM Reasoning | | 0
AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs for Astronomy | | 0
Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study | | 0
On the Robustness of Reading Comprehension Models to Entity Renaming | | 0
Open Generative Large Language Models for Galician | | 0
Overcoming Vocabulary Mismatch: Vocabulary-agnostic Teacher Guided Language Modeling | | 0
Page 7 of 7

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DAS | F1 (macro) | 0.69 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CPT | F1 (macro) | 63.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAS | F1 (macro) | 0.71 | | Unverified