SOTAVerified

Continual Pretraining

Papers

Showing 26–50 of 70 papers

Title | Status | Hype
Simulating Training Data Leakage in Multiple-Choice Benchmarks for LLM Evaluation | Code | 0
A Japanese Language Model and Three New Evaluation Benchmarks for Pharmaceutical NLP | Code | 0
Enhance Mobile Agents Thinking Process Via Iterative Preference Learning | - | 0
Mining Hidden Thoughts from Texts: Evaluating Continual Pretraining with Synthetic Data for LLM Reasoning | - | 0
Efficient Domain-adaptive Continual Pretraining for the Process Industry in the German Language | - | 0
Enhancing Domain-Specific Encoder Models with LLM-Generated Data: How to Leverage Ontologies, and How to Do Without Them | - | 0
Overcoming Vocabulary Mismatch: Vocabulary-agnostic Teacher Guided Language Modeling | - | 0
AfroXLMR-Social: Adapting Pre-trained Language Models for African Languages Social Media Text | - | 0
Robust Data Watermarking in Language Models by Injecting Fictitious Knowledge | Code | 0
Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study | - | 0
Breaking the Stage Barrier: A Novel Single-Stage Approach to Long Context Extension for Large Language Models | - | 0
Alchemy: Amplifying Theorem-Proving Capability through Symbolic Mutation | Code | 0
The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging | - | 0
DoPAMine: Domain-specific Pre-training Adaptation from seed-guided data Mining | - | 0
AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs for Astronomy | - | 0
LangSAMP: Language-Script Aware Multilingual Pretraining | Code | 0
Towards Democratizing Multilingual Large Language Models For Medicine Through A Two-Stage Instruction Fine-tuning Approach | Code | 0
RedWhale: An Adapted Korean LLM Through Efficient Continual Pretraining | - | 0
Bilingual Adaptation of Monolingual Foundation Models | - | 0
70B-parameter large language models in Japanese medical question-answering | - | 0
Open Generative Large Language Models for Galician | - | 0
Understanding the RoPE Extensions of Long-Context LLMs: An Attention Perspective | - | 0
BAMBINO-LM: (Bilingual-)Human-Inspired Continual Pretraining of BabyLM | - | 0
LongSkywork: A Training Recipe for Efficiently Extending Context Length in Large Language Models | - | 0
Cross-sensor self-supervised training and alignment for remote sensing | - | 0
Page 2 of 3

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DAS | F1 (macro) | 0.69 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CPT | F1 (macro) | 63.77 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAS | F1 (macro) | 0.71 | - | Unverified
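
The claimed scores above mix scales: 0.69 and 0.71 read as fractions, while 63.77 reads as a percentage of the same metric. For reference, the sketch below shows how F1 (macro) is conventionally computed, using scikit-learn's f1_score. The labels are toy values chosen for illustration and have no connection to the benchmarks listed above.

from sklearn.metrics import f1_score

# Macro F1: compute F1 for each class separately, then take the
# unweighted mean, so rare classes count as much as frequent ones.
y_true = [0, 0, 1, 1, 2, 2]  # toy gold labels (illustrative only)
y_pred = [0, 1, 1, 1, 2, 0]  # toy predictions (illustrative only)

score = f1_score(y_true, y_pred, average="macro")
print(f"F1 (macro): {score:.2f}")             # fraction in [0, 1]
print(f"As a percentage: {100 * score:.2f}")  # some papers report the x100 form

Verifying a claimed number therefore requires knowing which of the two conventions the paper used; a value like 63.77 is consistent with the x100 form.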