SOTAVerified

Continual Pretraining

Papers

Showing 41-50 of 70 papers

Title | Status | Hype
LangSAMP: Language-Script Aware Multilingual Pretraining | Code | 0
Towards Democratizing Multilingual Large Language Models For Medicine Through A Two-Stage Instruction Fine-tuning Approach | Code | 0
RedWhale: An Adapted Korean LLM Through Efficient Continual Pretraining | | 0
Bilingual Adaptation of Monolingual Foundation Models | | 0
70B-parameter large language models in Japanese medical question-answering | | 0
Open Generative Large Language Models for Galician | | 0
Understanding the RoPE Extensions of Long-Context LLMs: An Attention Perspective | | 0
BAMBINO-LM: (Bilingual-)Human-Inspired Continual Pretraining of BabyLM | | 0
LongSkywork: A Training Recipe for Efficiently Extending Context Length in Large Language Models | | 0
Cross-sensor self-supervised training and alignment for remote sensing | | 0
Page 5 of 7

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DAS | F1 (macro) | 0.69 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CPT | F1 - macro | 63.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DAS | F1 (macro) | 0.71 | | Unverified