SOTAVerified

Continual Pretraining

Papers

Showing 61–70 of 70 papers

| Title | Status | Hype |
|-------|--------|------|
| LangSAMP: Language-Script Aware Multilingual Pretraining | Code | 0 |
| A Japanese Language Model and Three New Evaluation Benchmarks for Pharmaceutical NLP | Code | 0 |
| Robust Data Watermarking in Language Models by Injecting Fictitious Knowledge | Code | 0 |
| RomanSetu: Efficiently unlocking multilingual capabilities of Large Language Models via Romanization | Code | 0 |
| PECoP: Parameter Efficient Continual Pretraining for Action Quality Assessment | Code | 0 |
| AF Adapter: Continual Pretraining for Building Chinese Biomedical Language Model | Code | 0 |
| Simulating Training Data Leakage in Multiple-Choice Benchmarks for LLM Evaluation | Code | 0 |
| Fortunately, Discourse Markers Can Enhance Language Models for Sentiment Analysis | Code | 0 |
| Alchemy: Amplifying Theorem-Proving Capability through Symbolic Mutation | Code | 0 |
| Unsupervised Domain Adaptation for Sparse Retrieval by Filling Vocabulary and Word Frequency Gaps | Code | 0 |
Page 7 of 7

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DAS | F1 (macro) | 0.69 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CPT | F1 (macro) | 63.77 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DAS | F1 (macro) | 0.71 | | Unverified |