| CEM: A Data-Efficient Method for Large Language Models to Continue Evolving From Mistakes | Apr 11, 2024 | Continual Learning, Continual Pretraining | Unverified | 0 |
| Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora | Oct 16, 2021 | Continual Learning, Continual Pretraining | Unverified | 0 |
| LLaVA-c: Continual Improved Visual Instruction Tuning | Jun 10, 2025 | Continual Learning, Continual Pretraining | Unverified | 0 |
| LongSkywork: A Training Recipe for Efficiently Extending Context Length in Large Language Models | Jun 2, 2024 | Continual Pretraining, Information Retrieval | Unverified | 0 |
| Mining Hidden Thoughts from Texts: Evaluating Continual Pretraining with Synthetic Data for LLM Reasoning | May 15, 2025 | Continual Pretraining, MMLU | Unverified | 0 |
| AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs for Astronomy | Sep 29, 2024 | Astronomy, Benchmarking | Unverified | 0 |
| Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study | Feb 4, 2025 | Continual Pretraining, Machine Translation | Unverified | 0 |
| Hierarchical Label-wise Attention Transformer Model for Explainable ICD Coding | Apr 22, 2022 | Continual Pretraining | Code Available | 0 |
| Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing | Jul 31, 2020 | Continual Pretraining | Code Available | 0 |
| Towards Democratizing Multilingual Large Language Models For Medicine Through A Two-Stage Instruction Fine-tuning Approach | Sep 9, 2024 | Computational Efficiency, Continual Pretraining | Code Available | 0 |
| LangSAMP: Language-Script Aware Multilingual Pretraining | Sep 26, 2024 | Continual Pretraining, Language Modeling | Code Available | 0 |
| A Japanese Language Model and Three New Evaluation Benchmarks for Pharmaceutical NLP | May 22, 2025 | Continual Pretraining, Diagnostic | Code Available | 0 |
| Robust Data Watermarking in Language Models by Injecting Fictitious Knowledge | Mar 6, 2025 | Continual Pretraining, Memorization | Code Available | 0 |
| RomanSetu: Efficiently unlocking multilingual capabilities of Large Language Models via Romanization | Jan 25, 2024 | Continual Pretraining, Sentiment Analysis | Code Available | 0 |
| PECoP: Parameter Efficient Continual Pretraining for Action Quality Assessment | Nov 11, 2023 | Action Quality Assessment, Continual Pretraining | Code Available | 0 |
| AF Adapter: Continual Pretraining for Building Chinese Biomedical Language Model | Nov 21, 2022 | Continual Pretraining, Language Modeling | Code Available | 0 |
| Simulating Training Data Leakage in Multiple-Choice Benchmarks for LLM Evaluation | May 30, 2025 | Continual Pretraining, Fairness | Code Available | 0 |
| Fortunately, Discourse Markers Can Enhance Language Models for Sentiment Analysis | Jan 6, 2022 | Continual Pretraining, Sentiment Analysis | Code Available | 0 |
| Alchemy: Amplifying Theorem-Proving Capability through Symbolic Mutation | Oct 21, 2024 | Automated Theorem Proving, Continual Pretraining | Code Available | 0 |
| Unsupervised Domain Adaptation for Sparse Retrieval by Filling Vocabulary and Word Frequency Gaps | Nov 8, 2022 | Continual Pretraining, Domain Adaptation | Code Available | 0 |