| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| Simulating Training Data Leakage in Multiple-Choice Benchmarks for LLM Evaluation | May 30, 2025 | Continual Pretraining, Fairness | Code Available | 0 |
| Self-Reasoning Language Models: Unfold Hidden Reasoning Chains with Few Reasoning Catalyst | May 20, 2025 | ARC, GSM8K | Unverified | 0 |
| Elastic Weight Consolidation for Full-Parameter Continual Pre-Training of Gemma2 | May 9, 2025 | ARC, Belebele | Unverified | 0 |
| Domain-Adaptive Continued Pre-Training of Small Language Models | Apr 13, 2025 | Domain Adaptation, HellaSwag | Unverified | 0 |
| What the HellaSwag? On the Validity of Common-Sense Reasoning Benchmarks | Apr 10, 2025 | Common Sense Reasoning, HellaSwag | Code Available | 0 |
| More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment | Apr 3, 2025 | ARC, HellaSwag | Unverified | 0 |
| Obliviate: Efficient Unmemorization for Protecting Intellectual Property in Large Language Models | Feb 20, 2025 | HellaSwag, Memorization | Unverified | 0 |
| HellaSwag-Pro: A Large-Scale Bilingual Benchmark for Evaluating the Robustness of LLMs in Commonsense Reasoning | Feb 17, 2025 | HellaSwag | Unverified | 0 |
| FinerWeb-10BT: Refining Web Data with LLM-Based Line-Level Filtering | Jan 13, 2025 | Descriptive, HellaSwag | Code Available | 0 |
| SmolTulu: Higher Learning Rate to Batch Size Ratios Can Lead to Better Reasoning in SLMs | Dec 11, 2024 | ARC, GSM8K | Unverified | 0 |