| Title | Date | Tags | Code | Votes |
| --- | --- | --- | --- | --- |
| Review-Instruct: A Review-Driven Multi-Turn Conversations Generation Method for Large Language Models | May 16, 2025 | Diversity, MMLU | Code Available | 0 |
| Mining Hidden Thoughts from Texts: Evaluating Continual Pretraining with Synthetic Data for LLM Reasoning | May 15, 2025 | Continual Pretraining, MMLU | Unverified | 0 |
| KRISTEVA: Close Reading as a Novel Task for Benchmarking Interpretive Reasoning | May 14, 2025 | Benchmarking, MMLU | Unverified | 0 |
| AttentionInfluence: Adopting Attention Head Influence for Weak-to-Strong Pretraining Data Selection | May 12, 2025 | GSM8K, HumanEval | Unverified | 0 |
| SEM: Reinforcement Learning for Search-Efficient Large Language Models | May 12, 2025 | MMLU, reinforcement-learning | Unverified | 0 |
| A Scaling Law for Token Efficiency in LLM Fine-Tuning Under Fixed Compute Budgets | May 9, 2025 | MMLU | Unverified | 0 |
| LLMs Outperform Experts on Challenging Biology Benchmarks | May 9, 2025 | MMLU, Virology | Unverified | 0 |
| Elastic Weight Consolidation for Full-Parameter Continual Pre-Training of Gemma2 | May 9, 2025 | ARC, Belebele | Unverified | 0 |
| Measuring Hong Kong Massive Multi-Task Language Understanding | May 4, 2025 | MMLU, Multi-task Language Understanding | Unverified | 0 |
| Memory-Efficient LLM Training by Various-Grained Low-Rank Projection of Gradients | May 3, 2025 | GSM8K, MMLU | Unverified | 0 |