| Title | Date | Topics | Code |
| --- | --- | --- | --- |
| Pruning via Merging: Compressing LLMs via Manifold Alignment Based Layer Merging | Jun 24, 2024 | MMLU, Model Compression | Code Available |
| Instruction Tuning With Loss Over Instructions | May 23, 2024 | HumanEval, MMLU | Code Available |
| Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design | Oct 24, 2024 | Mixture-of-Experts, MMLU | Code Available |
| Mobile-MMLU: A Mobile Intelligence Language Understanding Benchmark | Mar 26, 2025 | MMLU, Multiple-choice | Code Available |
| Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers | May 21, 2023 | MMLU, Zero-shot Generalization | Code Available |
| A deeper look at depth pruning of LLMs | Jul 23, 2024 | MMLU | Code Available |
| MyGO Multiplex CoT: A Method for Self-Reflection in Large Language Models via Double Chain of Thought Thinking | Jan 20, 2025 | Decision Making, GSM8K | Code Available |
| Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs | Jul 5, 2024 | General Knowledge, Instruction Following | Code Available |
| Bridging the Gap: Enhancing LLM Performance for Low-Resource African Languages with New Benchmarks, Fine-Tuning, and Cultural Adjustments | Dec 16, 2024 | Clinical Knowledge, College Medicine | Code Available |
| ArcMMLU: A Library and Information Science Benchmark for Large Language Models | Nov 30, 2023 | MMLU | Code Available |