| OpenGrok: Enhancing SNS Data Processing with Distilled Knowledge and Mask-like Mechanisms | Feb 11, 2025 | Knowledge Distillation, MMLU | Code Available | 0 |
| RoToR: Towards More Reliable Responses for Order-Invariant Inputs | Feb 10, 2025 | Graph Question Answering, MMLU | Code Available | 0 |
| Tokenization Standards for Linguistic Integrity: Turkish as a Benchmark | Feb 10, 2025 | MMLU, Morphological Analysis | Unverified | 0 |
| LM2: Large Memory Models | Feb 9, 2025 | Decoder, MMLU | Code Available | 1 |
| FRAMES: Boosting LLMs with A Four-Quadrant Multi-Stage Pretraining Strategy | Feb 8, 2025 | MMLU | Unverified | 0 |
| Adapt-Pruner: Adaptive Structural Pruning for Efficient Small Language Model Training | Feb 5, 2025 | Language Modeling | Unverified | 0 |
| Evaluation of Large Language Models via Coupled Token Generation | Feb 3, 2025 | Chatbot, Large Language Model | Code Available | 0 |
| QLESS: A Quantized Approach for Data Valuation and Selection in Large Language Model Fine-Tuning | Feb 3, 2025 | Data Valuation, Language Modeling | Code Available | 0 |
| Rethinking Mixture-of-Agents: Is Mixing Different Large Language Models Beneficial? | Feb 2, 2025 | Math, MMLU | Unverified | 0 |
| LLM-Powered Benchmark Factory: Reliable, Generic, and Efficient | Feb 2, 2025 | MMLU | Code Available | 0 |