| Title | Date | Tasks | Code | # |
| --- | --- | --- | --- | --- |
| Unveiling the Secret Recipe: A Guide For Supervised Fine-Tuning Small LLMs | Dec 17, 2024 | MMLU | Unverified | 0 |
| Bridging the Gap: Enhancing LLM Performance for Low-Resource African Languages with New Benchmarks, Fine-Tuning, and Cultural Adjustments | Dec 16, 2024 | Clinical Knowledge, College Medicine | Code Available | 1 |
| Nanoscaling Floating-Point (NxFP): NanoMantissa, Adaptive Microexponents, and Code Recycling for Direct-Cast Compression of Large Language Models | Dec 15, 2024 | MMLU, Quantization | Unverified | 0 |
| Llama 3 Meets MoE: Efficient Upcycling | Dec 13, 2024 | Mixture-of-Experts, MMLU | Unverified | 0 |
| LLM Distillation for Efficient Few-Shot Multiple Choice Question Answering | Dec 13, 2024 | Few-Shot Learning, Knowledge Distillation | Unverified | 0 |
| HadaCore: Tensor Core Accelerated Hadamard Transform Kernel | Dec 12, 2024 | GPU, MMLU | Code Available | 3 |
| Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation | Dec 4, 2024 | MMLU | Unverified | 0 |
| Nemotron-CC: Transforming Common Crawl into a Refined Long-Horizon Pretraining Dataset | Dec 3, 2024 | ARC, MMLU | Unverified | 0 |
| The Vulnerability of Language Model Benchmarks: Do They Accurately Reflect True LLM Performance? | Dec 2, 2024 | Language Modeling | Unverified | 0 |
| Noise Injection Reveals Hidden Capabilities of Sandbagging Language Models | Dec 2, 2024 | MMLU, Multiple-choice | Code Available | 0 |
| Improving Physics Reasoning in Large Language Models Using Mixture of Refinement Agents | Dec 1, 2024 | Mathematical Reasoning, MMLU | Unverified | 0 |
| Simple and Provable Scaling Laws for the Test-Time Compute of Large Language Models | Nov 29, 2024 | MMLU | Unverified | 0 |
| Mixture of Cache-Conditional Experts for Efficient Mobile Device Inference | Nov 27, 2024 | GSM8K, Language Modeling | Unverified | 0 |
| Predicting Emergent Capabilities by Finetuning | Nov 25, 2024 | CoLA, GSM8K | Unverified | 0 |
| Learning from "Silly" Questions Improves Large Language Models, But Only Slightly | Nov 21, 2024 | Econometrics, Global Facts | Unverified | 0 |
| GenBFA: An Evolutionary Optimization Approach to Bit-Flip Attacks on LLMs | Nov 21, 2024 | MMLU, Text Generation | Unverified | 0 |
| Real-time Adapting Routing (RAR): Improving Efficiency Through Continuous Learning in Software Powered by Layered Foundation Models | Nov 14, 2024 | Domain Generalization, In-Context Learning | Unverified | 0 |
| Reasoning Robustness of LLMs to Adversarial Typographical Errors | Nov 8, 2024 | GSM8K, MMLU | Unverified | 0 |
| Watson: A Cognitive Observability Framework for the Reasoning of LLM-Powered Agents | Nov 5, 2024 | MMLU | Unverified | 0 |
| TODO: Enhancing LLM Alignment with Ternary Preferences | Nov 2, 2024 | ARC, MMLU | Code Available | 0 |
| Project MPG: towards a generalized performance benchmark for LLM capabilities | Oct 28, 2024 | Benchmarking, Chatbot | Unverified | 0 |
| Shopping MMLU: A Massive Multi-Task Online Shopping Benchmark for Large Language Models | Oct 28, 2024 | Few-Shot Learning, MMLU | Code Available | 1 |
| Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design | Oct 24, 2024 | Mixture-of-Experts, MMLU | Code Available | 1 |
| LOGO -- Long cOntext aliGnment via efficient preference Optimization | Oct 24, 2024 | GPU, Language Modeling | Code Available | 1 |
| Adaptive Dense Reward: Understanding the Gap Between Action and Reward Space in Alignment | Oct 23, 2024 | GSM8K, HumanEval | Unverified | 0 |