| Title | Date | Topics | Code | Count |
| --- | --- | --- | --- | --- |
| BOLT: Bootstrap Long Chain-of-Thought in Language Models without Distillation | Feb 6, 2025 | In-Context Learning, Knowledge Distillation | Unverified | 0 |
| Short-length Adversarial Training Helps LLMs Defend Long-length Jailbreak Attacks: Theoretical and Empirical Evidence | Feb 6, 2025 | In-Context Learning | Code Available | 0 |
| Analyzing limits for in-context learning | Feb 5, 2025 | In-Context Learning | Unverified | 0 |
| Scalable In-Context Learning on Tabular Data via Retrieval-Augmented Large Language Models | Feb 5, 2025 | In-Context Learning, Retrieval | Unverified | 0 |
| OmniRL: In-Context Reinforcement Learning by Large-Scale Meta-Training in Randomized Worlds | Feb 5, 2025 | Few-Shot Learning, Imitation Learning | Unverified | 0 |
| Enhancing Reasoning to Adapt Large Language Models for Domain-Specific Applications | Feb 5, 2025 | In-Context Learning, Language Modeling | Code Available | 1 |
| Is In-Context Universality Enough? MLPs are Also Universal In-Context | Feb 5, 2025 | In-Context Learning, Inductive Bias | Unverified | 0 |
| Path Planning for Masked Diffusion Model Sampling | Feb 5, 2025 | Code Generation, In-Context Learning | Unverified | 0 |
| Transformers Boost the Performance of Decision Trees on Tabular Data across Sample Sizes | Feb 4, 2025 | In-Context Learning, Natural Language Understanding | Code Available | 1 |
| TabPFN Unleashed: A Scalable and Effective Solution to Tabular Classification Problems | Feb 4, 2025 | Computational Efficiency, In-Context Learning | Unverified | 0 |