SOTAVerified

In-Context Learning

Papers

Showing 101-125 of 2297 papers

Title | Status | Hype
The Prompt is Mightier than the Example | - | 0
Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning | - | 0
Next-token pretraining implies in-context learning | - | 0
Token Reduction Should Go Beyond Efficiency in Generative Models -- From Vision, Language to Multimodality | Code | 3
Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning | - | 0
Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence | - | 0
Understanding Prompt Tuning and In-Context Learning via Meta-Learning | Code | 0
An Empirical Study on Configuring In-Context Learning Demonstrations for Unleashing MLLMs' Sentimental Perception Capability | - | 0
IRONIC: Coherence-Aware Reasoning Chains for Multi-Modal Sarcasm Detection | Code | 0
Learning Beyond Limits: Multitask Learning and Synthetic Data for Low-Resource Canonical Morpheme Segmentation | - | 0
In-Context Watermarks for Large Language Models | - | 0
Unsupervised Prompting for Graph Neural Networks | - | 0
MAPLE: Many-Shot Adaptive Pseudo-Labeling for In-Context Learning | Code | 0
Any Large Language Model Can Be a Reliable Judge: Debiasing with a Reasoning-based Bias Detector | - | 0
Agentic Feature Augmentation: Unifying Selection and Generation with Teaming, Planning, and Memories | - | 0
Meta-Learning an In-Context Transformer Model of Human Higher Visual Cortex | - | 0
The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation | Code | 1
Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation | - | 0
Large Language Models as Computable Approximations to Solomonoff Induction | - | 0
Reinforcing Question Answering Agents with Minimalist Policy Gradient Optimization | - | 0
In-Context Learning Boosts Speech Recognition via Human-like Adaptation to Speakers and Language Varieties | - | 0
Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective | - | 0
DSMentor: Enhancing Data Science Agents with Curriculum Learning and Online Knowledge Accumulation | - | 0
Adversarially Pretrained Transformers may be Universally Robust In-Context Learners | Code | 0
Mechanistic Fine-tuning for In-context Learning | - | 0
Page 5 of 92