SOTAVerified

In-Context Learning

Papers

Showing 101–150 of 2297 papers

| Title | Status | Hype |
|---|---|---|
| The Prompt is Mightier than the Example | | 0 |
| Unifying Attention Heads and Task Vectors via Hidden State Geometry in In-Context Learning | | 0 |
| Token Reduction Should Go Beyond Efficiency in Generative Models -- From Vision, Language to Multimodality | Code | 3 |
| Next-token pretraining implies in-context learning | | 0 |
| Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning | | 0 |
| An Empirical Study on Configuring In-Context Learning Demonstrations for Unleashing MLLMs' Sentimental Perception Capability | | 0 |
| Learning Beyond Limits: Multitask Learning and Synthetic Data for Low-Resource Canonical Morpheme Segmentation | | 0 |
| Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence | | 0 |
| MAPLE: Many-Shot Adaptive Pseudo-Labeling for In-Context Learning | Code | 0 |
| IRONIC: Coherence-Aware Reasoning Chains for Multi-Modal Sarcasm Detection | Code | 0 |
| Unsupervised Prompting for Graph Neural Networks | | 0 |
| In-Context Watermarks for Large Language Models | | 0 |
| Understanding Prompt Tuning and In-Context Learning via Meta-Learning | Code | 0 |
| Any Large Language Model Can Be a Reliable Judge: Debiasing with a Reasoning-based Bias Detector | | 0 |
| Agentic Feature Augmentation: Unifying Selection and Generation with Teaming, Planning, and Memories | | 0 |
| The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation | Code | 1 |
| Meta-Learning an In-Context Transformer Model of Human Higher Visual Cortex | | 0 |
| Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation | | 0 |
| Large Language Models as Computable Approximations to Solomonoff Induction | | 0 |
| Reinforcing Question Answering Agents with Minimalist Policy Gradient Optimization | | 0 |
| In-Context Learning Boosts Speech Recognition via Human-like Adaptation to Speakers and Language Varieties | | 0 |
| Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective | | 0 |
| DSMentor: Enhancing Data Science Agents with Curriculum Learning and Online Knowledge Accumulation | | 0 |
| Adversarially Pretrained Transformers may be Universally Robust In-Context Learners | Code | 0 |
| Mechanistic Fine-tuning for In-context Learning | | 0 |
| QA-prompting: Improving Summarization with Large Language Models using Question-Answering | Code | 0 |
| Reasoning Models Better Express Their Confidence | Code | 1 |
| EmoGist: Efficient In-Context Learning for Visual Emotion Understanding | | 0 |
| Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers | | 0 |
| AutoMathKG: The automated mathematical knowledge graph based on LLM and vector database | | 0 |
| FinePhys: Fine-grained Human Action Generation by Explicitly Incorporating Physical Laws for Effective Skeletal Guidance | | 0 |
| CIE: Controlling Language Model Text Generations Using Continuous Signals | Code | 0 |
| Improving LLM Outputs Against Jailbreak Attacks with Expert Model Integration | | 0 |
| Data Whisperer: Efficient Data Selection for Task-Specific LLM Fine-Tuning via Few-Shot In-Context Learning | Code | 1 |
| Relation Extraction or Pattern Matching? Unravelling the Generalisation Limits of Language Models for Biographical RE | Code | 1 |
| Bridging Generative and Discriminative Learning: Few-Shot Relation Extraction via Two-Stage Knowledge-Guided Pre-training | Code | 0 |
| Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models | | 0 |
| Do different prompting methods yield a common task representation in language models? | | 0 |
| Transformer learns the cross-task prior and regularization for in-context learning | | 0 |
| Transformers as Unsupervised Learning Algorithms: A study on Gaussian Mixtures | Code | 0 |
| Feasibility with Language Models for Open-World Compositional Zero-Shot Learning | | 0 |
| When Thinking Fails: The Pitfalls of Reasoning for Instruction-Following in LLMs | | 0 |
| Context parroting: A simple but tough-to-beat baseline for foundation models in scientific machine learning | | 0 |
| Illusion or Algorithm? Investigating Memorization, Emergence, and Symbolic Processing in In-Context Learning | Code | 0 |
| PIG: Privacy Jailbreak Attack on LLMs via Gradient-based Iterative In-Context Optimization | Code | 1 |
| Predictability Shapes Adaptation: An Evolutionary Perspective on Modes of Learning in Transformers | | 0 |
| A Survey on Large Language Models in Multimodal Recommender Systems | | 0 |
| Towards Fair In-Context Learning with Tabular Foundation Models | Code | 0 |
| Tests as Prompt: A Test-Driven-Development Benchmark for LLM Code Generation | | 0 |
| Automated Meta Prompt Engineering for Alignment with the Theory of Mind | | 0 |
Page 3 of 46

No leaderboard results yet.