SOTAVerified

In-Context Learning

Papers

Showing 2151–2175 of 2297 papers

Title | Status | Hype
Neuromorphic In-Context Learning for Energy-Efficient MIMO Symbol Detection | | 0
New Evaluation Paradigm for Lexical Simplification | | 0
Next-token pretraining implies in-context learning | | 0
N-Gram Induction Heads for In-Context RL: Improving Stability and Reducing Data Needs | | 0
No Free Lunch for Defending Against Prefilling Attack by In-Context Learning | | 0
Not All Layers of LLMs Are Necessary During Inference | | 0
Uncovering Model Processing Strategies with Non-Negative Per-Example Fisher Factorization | | 0
NurValues: Real-World Nursing Values Evaluation for Large Language Models in Clinical Context | | 0
O3D: Offline Data-driven Discovery and Distillation for Sequential Decision-Making with Large Language Models | | 0
Octo-planner: On-device Language Model for Planner-Action Agents | | 0
Off-the-shelf ChatGPT is a Good Few-shot Human Motion Predictor | | 0
OmniActions: Predicting Digital Actions in Response to Real-World Multimodal Sensory Inputs with LLMs | | 0
OmniRL: In-Context Reinforcement Learning by Large-Scale Meta-Training in Randomized Worlds | | 0
On Automating Security Policies with Contemporary LLMs | | 0
On-Chip Learning via Transformer In-Context Learning | | 0
One controller to rule them all | | 0
One-Layer Transformer Provably Learns One-Nearest Neighbor In Context | | 0
One size doesn't fit all: Predicting the Number of Examples for In-Context Learning | | 0
One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention | | 0
One Task Vector is not Enough: A Large-Scale Study for In-Context Learning | | 0
On Giant's Shoulders: Effortless Weak to Strong by Dynamic Logits Fusion | | 0
On Linear Representations and Pretraining Data Frequency in Language Models | | 0
On Scaling Up a Multilingual Vision and Language Model | | 0
On the Compositional Generalization Gap of In-Context Learning | | 0
On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model | | 0
Page 87 of 92

No leaderboard results yet.