SOTAVerified

In-Context Learning

Papers

Showing 2276–2297 of 2297 papers

Title | Status | Hype
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? | Code | 1
Semantic-Oriented Unlabeled Priming for Large-Scale Language Models | — | 0
MetaICL: Learning to Learn In Context | — | 0
Exploring Example Selection for Few-shot Text-to-SQL Semantic Parsing | — | 0
Learning To Retrieve Prompts for In-Context Learning | — | 0
Black-Box Tuning for Language-Model-as-a-Service | Code | 2
Transformers Can Do Bayesian Inference | Code | 1
Learning To Retrieve Prompts for In-Context Learning | Code | 1
GLaM: Efficient Scaling of Language Models with Mixture-of-Experts | — | 0
MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning | Code | 1
Semantic-Oriented Unlabeled Priming for Large-Scale Language Models | — | 0
Meta-learning via Language Model In-context Tuning | — | 0
Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER | — | 0
An Explanation of In-context Learning as Implicit Bayesian Inference | Code | 1
MetaICL: Learning to Learn In Context | Code | 1
It's my Job to be Repetitive! My Job! My Job! -- Linking Repetitions to In-Context Learning in Language Models | — | 0
Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER | Code | 1
Meta-learning via Language Model In-context Tuning | Code | 1
LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners | Code | 1
The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design | — | 0
What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers | Code | 2
PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation | Code | 1
Page 92 of 92

No leaderboard results yet.