SOTAVerified

In-Context Learning

Papers

Showing 2251–2297 of 2297 papers

Title | Status | Hype
Exploring Length Generalization in Large Language Models | - | 0
TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second | Code | 5
Rationale-Augmented Ensembles in Language Models | - | 0
Self-Generated In-Context Learning: Leveraging Auto-regressive Language Models as a Demonstration Generator | - | 0
Language Models are General-Purpose Interfaces | - | 0
Automatic Short Math Answer Grading via In-context Meta-learning | Code | 0
Can Foundation Models Help Us Achieve Perfect Secrecy? | Code | 1
Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations | - | 0
Evaluating the Impact of Model Scale for Compositional Generalization in Semantic Parsing | - | 0
Instruction Induction: From Few Examples to Natural Language Task Descriptions | Code | 1
Prototypical Calibration for Few-shot Learning of Language Models | Code | 0
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning | Code | 4
UL2: Unifying Language Learning Paradigms | Code | 1
The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning | Code | 1
Contrastive Learning for Prompt-Based Few-Shot Language Learners | Code | 1
Exploiting Language Model Prompts Using Similarity Measures: A Case Study on the Word-in-Context Task | - | 0
What Makes Good In-Context Examples for GPT-3? | - | 0
On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model | - | 0
Super-Prompting: Utilizing Model-Independent Contextual Data to Reduce Data Annotation Required in Visual Commonsense Tasks | - | 0
Data Distributional Properties Drive Emergent In-Context Learning in Transformers | Code | 1
Can language models learn from explanations in context? | - | 0
Leveraging pre-trained language models for conversational information seeking from text | - | 0
In-Context Learning for Few-Shot Dialogue State Tracking | Code | 1
Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again | Code | 1
Efficient Language Modeling with Sparse all-MLP | - | 0
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? | Code | 1
Semantic-Oriented Unlabeled Priming for Large-Scale Language Models | - | 0
MetaICL: Learning to Learn In Context | - | 0
Exploring Example Selection for Few-shot Text-to-SQL Semantic Parsing | - | 0
Learning To Retrieve Prompts for In-Context Learning | - | 0
Black-Box Tuning for Language-Model-as-a-Service | Code | 2
Transformers Can Do Bayesian Inference | Code | 1
Learning To Retrieve Prompts for In-Context Learning | Code | 1
GLaM: Efficient Scaling of Language Models with Mixture-of-Experts | - | 0
MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning | Code | 1
Semantic-Oriented Unlabeled Priming for Large-Scale Language Models | - | 0
Meta-learning via Language Model In-context Tuning | - | 0
Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER | - | 0
An Explanation of In-context Learning as Implicit Bayesian Inference | Code | 1
MetaICL: Learning to Learn In Context | Code | 1
It's my Job to be Repetitive! My Job! My Job! -- Linking Repetitions to In-Context Learning in Language Models | - | 0
Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER | Code | 1
Meta-learning via Language Model In-context Tuning | Code | 1
LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners | Code | 1
The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design | - | 0
What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers | Code | 2
PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation | Code | 1
Page 46 of 46

No leaderboard results yet.