SOTAVerified

In-Context Learning

Papers

Showing 651–675 of 2297 papers

| Title | Status | Hype |
| --- | --- | --- |
| Complementary Explanations for Effective In-Context Learning | Code | 0 |
| Large Language Models Are Partially Primed in Pronoun Interpretation | Code | 0 |
| Larger Language Models Don't Care How You Think: Why Chain-of-Thought Prompting Fails in Subjective Tasks | Code | 0 |
| LinkNER: Linking Local Named Entity Recognition Models to Large Language Models using Uncertainty | Code | 0 |
| Med-PerSAM: One-Shot Visual Prompt Tuning for Personalized Segment Anything Model in Medical Domain | Code | 0 |
| OverPrompt: Enhancing ChatGPT through Efficient In-Context Learning | Code | 0 |
| Competition Dynamics Shape Algorithmic Phases of In-Context Learning | Code | 0 |
| AutoHint: Automatic Prompt Optimization with Hint Generation | Code | 0 |
| Language Models Can See Better: Visual Contrastive Decoding For LLM Multimodal Reasoning | Code | 0 |
| Large Language Models are Biased Reinforcement Learners | Code | 0 |
| Language Models are Better Bug Detector Through Code-Pair Classification | Code | 0 |
| Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning | Code | 0 |
| Language Models can Exploit Cross-Task In-context Learning for Data-Scarce Novel Tasks | Code | 0 |
| AdaptEval: Evaluating Large Language Models on Domain Adaptation for Text Summarization | Code | 0 |
| KwaiChat: A Large-Scale Video-Driven Multilingual Mixed-Type Dialogue Corpus | Code | 0 |
| LaiDA: Linguistics-aware In-context Learning with Data Augmentation for Metaphor Components Identification | Code | 0 |
| Increasing Probability Mass on Answer Choices Does Not Always Improve Accuracy | Code | 0 |
| AlpaPICO: Extraction of PICO Frames from Clinical Trial Documents Using LLMs | Code | 0 |
| Iterative Forward Tuning Boosts In-Context Learning in Language Models | Code | 0 |
| CIE: Controlling Language Model Text Generations Using Continuous Signals | Code | 0 |
| CICLe: Conformal In-Context Learning for Largescale Multi-Class Food Risk Classification | Code | 0 |
| JMI at SemEval 2024 Task 3: Two-step approach for multimodal ECAC using in-context learning with GPT and instruction-tuned Llama models | Code | 0 |
| Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective | Code | 0 |
| A text-to-tabular approach to generate synthetic patient data using LLMs | Code | 0 |
| Bridging Information Gaps in Dialogues With Grounded Exchanges Using Knowledge Graphs | Code | 0 |
Page 27 of 92

No leaderboard results yet.