SOTAVerified

In-Context Learning

Papers

Showing 176–200 of 2297 papers

| Title | Status | Hype |
| --- | --- | --- |
| LLM+MAP: Bimanual Robot Task Planning using Large Language Models and Planning Domain Definition Language | Code | 1 |
| JuDGE: Benchmarking Judgment Document Generation for Chinese Legal System | Code | 1 |
| ExDDV: A New Dataset for Explainable Deepfake Detection in Video | Code | 1 |
| Efficient Many-Shot In-Context Learning with Dynamic Block-Sparse Attention | Code | 1 |
| Strategy Coopetition Explains the Emergence and Transience of In-Context Learning | Code | 1 |
| Evaluating Knowledge Generation and Self-Refinement Strategies for LLM-based Column Type Annotation | Code | 1 |
| Self-Training Elicits Concise Reasoning in Large Language Models | Code | 1 |
| FSPO: Few-Shot Preference Optimization of Synthetic Preference Data in LLMs Elicits Effective Personalization to Real Users | Code | 1 |
| Multi-Perspective Data Augmentation for Few-shot Object Detection | Code | 1 |
| Code Summarization Beyond Function Level | Code | 1 |
| CoT-ICL Lab: A Petri Dish for Studying Chain-of-Thought Learning from In-Context Demonstrations | Code | 1 |
| PEARL: Towards Permutation-Resilient LLMs | Code | 1 |
| Which Attention Heads Matter for In-Context Learning? | Code | 1 |
| Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection | Code | 1 |
| Understanding In-Context Machine Translation for Low-Resource Languages: A Case Study on Manchu | Code | 1 |
| BASE-SQL: A powerful open source Text-To-SQL baseline approach | Code | 1 |
| OntoTune: Ontology-Driven Self-training for Aligning Large Language Models | Code | 1 |
| LLM-Supported Natural Language to Bash Translation | Code | 1 |
| Enhancing Reasoning to Adapt Large Language Models for Domain-Specific Applications | Code | 1 |
| Transformers Boost the Performance of Decision Trees on Tabular Data across Sample Sizes | Code | 1 |
| Can Transformers Learn Full Bayesian Inference in Context? | Code | 1 |
| AdaptiveLog: An Adaptive Log Analysis Framework with the Collaboration of Large and Small Language Model | Code | 1 |
| A Study of In-Context-Learning-Based Text-to-SQL Errors | Code | 1 |
| BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning | Code | 1 |
| ImagineFSL: Self-Supervised Pretraining Matters on Imagined Base Set for VLM-based Few-shot Learning | Code | 1 |
Page 8 of 92

No leaderboard results yet.