SOTAVerified

In-Context Learning

Papers

Showing 351–400 of 2,297 papers

Title | Status | Hype
Instruction Induction: From Few Examples to Natural Language Task Descriptions | Code | 1
Evolving Prompts In-Context: An Open-ended, Self-replicating Perspective | Code | 1
Evolutionary Prompt Design for LLM-Based Post-ASR Error Correction | Code | 1
Few-shot In-context Learning for Knowledge Base Question Answering | Code | 1
All in an Aggregated Image for In-Image Learning | Code | 1
Learning to Retrieve In-Context Examples for Large Language Models | Code | 1
ExDDV: A New Dataset for Explainable Deepfake Detection in Video | Code | 1
Can Language Models Solve Graph Problems in Natural Language? | Code | 1
Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales? | Code | 1
Explore In-Context Learning for 3D Point Cloud Understanding | Code | 1
In-Context Learning with Iterative Demonstration Selection | Code | 1
Leveraging Large Language Models to Generate Answer Set Programs | Code | 1
Attack Prompt Generation for Red Teaming and Defending Large Language Models | Code | 1
Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs | Code | 1
Exploring Diverse In-Context Configurations for Image Captioning | Code | 1
Code-Style In-Context Learning for Knowledge-Based Question Answering | Code | 1
Diverse Demonstrations Improve In-context Compositional Generalization | Code | 1
Artificial Generational Intelligence: Cultural Accumulation in Reinforcement Learning | Code | 1
Do Large Language Models Know What They Don't Know? | Code | 1
Cognitive Overload Attack: Prompt Injection for Long Context | Code | 1
In-Context Learning with Many Demonstration Examples | Code | 1
Instruct Me More! Random Prompting for Visual In-Context Learning | Code | 1
ExPT: Synthetic Pretraining for Few-Shot Experimental Design | Code | 1
Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation | Code | 1
Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning | Code | 1
DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation | Code | 1
Can Foundation Models Help Us Achieve Perfect Secrecy? | Code | 1
Fact-Checking Complex Claims with Program-Guided Reasoning | Code | 1
In-Context Learning State Vector with Inner and Momentum Optimization | Code | 1
ArchCode: Incorporating Software Requirements in Code Generation with Large Language Models | Code | 1
From system models to class models: An in-context learning paradigm | Code | 1
DIALIGHT: Lightweight Multilingual Development and Evaluation of Task-Oriented Dialogue Systems with Large Language Models | Code | 1
Are Large Language Models Temporally Grounded? | Code | 1
Why So Gullible? Enhancing the Robustness of Retrieval-Augmented Models against Counterfactual Noise | Code | 1
In-Context Learning May Not Elicit Trustworthy Reasoning: A-Not-B Errors in Pretrained Language Models | Code | 1
In-Context Learning User Simulators for Task-Oriented Dialog Systems | Code | 1
kNN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference | Code | 1
In-Context Learning Creates Task Vectors | Code | 1
DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4 | Code | 1
Fine-tuning Large Language Models for Adaptive Machine Translation | Code | 1
In-Context Freeze-Thaw Bayesian Optimization for Hyperparameter Optimization | Code | 1
Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations | Code | 1
In-Context Learning Demonstration Selection via Influence Analysis | Code | 1
CABINET: Content Relevance based Noise Reduction for Table Question Answering | Code | 1
In-Context Explainers: Harnessing LLMs for Explaining Black Box Models | Code | 1
Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data | Code | 1
FreeAL: Towards Human-Free Active Learning in the Era of Large Language Models | Code | 1
In-Context Ensemble Learning from Pseudo Labels Improves Video-Language Models for Low-Level Workflow Understanding | Code | 1
C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing | Code | 1
ByteSized32: A Corpus and Challenge Task for Generating Task-Specific World Models Expressed as Text Games | Code | 1
Page 8 of 46