SOTAVerified

In-Context Learning

Papers

Showing 1151–1200 of 2297 papers

| Title | Status | Hype |
| --- | --- | --- |
| Multi-modal Generation via Cross-Modal In-Context Learning | Code | 0 |
| Knowledge Circuits in Pretrained Transformers | Code | 2 |
| FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction | Code | 2 |
| Benchmarks Underestimate the Readiness of Multi-lingual Dialogue Agents | | 0 |
| IM-Context: In-Context Learning for Imbalanced Regression Tasks | Code | 0 |
| Aligning LLMs through Multi-perspective User Preference Ranking-based Feedback for Programming Question Answering | | 0 |
| NoteLLM-2: Multimodal Large Representation Models for Recommendation | Code | 2 |
| RAGSys: Item-Cold-Start Recommender as RAG System | | 0 |
| On Mesa-Optimization in Autoregressively Trained Transformers: Emergence and Capability | Code | 0 |
| ARC: A Generalist Graph Anomaly Detector with In-Context Learning | Code | 1 |
| Unifying Demonstration Selection and Compression for In-Context Learning | | 0 |
| Automatic Domain Adaptation by Transformers in In-Context Learning | | 0 |
| On Understanding Attention-Based In-Context Learning for Categorical Data | | 0 |
| SelfCP: Compressing Over-Limit Prompt via the Frozen Large Language Model Itself | | 0 |
| On the Noise Robustness of In-Context Learning for Text Generation | Code | 0 |
| Benchmarking General-Purpose In-Context Learning | | 0 |
| Mixture of In-Context Prompters for Tabular PFNs | | 0 |
| Unsupervised Meta-Learning via In-Context Learning | | 0 |
| Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars | Code | 0 |
| Learning to Reason via Program Generation, Emulation, and Search | Code | 0 |
| Evaluating and Safeguarding the Adversarial Robustness of Retrieval-Based In-Context Learning | Code | 0 |
| Before Generation, Align it! A Novel and Effective Strategy for Mitigating Hallucinations in Text-to-SQL Generation | Code | 2 |
| Synergizing In-context Learning with Hints for End-to-end Task-oriented Dialog Systems | | 0 |
| Towards Better Understanding of In-Context Learning Ability from In-Context Uncertainty Quantification | | 0 |
| Off-the-shelf ChatGPT is a Good Few-shot Human Motion Predictor | | 0 |
| Towards Global Optimal Visual In-Context Learning Prompt Selection | | 0 |
| MLPs Learn In-Context on Regression and Classification Tasks | Code | 1 |
| Learning Beyond Pattern Matching? Assaying Mathematical Understanding in LLMs | | 0 |
| Linking In-context Learning in Transformers to Human Episodic Memory | Code | 0 |
| In-context Time Series Predictor | | 0 |
| Evaluating Large Language Models for Public Health Classification and Extraction Tasks | | 0 |
| Emotion Identification for French in Written Texts: Considering their Modes of Expression as a Step Towards Text Complexity Analysis | | 0 |
| Implicit In-context Learning | Code | 1 |
| Let's Fuse Step by Step: A Generative Fusion Decoding Algorithm with LLMs for Multi-modal Text Recognition | Code | 2 |
| DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning | Code | 0 |
| Fine-tuned In-Context Learning Transformers are Excellent Tabular Data Classifiers | Code | 2 |
| Transformers Learn Temporal Difference Methods for In-Context Reinforcement Learning | | 0 |
| Comparative Analysis of Different Efficient Fine Tuning Methods of Large Language Models (LLMs) in Low-Resource Setting | | 0 |
| Quantifying Semantic Emergence in Language Models | Code | 0 |
| Asymptotic theory of in-context learning by linear attention | Code | 0 |
| Adapting Large Multimodal Models to Distribution Shifts: The Role of In-Context Learning | Code | 0 |
| Effective In-Context Example Selection through Data Compression | | 0 |
| MAML-en-LLM: Model Agnostic Meta-Training of LLMs for Improved In-Context Learning | | 0 |
| Large Language Models are Biased Reinforcement Learners | Code | 0 |
| In-context Contrastive Learning for Event Causality Identification | Code | 1 |
| Large Language Models in Wireless Application Design: In-Context Learning-enhanced Automatic Network Intrusion Detection | | 0 |
| Language Models can Exploit Cross-Task In-context Learning for Data-Scarce Novel Tasks | Code | 0 |
| Feature-Adaptive and Data-Scalable In-Context Learning | Code | 0 |
| Dynamic In-context Learning with Conversational Models for Data Extraction and Materials Property Prediction | Code | 1 |
| Analogist: Out-of-the-box Visual In-Context Learning with Image Diffusion Model | | 0 |
Page 24 of 46

No leaderboard results yet.