SOTAVerified

In-Context Learning

Papers

Showing 1151–1200 of 2297 papers

Title | Status | Hype
Think from Words(TFW): Initiating Human-Like Cognition in Large Language Models Through Think from Words for Japanese Text-level Classification | | 0
ThinkSum: Probabilistic reasoning over sets using large language models | | 0
Think Twice Before Recognizing: Large Multimodal Models for General Fine-grained Traffic Sign Recognition | | 0
TokenRec: Learning to Tokenize ID for LLM-based Generative Recommendation | | 0
ToolNet: Connecting Large Language Models with Massive Tools via Tool Graph | | 0
A Complete Survey on Contemporary Methods, Emerging Paradigms and Hybrid Approaches for Few-Shot Learning | | 0
Towards ASR Robust Spoken Language Understanding Through In-Context Learning With Word Confusion Networks | | 0
Towards Automated Functional Equation Proving: A Benchmark Dataset and A Domain-Specific In-Context Agent | | 0
Towards Autonomous Agents: Adaptive-planning, Reasoning, and Acting in Language Models | | 0
Towards Auto-Regressive Next-Token Prediction: In-Context Learning Emerges from Generalization | | 0
Towards Better Understanding of In-Context Learning Ability from In-Context Uncertainty Quantification | | 0
Towards Effective Disambiguation for Machine Translation with Large Language Models | | 0
Towards Few-Shot Identification of Morality Frames using In-Context Learning | | 0
Towards Global Optimal Visual In-Context Learning Prompt Selection | | 0
Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning | | 0
Towards Lifelong Scene Graph Generation with Knowledge-ware In-context Prompt Learning | | 0
Towards More Effective Table-to-Text Generation: Assessing In-Context Learning and Self-Evaluation with Open-Source Models | | 0
Towards More Unified In-context Visual Understanding | | 0
Towards Multi-modal Graph Large Language Model | | 0
Towards Multimodal In-Context Learning for Vision & Language Models | | 0
Towards Neural No-Resource Language Translation: A Comparative Evaluation of Approaches | | 0
Towards No-Code Programming of Cobots: Experiments with Code Synthesis by Large Code Models for Conversational Programming | | 0
Towards Optimizing a Retrieval Augmented Generation using Large Language Model on Academic Data | | 0
Towards Predicting Any Human Trajectory In Context | | 0
Towards Robust Prompts on Vision-Language Models | | 0
Towards Safer Social Media Platforms: Scalable and Performant Few-Shot Harmful Content Moderation Using Large Language Models | | 0
Towards Secure Program Partitioning for Smart Contracts with LLM's In-Context Learning | | 0
Towards the Effect of Examples on In-Context Learning: A Theoretical Case Study | | 0
Towards Understanding the Relationship between In-context Learning and Compositional Generalization | | 0
Toward Understanding In-context vs. In-weight Learning | | 0
TPTU-v2: Boosting Task Planning and Tool Usage of Large Language Model-based Agents in Real-world Systems | | 0
TR2MTL: LLM based framework for Metric Temporal Logic Formalization of Traffic Rules | | 0
Trained Transformers Learn Linear Models In-Context | | 0
Training Dynamics of In-Context Learning in Linear Attention | | 0
Training Dynamics of Multi-Head Softmax Attention for In-Context Learning: Emergence, Convergence, and Optimality | | 0
Beyond Single-Task: Robust Multi-Task Length Generalization for LLMs | | 0
Training Nonlinear Transformers for Chain-of-Thought Inference: A Theoretical Generalization Analysis | | 0
How Do Nonlinear Transformers Learn and Generalize in In-Context Learning? | | 0
Training Plug-n-Play Knowledge Modules with Deep Context Distillation | | 0
Transfer Learning Beyond Bounded Density Ratios | | 0
Transformer-Based Fault-Tolerant Control for Fixed-Wing UAVs Using Knowledge Distillation and In-Context Adaptation | | 0
Transformer-based Wireless Symbol Detection Over Fading Channels | | 0
On Understanding Attention-Based In-Context Learning for Categorical Data | | 0
Transformer learns the cross-task prior and regularization for in-context learning | | 0
Transformers are Deep Optimizers: Provable In-Context Learning for Deep Model Training | | 0
Transformers are Minimax Optimal Nonparametric In-Context Learners | | 0
Transformers Are Universally Consistent | | 0
Transformers as Game Players: Provable In-context Game-playing Capabilities of Pre-trained Models | | 0
Transformers for Supervised Online Continual Learning | | 0
Transformers generalize differently from information stored in context vs in weights | | 0
Page 24 of 46

No leaderboard results yet.