SOTAVerified

In-Context Learning

Papers

Showing 2051–2100 of 2297 papers

Title | Status | Hype
Neural Machine Translation Models Can Learn to be Few-shot Learners | | 0
An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing | | 0
Ambiguity-Aware In-Context Learning with Large Language Models | | 0
Speech-to-Speech Translation with Discrete-Unit-Based Style Transfer | | 0
CONVERSER: Few-Shot Conversational Dense Retrieval with Synthetic Data Generation | Code | 0
Can Whisper perform speech-based in-context learning? | | 0
Breaking through the learning plateaus of in-context learning in Transformer | | 0
Narrowing the Gap between Supervised and Unsupervised Sentence Representation Learning with Large Language Model | Code | 0
Textbooks Are All You Need II: phi-1.5 technical report | | 0
Uncovering mesa-optimization algorithms in Transformers | | 0
MMHQA-ICL: Multimodal In-context Learning for Hybrid Question Answering over Text, Tables and Images | | 0
FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning | | 0
EPA: Easy Prompt Augmentation on Large Language Models via Multiple Sources and Multiple Targets | | 0
Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty | | 0
Gender-specific Machine Translation with Large Language Models | | 0
Gated recurrent neural networks discover attention | | 0
Business Process Text Sketch Automation Generation Using Large Language Model | | 0
Breaking the Bank with ChatGPT: Few-Shot Text Classification for Finance | | 0
Identifying and Mitigating the Security Risks of Generative AI | | 0
Dysen-VDM: Empowering Dynamics-aware Text-to-Video Diffusion with LLMs | | 0
Refashioning Emotion Recognition Modelling: The Advent of Generalised Large Models | | 0
Zero- and Few-Shot Prompting with LLMs: A Comparative Study with Fine-tuned Models for Bangla Sentiment Analysis | Code | 0
Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | | 0
HICL: Hashtag-Driven In-Context Learning for Social Media Natural Language Understanding | Code | 0
Causal Intersectionality and Dual Form of Gradient Descent for Multimodal Analysis: a Case Study on Hateful Memes | Code | 0
Inductive-bias Learning: Generating Code Models with Large Language Model | Code | 0
Exploring Demonstration Ensembling for In-context Learning | Code | 0
Building Emotional Support Chatbots in the Era of LLMs | | 0
RAVEN: In-Context Learning with Retrieval-Augmented Encoder-Decoder Language Models | Code | 0
The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation | Code | 0
Large Language Model Prompt Chaining for Long Legal Document Classification | | 0
FLIRT: Feedback Loop In-context Red Teaming | | 0
Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning | Code | 0
Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models | | 0
Investigating the Learning Behaviour of In-context Learning: A Comparison with Supervised Learning | Code | 0
Reasoning before Responding: Integrating Commonsense-based Causality Explanation for Empathetic Response Generation | | 0
Exploiting the Potential of Seq2Seq Models as Robust Few-Shot Learners | | 0
Metric-Based In-context Learning: A Case Study in Text Simplification | Code | 0
In-Context Learning Learns Label Relationships but Is Not Conventional Learning | Code | 0
Controlling Equational Reasoning in Large Language Models with Prompt Interventions | | 0
SINC: Self-Supervised In-Context Learning for Vision-Language Tasks | | 0
Mega-TTS 2: Boosting Prompting Mechanisms for Zero-Shot Speech Synthesis | | 0
The Power of Large Language Models for Wireless Communication System Development: A Case Study on FPGA Platforms | | 0
Exploring the Integration of Large Language Models into Automatic Speech Recognition Systems: An Empirical Study | | 0
AutoHint: Automatic Prompt Optimization with Hint Generation | Code | 0
Unsupervised Calibration through Prior Adaptation for Text Classification using Large Language Models | Code | 0
Towards Understanding In-Context Learning with Contrastive Demonstrations and Saliency Maps | Code | 0
Large Language Models as General Pattern Machines | | 0
Assessing the efficacy of large language models in generating accurate teacher responses | | 0
One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention | | 0
Page 42 of 46

No leaderboard results yet.