SOTAVerified

In-Context Learning

Papers

Showing 51–100 of 2297 papers

Title | Status | Hype
A Survey on Mixture of Experts | Code | 3
Revisiting VerilogEval: A Year of Improvements in Large-Language Models for Hardware Code Generation | Code | 3
QuRating: Selecting High-Quality Data for Training Language Models | Code | 3
Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision | Code | 3
AnomalyGPT: Detecting Industrial Anomalies Using Large Vision-Language Models | Code | 3
OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text | Code | 3
PromptDresser: Improving the Quality and Controllability of Virtual Try-On via Generative Textual Prompt and Prompt-aware Mask | Code | 3
The Surprising Effectiveness of Test-Time Training for Few-Shot Learning | Code | 3
Can LLMs Learn New Concepts Incrementally without Forgetting? | Code | 2
NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers | Code | 2
CoMM: A Coherent Interleaved Image-Text Dataset for Multimodal Understanding and Generation | Code | 2
MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Code | 2
NeoBERT: A Next-Generation BERT | Code | 2
Memory Mosaics | Code | 2
MedFMC: A Real-world Dataset and Benchmark For Foundation Model Adaptation in Medical Image Classification | Code | 2
Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse | Code | 2
Many-Shot In-Context Learning in Multimodal Foundation Models | Code | 2
GSCo: Towards Generalizable AI in Medicine via Generalist-Specialist Collaboration | Code | 2
MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation | Code | 2
Long-Context Language Modeling with Parallel Context Encoding | Code | 2
Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding | Code | 2
CausalPFN: Amortized Causal Effect Estimation via In-Context Learning | Code | 2
LLoCO: Learning Long Contexts Offline | Code | 2
LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition | Code | 2
Linear Transformers with Learnable Kernel Functions are Better In-Context Models | Code | 2
Let's Fuse Step by Step: A Generative Fusion Decoding Algorithm with LLMs for Multi-modal Text Recognition | Code | 2
Linearizing Large Language Models | Code | 2
Large Language Models are In-Context Molecule Learners | Code | 2
Adapting Language Models to Compress Contexts | Code | 2
LayoutPrompter: Awaken the Design Ability of Large Language Models | Code | 2
LLMs in the Imaginarium: Tool Learning through Simulated Trial and Error | Code | 2
Making Large Language Models Perform Better in Knowledge Graph Completion | Code | 2
NoteLLM-2: Multimodal Large Representation Models for Recommendation | Code | 2
KICGPT: Large Language Model with Knowledge in Context for Knowledge Graph Completion | Code | 2
JoLT: Joint Probabilistic Predictions on Tabular Data Using LLMs | Code | 2
Knowledge Circuits in Pretrained Transformers | Code | 2
Borrowing Treasures from Neighbors: In-Context Learning for Multimodal Learning with Missing Modalities and Data Scarcity | Code | 2
InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales | Code | 2
In-Context Language Learning: Architectures and Algorithms | Code | 2
In-Context Learning Unlocked for Diffusion Models | Code | 2
AMAGO-2: Breaking the Multi-Task Barrier in Meta-Reinforcement Learning with Transformers | Code | 2
KV Shifting Attention Enhances Language Modeling | Code | 2
Just read twice: closing the recall gap for recurrent language models | Code | 2
HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution | Code | 2
Improving CLIP Training with Language Rewrites | Code | 2
ConTextTab: A Semantics-Aware Tabular In-Context Learner | Code | 2
How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for Metric Learning | Code | 2
Black-Box Tuning for Language-Model-as-a-Service | Code | 2
Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback | Code | 2
Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Dataset Augmented by ChatGPT | Code | 2
Page 2 of 46

No leaderboard results yet.