SOTAVerified

Language Modeling

Papers

Showing 926–950 of 14,182 papers

Title | Status | Hype
Improving Factuality and Reasoning in Language Models through Multiagent Debate | Code | 2
Training Diffusion Models with Reinforcement Learning | Code | 2
Pengi: An Audio Language Model for Audio Tasks | Code | 2
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model | Code | 2
Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback | Code | 2
DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining | Code | 2
StructGPT: A General Framework for Large Language Model to Reason over Structured Data | Code | 2
Large Language Model Guided Tree-of-Thought | Code | 2
How to Index Item IDs for Recommendation Foundation Models | Code | 2
Huatuo-26M, a Large-scale Chinese Medical QA Dataset | Code | 2
TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation | Code | 2
PMC-LLaMA: Towards Building Open-source Language Models for Medicine | Code | 2
Scaling Transformer to 1M tokens and beyond with RMT | Code | 2
WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation | Code | 2
RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation | Code | 2
Cross-Modal Implicit Relation Reasoning and Aligning for Text-to-Image Person Retrieval | Code | 2
Implicit Neural Representation for Cooperative Low-light Image Enhancement | Code | 2
Large Language Model Instruction Following: A Survey of Progresses and Challenges | Code | 2
Stabilizing Transformer Training by Preventing Attention Entropy Collapse | Code | 2
PaLM-E: An Embodied Multimodal Language Model | Code | 2
OpenICL: An Open-Source Framework for In-context Learning | Code | 2
Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning | Code | 2
SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks | Code | 2
Language Model Crossover: Variation through Few-Shot Prompting | Code | 2
Hyena Hierarchy: Towards Larger Convolutional Language Models | Code | 2
Page 38 of 568

No leaderboard results yet.