SOTAVerified

Language Modeling

Papers

Showing 9226–9250 of 14182 papers

| Title | Status | Hype |
| --- | --- | --- |
| Bridging CLIP and StyleGAN through Latent Alignment for Image Editing | — | 0 |
| Scaling Up Probabilistic Circuits by Latent Variable Distillation | Code | 0 |
| QAScore -- An Unsupervised Unreferenced Metric for the Question Generation Evaluation | — | 0 |
| Better Pre-Training by Reducing Representation Confusion | — | 0 |
| Cross-Align: Modeling Deep Cross-lingual Interactions for Word Alignment | Code | 1 |
| Controllable Dialogue Simulation with In-Context Learning | Code | 1 |
| InfoCSE: Information-aggregated Contrastive Learning of Sentence Embeddings | Code | 1 |
| AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models | — | 0 |
| Learning Fine-Grained Visual Understanding for Video Question Answering via Decoupling Spatial-Temporal Modeling | Code | 1 |
| Named Entity Recognition in Twitter: A Dataset and Analysis on Short-Term Temporal Shifts | Code | 2 |
| Novice Type Error Diagnosis with Natural Language Models | — | 0 |
| Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding | Code | 2 |
| PQLM -- Multilingual Decentralized Portable Quantum Language Model for Privacy Protection | — | 0 |
| Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models | — | 0 |
| Improving Large-scale Paraphrase Acquisition and Generation | — | 0 |
| Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation | Code | 0 |
| Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners | Code | 1 |
| Conversational Semantic Role Labeling with Predicate-Oriented Latent Graph | — | 0 |
| Honest Students from Untrusted Teachers: Learning an Interpretable Question-Answering Pipeline from a Pretrained Language Model | — | 0 |
| GLM-130B: An Open Bilingual Pre-trained Model | Code | 6 |
| CCC-wav2vec 2.0: Clustering aided Cross Contrastive Self-supervised learning of speech representations | Code | 1 |
| Bayesian Prompt Learning for Image-Language Model Generalization | Code | 1 |
| Towards Improving Faithfulness in Abstractive Summarization | Code | 1 |
| The Surprising Computational Power of Nondeterministic Stack RNNs | Code | 1 |
| Less is More: Task-aware Layer-wise Distillation for Language Model Compression | Code | 1 |
Page 370 of 568

No leaderboard results yet.