SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.
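The word n-gram models mentioned above estimate the probability of each word from counts of its preceding context. As a minimal sketch (not from the original page; the toy corpus and add-alpha smoothing constant are illustrative assumptions), a bigram model can be built in a few lines:

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus, alpha=1.0):
    """Count bigrams and return an add-alpha-smoothed conditional P(cur | prev)."""
    unigrams = Counter()              # counts of each context word
    bigrams = defaultdict(Counter)    # bigrams[prev][cur] = count of (prev, cur)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, cur in zip(tokens, tokens[1:]):
            unigrams[prev] += 1
            bigrams[prev][cur] += 1
    vocab = set(unigrams) | {w for c in bigrams.values() for w in c}
    V = len(vocab)
    def prob(prev, cur):
        # Add-alpha smoothing keeps unseen bigrams from getting probability 0.
        return (bigrams[prev][cur] + alpha) / (unigrams[prev] + alpha * V)
    return prob

corpus = ["the cat sat", "the dog sat", "the cat ran"]
prob = train_bigram_lm(corpus)
# "cat" follows "the" twice, "dog" once; with alpha=1 and a 7-token vocab,
# P(cat | the) = (2+1)/(3+7) = 0.3 and P(dog | the) = (1+1)/(3+7) = 0.2.
print(prob("the", "cat"), prob("the", "dog"))
```

The transformer-based LLMs that superseded such models replace these context counts with learned attention over much longer contexts, but they are trained on the same next-token prediction objective.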

Source: Wikipedia

Papers

Showing 10101–10150 of 17610 papers

Title | Hype
Movie2Story: A framework for understanding videos and telling stories in the form of novel text | 0
Moving Beyond LDA: A Comparison of Unsupervised Topic Modelling Techniques for Qualitative Data Analysis of Online Communities | 0
CorNav: Autonomous Agent with Self-Corrected Planning for Zero-Shot Vision-and-Language Navigation | 0
MoxE: Mixture of xLSTM Experts with Entropy-Aware Routing for Efficient Language Modeling | 0
MPIC: Position-Independent Multimodal Context Caching System for Efficient MLLM Serving | 0
mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding | 0
mPLUG-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model | 0
M-RAG: Reinforcing Large Language Model Performance through Retrieval-Augmented Generation with Multiple Partitions | 0
MRIR: Integrating Multimodal Insights for Diffusion-based Realistic Image Restoration | 0
MR-MLLM: Mutual Reinforcement of Multimodal Comprehension and Vision Perception | 0
MR-P: A Parallel Decoding Algorithm for Iterative Refinement Non-Autoregressive Translation | 0
MRP-LLM: Multitask Reflective Large Language Models for Privacy-Preserving Next POI Recommendation | 0
MSA at BEA 2025 Shared Task: Disagreement-Aware Instruction Tuning for Multi-Dimensional Evaluation of LLMs as Math Tutors | 0
MSA Transformer | 0
MSDiagnosis: A Benchmark for Evaluating Large Language Models in Multi-Step Clinical Diagnosis | 0
MSD-LLM: Predicting Ship Detention in Port State Control Inspections with Large Language Model | 0
MSG-BART: Multi-granularity Scene Graph-Enhanced Encoder-Decoder Language Model for Video-grounded Dialogue Generation | 0
MS-HuBERT: Mitigating Pre-training and Inference Mismatch in Masked Language Modelling methods for learning Speech Representations | 0
mSLAM: Massively multilingual joint pre-training for speech and text | 0
MSLM-S2ST: A Multitask Speech Language Model for Textless Speech-to-Speech Translation with Speaker Style Preservation | 0
MST: Masked Self-Supervised Transformer for Visual Representation | 0
MSWA: Refining Local Attention with Multi-Scale Window Attention | 0
MTA-CLIP: Language-Guided Semantic Segmentation with Mask-Text Alignment | 0
MTLHealth: A Deep Learning System for Detecting Disturbing Content in Student Essays | 0
MTLM: Incorporating Bidirectional Text Information to Enhance Language Model Training in Speech Recognition Systems | 0
MTL-SLT: Multi-Task Learning for Spoken Language Tasks | 0
MT-Speech at SemEval-2022 Task 10: Incorporating Data Augmentation and Auxiliary Task with Cross-Lingual Pretrained Language Model for Structured Sentiment Analysis | 0
Mu^2SLAM: Multitask, Multilingual Speech and Language Models | 0
MuAP: Multi-step Adaptive Prompt Learning for Vision-Language Model with Missing Modality | 0
MUCS@LT-EDI-EACL2021:CoHope-Hope Speech Detection for Equality, Diversity, and Inclusion in Code-Mixed Texts | 0
MuFuRU: The Multi-Function Recurrent Unit | 0
Mukayese: Turkish NLP Strikes Back | 0
MulCode: A Multiplicative Multi-way Model for Compressing Neural Language Model | 0
MulDA: A Multilingual Data Augmentation Framework for Low-Resource Cross-Lingual NER | 0
MultiAgent Collaboration Attack: Investigating Adversarial Attacks in Large Language Model Collaborations via Debate | 0
Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning | 0
Multi-agent KTO: Reinforcing Strategic Interactions of Large Language Model in Language Game | 0
Multi-agent Systems for Misinformation Lifecycle : Detection, Correction And Source Identification | 0
Multi-Attribute Constraint Satisfaction via Language Model Rewriting | 0
Multi-cell LSTM Based Neural Language Model | 0
Multichannel End-to-end Speech Recognition | 0
Multichannel Generative Language Model: Learning All Possible Factorizations Within and Across Channels | 0
Knowledgeable Dialogue Reading Comprehension on Key Turns | 0
Multi-Dialect Arabic Speech Recognition | 0
Multi-dimensional Evaluation of Empathetic Dialog Responses | 0
Multidimensional Human Activity Recognition With Large Language Model: A Conceptual Framework | 0
Multi-Style Transfer with Discriminative Feedback on Disjoint Corpus | 0
Multi-D Kneser-Ney Smoothing Preserving the Original Marginal Distributions | 0
Multi-Encoder Learning and Stream Fusion for Transformer-Based End-to-End Automatic Speech Recognition | 0
Multi-Epoch Matrix Factorization Mechanisms for Private Machine Learning | 0
Page 203 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 |  | Unverified
2 | GRU | Validation perplexity | 53.78 |  | Unverified
3 | LSTM | Validation perplexity | 52.73 |  | Unverified
4 | LSTM | Test perplexity | 48.7 |  | Unverified
5 | Temporal CNN | Test perplexity | 45.2 |  | Unverified
6 | TCN | Test perplexity | 45.19 |  | Unverified
7 | GCNN-8 | Test perplexity | 44.9 |  | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 |  | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 |  | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 |  | Unverified
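Most rows above report perplexity, the standard language-modelling metric: the exponential of the average negative log-likelihood per token, so lower is better. A minimal sketch of the computation under that standard definition (not from the original page; the toy log-probabilities are illustrative):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative natural-log probability per token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every token has perplexity
# exp(-mean(log 0.25)) = exp(log 4) = 4.0.
logps = [math.log(0.25)] * 10
print(perplexity(logps))  # 4.0 (up to floating point)
```

Intuitively, a perplexity of k means the model is, on average, as uncertain as if it were choosing uniformly among k tokens at each step, which is why the table's drop from 76.67 to 37.5 represents a large improvement.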
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 |  | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 |  | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 |  | Unverified
4 | R-Transformer | Test perplexity | 84.38 |  | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 |  | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 |  | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 |  | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 |  | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 |  | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 |  | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 |  | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 |  | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 |  | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 |  | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 |  | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 |  | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 |  | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 |  | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 |  | Unverified
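The table above uses bits per character (BPC) instead of perplexity: the average negative log2-probability the model assigns to each character, so per-character perplexity is 2**BPC. A small sketch under those standard definitions (not from the original page; the probabilities are illustrative):

```python
import math

def bits_per_character(char_probs):
    """Average negative log2-probability per character (BPC); lower is better."""
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

# A model giving every character probability 1/2 scores exactly 1.0 BPC,
# i.e. a per-character perplexity of 2**1.0 = 2.
probs = [0.5] * 8
bpc = bits_per_character(probs)
print(bpc)       # 1.0
print(2 ** bpc)  # 2.0
```

BPC is preferred for character-level benchmarks because character vocabularies are tiny, so word-level perplexities would not be comparable across tokenizations.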
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 |  | Unverified
2 | OPT 125M | Test perplexity | 32.26 |  | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 |  | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 |  | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 |  | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 |  | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 |  | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 |  | Unverified
9 | Transformer 125M | Test perplexity | 10.7 |  | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 |  | Unverified