SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model.

Source: Wikipedia
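
The word n-gram models mentioned above assign each word a probability conditioned on the preceding n−1 words, estimated from corpus counts. The sketch below is a minimal illustration of the idea, assuming a bigram model with maximum-likelihood estimates and a made-up toy corpus; production n-gram models add smoothing and backoff (e.g. Kneser-Ney) to cope with unseen word pairs.

```python
from collections import Counter

# Toy corpus; real models are estimated from much larger text collections.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigrams = Counter(corpus)                  # count(w)
bigrams = Counter(zip(corpus, corpus[1:]))  # count(w_prev, w)

def bigram_prob(prev: str, word: str) -> float:
    """Maximum-likelihood estimate of P(word | prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

print(bigram_prob("the", "cat"))  # 0.25: "the" occurs 4 times, "the cat" once
```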

Papers

Showing 16651–16700 of 17610 papers

Title | Status | Hype
Using Term Position Similarity and Language Modeling for Bilingual Document Alignment | – | 0
Using Factored Word Representation in Neural Network Language Models | – | 0
Merged bilingual trees based on Universal Dependencies in Machine Translation | – | 0
ParFDA for Instance Selection for Statistical Machine Translation | – | 0
PJAIT Systems for the WMT 2016 | – | 0
The JHU Machine Translation Systems for WMT 2016 | – | 0
Recurrent Neural Network based Translation Quality Estimation | – | 0
TÜBİTAK SMT System Submission for WMT2016 | – | 0
Normalized Log-Linear Interpolation of Backoff Language Models is Efficient | – | 0
UdS-(retrain|distributional|surface): Improving POS Tagging for OOV Words in German CMC and Web Data | – | 0
The RWTH Aachen University English-Romanian Machine Translation System for WMT 2016 | – | 0
JU-USAAR: A Domain Adaptive MT System | – | 0
SHEF-LIUM-NN: Sentence level Quality Estimation with Neural Network Features | – | 0
Sheffield Systems for the English-Romanian WMT Translation Task | – | 0
IXA Biomedical Translation System at WMT16 Biomedical Translation Task | – | 0
Shallow Discourse Parsing Using Convolutional Neural Network | – | 0
Leveraging Entity Linking and Related Language Projection to Improve Name Transliteration | – | 0
KSAnswer: Question-answering System of Kangwon National University and Sogang University in the 2016 BioASQ Challenge | – | 0
Phrase-Based SMT for Finnish with More Data, Better Models and Alternative Alignment and Translation Tools | – | 0
Jointly Learning to Embed and Predict with Multiple Languages | – | 0
Pronoun Prediction with Linguistic Features and Example Weighing | – | 0
Pronoun Prediction with Latent Anaphora Resolution | – | 0
Pronoun Language Model and Grammatical Heuristics for Aiding Pronoun Prediction | – | 0
Modeling Selectional Preferences of Verbs and Nouns in String-to-Tree Machine Translation | – | 0
Semi-supervised Convolutional Networks for Translation Adaptation with Tiny Amount of In-domain Data | – | 0
The AFRL-MITLL WMT16 News-Translation Task Systems | – | 0
Larger-Context Language Modelling with Recurrent Neural Network | – | 0
Modeling Concept Dependencies in a Scientific Corpus | – | 0
Moses-based official baseline for NEWS 2016 | – | 0
N-gram language models for massively parallel devices | – | 0
The TALP–UPC Spanish–English WMT Biomedical Task: Bilingual Embeddings and Char-based Neural Language Model Rescoring in a Phrase-based System | – | 0
Latent Tree Language Model | Code | 0
Novel Word Embedding and Translation-based Language Modeling for Extractive Speech Summarization | – | 0
Tie-breaker: Using language models to quantify gender bias in sports journalism | – | 0
Recurrent Highway Networks | Code | 0
Recurrent Memory Array Structures | Code | 0
Log-Linear RNNs: Towards Recurrent Neural Networks with Flexible Prior Knowledge | – | 0
Predicting and Understanding Law-Making with Word Vectors and an Ensemble Model | – | 0
Representing Documents and Queries as Sets of Word Embedded Vectors for Information Retrieval | – | 0
Using Word Embeddings for Automatic Query Expansion | – | 0
Gender and Interest Targeting for Sponsored Post Advertising at Tumblr | – | 0
NN-grams: Unifying neural network and n-gram language models for Speech Recognition | – | 0
A segmental framework for fully-unsupervised large-vocabulary speech recognition | Code | 0
On Multiplicative Integration with Recurrent Neural Networks | – | 0
Egyptian Arabic to English Statistical Machine Translation System for NIST OpenMT'2015 | – | 0
Two Discourse Driven Language Models for Semantics | – | 0
Watch What You Just Said: Image Captioning with Text-Conditional Attention | Code | 0
Bidirectional Long-Short Term Memory for Video Description | – | 0
Learning to Generate Compositional Color Descriptions | Code | 0
MuFuRU: The Multi-Function Recurrent Unit | – | 0
Page 334 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified
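
The perplexity figures in the two tables above are the exponential of a model's average per-token negative log-likelihood on the validation or test set; a perplexity of k means the model is, on average, as uncertain as a uniform choice among k tokens. A minimal sketch of the computation, assuming made-up per-token probabilities:

```python
import math

# Hypothetical probabilities the model assigned to the true next tokens.
token_probs = [0.1, 0.25, 0.02, 0.3]

# Average negative log-likelihood (nats per token), then exponentiate.
avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)

print(f"{perplexity:.2f}")  # ≈ 9.04
```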
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | – | Unverified
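
Bits per character, the metric in the table above, is the model's average cross-entropy per character measured in bits (log base 2), so 2^BPC gives the equivalent per-character perplexity. A small sketch of the conversions; the function names are illustrative, not from any particular library:

```python
import math

def bpc_to_char_perplexity(bpc: float) -> float:
    """Per-character perplexity implied by a bits-per-character score."""
    return 2.0 ** bpc

def nats_to_bpc(nll_nats_per_char: float) -> float:
    """Convert an average per-character NLL from nats (base e) to bits."""
    return nll_nats_per_char / math.log(2)

print(f"{bpc_to_char_perplexity(1.22):.2f}")  # ≈ 2.33 for the best entry above
```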
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified