SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
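
As a concrete illustration of the word n-gram models mentioned above, here is a minimal bigram (word 2-gram) language model with add-one smoothing. The corpus, function names, and toy sentences are illustrative only and are not taken from any paper or benchmark listed below.

```python
from collections import Counter
import math

def train_bigram_lm(corpus):
    """Count unigrams and bigrams over tokenised sentences with sentence markers."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, word, vocab_size):
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

def sentence_perplexity(unigrams, bigrams, sentence):
    """Perplexity of one sentence under the bigram model."""
    tokens = ["<s>"] + sentence + ["</s>"]
    vocab_size = len(unigrams)
    log_prob = sum(
        math.log(bigram_prob(unigrams, bigrams, prev, word, vocab_size))
        for prev, word in zip(tokens, tokens[1:])
    )
    return math.exp(-log_prob / (len(tokens) - 1))

# Toy corpus, purely illustrative.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
unigrams, bigrams = train_bigram_lm(corpus)
print(sentence_perplexity(unigrams, bigrams, ["the", "cat", "sat"]))
```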

Papers

Showing 15851–15900 of 17610 papers

Title | Status | Hype
Knowledge-Aware Conversational Semantic Parsing Over Web Tables | | 0
Evaluating Semantic Rationality of a Sentence: A Sememe-Word-Matching Neural Network based on HowNet | | 0
Context-Free Transductions with Neural Stacks | Code | 0
Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency | | 0
RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency | Code | 0
t-Exponential Memory Networks for Question-Answering Machines | | 0
Random Language Model | | 0
Unsupervised Statistical Machine Translation | Code | 1
Chittron: An Automatic Bangla Image Captioning System | | 0
Finding the Answers with Definition Models | | 0
Simple Fusion: Return of the Language Model | Code | 0
Neural DrugNet | | 0
Indicatements that character language models learn English morpho-syntactic units and regularities | | 0
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items | | 0
Spherical Latent Spaces for Stable Variational Autoencoders | Code | 0
Direct Output Connection for a High-Rank Language Model | Code | 0
A Neural Model of Adaptation in Reading | Code | 0
Grammar Induction with Neural Language Models: An Unusual Replication | Code | 0
A Unified Multilingual Handwriting Recognition System using multigrams sub-lexical units | | 0
Hierarchical Quantized Representations for Script Generation | Code | 0
A Quantum Many-body Wave Function Inspired Language Modeling Approach | Code | 0
Disfluency Detection using a Noisy Channel Model and a Deep Neural Language Model | | 0
Rational Recurrences | Code | 0
Large Margin Neural Language Model | | 0
Pyramidal Recurrent Unit for Language Modeling | Code | 0
Targeted Syntactic Evaluation of Language Models | Code | 0
Predefined Sparseness in Recurrent Sequence Models | Code | 0
Generating Text through Adversarial Training using Skip-Thought Vectors | Code | 0
Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge | Code | 0
Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information | | 0
The Importance of Generation Order in Language Modeling | | 0
Improving Abstraction in Text Summarization | | 0
Neural Architecture Optimization | Code | 0
Improved Chord Recognition by Combining Duration and Harmonic Language Models | | 0
Automatic Chord Recognition with Higher-Order Harmonic Language Modelling | | 0
Improved Language Modeling by Decoding the Past | | 0
RedSync : Reducing Synchronization Traffic for Distributed Deep Learning | | 0
Fake Sentence Detection as a Training Task for Sentence Encoding | | 0
Document Informed Neural Autoregressive Topic Models | Code | 0
Character-Level Language Modeling with Deeper Self-Attention | Code | 0
Learning to Write Notes in Electronic Health Records | | 0
Language Model Supervision for Handwriting Recognition Model Adaptation | | 0
Large Scale Language Modeling: Converging on 40GB of Text in Four Hours | Code | 0
Open Information Extraction from Conjunctive Sentences | | 0
Learning with Noise-Contrastive Estimation: Easing training by learning to scale | | 0
Toward Better Loanword Identification in Uyghur Using Cross-lingual Word Embeddings | | 0
Sentence Weighting for Neural Machine Translation Domain Adaptation | | 0
Modeling with Recurrent Neural Networks for Open Vocabulary Slots | | 0
Learning to Generate Word Representations using Subword Information | | 0
On-Device Neural Language Model Based Word Prediction | Code | 0
Page 318 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
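
The "Validation perplexity" and "Test perplexity" figures in these tables are the exponential of a model's average per-token negative log-likelihood on the held-out split, so lower is better. Below is a minimal sketch of that computation; the function name and the example probabilities are placeholders, not values taken from any listed model.

```python
import math

def perplexity(token_log_probs):
    """Perplexity from per-token natural-log probabilities: exp(mean negative log-likelihood)."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Example: three tokens the model assigned probabilities 0.25, 0.10, and 0.50.
print(perplexity([math.log(0.25), math.log(0.10), math.log(0.50)]))  # ≈ 4.31
```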

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified
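
Bit per Character (BPC) is the same cross-entropy quantity expressed in bits per character rather than as a perplexity: the average negative log2-probability the model assigns to each character, so character-level perplexity equals 2^BPC. A small illustration with made-up probabilities follows; none of the numbers correspond to the models above.

```python
import math

def bits_per_character(char_log2_probs):
    """BPC = average negative log2-probability per character."""
    return -sum(char_log2_probs) / len(char_log2_probs)

# Example: four characters predicted with probabilities 0.5, 0.25, 0.5, and 0.125.
probs = [0.5, 0.25, 0.5, 0.125]
bpc = bits_per_character([math.log2(p) for p in probs])
print(bpc)       # 1.75 bits per character
print(2 ** bpc)  # equivalent character-level perplexity ≈ 3.36
```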

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified