SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language: it assigns probabilities to sequences of words. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
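
The word n-gram model mentioned above is the purely statistical baseline that every neural model in the tables below improves on, so it is worth making concrete. Below is a minimal sketch of one: a bigram model with add-one smoothing over a toy corpus. The corpus and all names in it are illustrative assumptions, not drawn from any paper listed here.

```python
from collections import Counter, defaultdict
import math

# Toy corpus; purely illustrative.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]

# Count bigrams, padding each sentence with boundary markers.
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    tokens = ["<s>"] + sentence + ["</s>"]
    for prev, curr in zip(tokens, tokens[1:]):
        bigram_counts[prev][curr] += 1

# Vocabulary = every token observed in any position.
vocab = set(bigram_counts)
for counter in bigram_counts.values():
    vocab.update(counter)

def bigram_prob(prev: str, curr: str) -> float:
    """P(curr | prev) with add-one (Laplace) smoothing."""
    counts = bigram_counts[prev]
    return (counts[curr] + 1) / (sum(counts.values()) + len(vocab))

def sentence_log_prob(sentence: list[str]) -> float:
    """Log-probability of a whole sentence under the bigram model."""
    tokens = ["<s>"] + sentence + ["</s>"]
    return sum(math.log(bigram_prob(p, c)) for p, c in zip(tokens, tokens[1:]))

# An unseen but plausible sentence still gets nonzero probability
# thanks to smoothing.
print(sentence_log_prob("the cat sat on the log".split()))
```

A neural language model plays the same role but replaces the count-based estimate of P(next token | history) with a learned, parameterized one; the perplexity numbers under Benchmark Results measure how well that estimate fits held-out text.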

Papers

Showing 16001-16050 of 17610 papers

Title | Status | Hype
A Language Model based Evaluator for Sentence Compression | - | 0
CYUT-III Team Chinese Grammatical Error Diagnosis System Report in NLPTEA-2018 CGED Shared Task | - | 0
A Neural Approach to Pun Generation | - | 0
Improving Beam Search by Removing Monotonic Constraint for Neural Machine Translation | - | 0
Document Modeling with External Attention for Sentence Extraction | Code | 0
GNEG: Graph-Based Negative Sampling for word2vec | - | 0
Connecting Language and Vision to Actions | - | 0
Investigating Effective Parameters for Fine-tuning of Word Embeddings Using Only a Small Corpus | - | 0
Compositional Language Modeling for Icon-Based Augmentative and Alternative Communication | - | 0
A Hybrid Learning Scheme for Chinese Word Embedding | - | 0
Peerus Review: a tool for scientific experts finding | - | 0
Contextual Language Model Adaptation for Conversational Agents | - | 0
Handling Massive N-Gram Datasets Efficiently | Code | 0
DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures | - | 0
GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking | - | 0
Extending Recurrent Neural Aligner for Streaming End-to-End Speech Recognition in Mandarin | - | 0
Evaluation of sentence embeddings in downstream and linguistic probing tasks | Code | 0
Deep Lip Reading: a comparison of models and an online application | - | 0
Semantic Variation in Online Communities of Practice | - | 0
Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks | - | 0
On Accurate Evaluation of GANs for Language Generation | - | 0
Multilingual End-to-End Speech Recognition with A Single Transformer on Low-Resource Languages | - | 0
Improving latent variable descriptiveness with AutoGen | - | 0
Finding Syntax in Human Encephalography with Beam Search | - | 0
Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models | - | 0
Let's do it "again": A First Computational Approach to Detecting Adverbial Presupposition Triggers | - | 0
Are All Languages Equally Hard to Language-Model? | - | 0
Relational recurrent neural networks | Code | 0
Self-Normalization Properties of Language Modeling | - | 0
Dynamically Hierarchy Revolution: DirNet for Compressing Recurrent Neural Network on Mobile Devices | - | 0
A Novel Framework for Recurrent Neural Networks with Enhancing Information Processing and Transmission between Units | - | 0
Improving neural morphological Tagging using Language Models | - | 0
CLUF: a Neural Model for Second Language Acquisition Modeling | - | 0
A Comparison of Character Neural Language Model and Bootstrapping for Language Identification in Multilingual Noisy Texts | - | 0
Entropy-Based Subword Mining with an Application to Word Embeddings | - | 0
Generative Bridging Network for Neural Sequence Prediction | - | 0
A Multi-Context Character Prediction Model for a Brain-Computer Interface | - | 0
CSReader at SemEval-2018 Task 11: Multiple Choice Question Answering as Textual Entailment | - | 0
A Melody-Conditioned Lyrics Language Model | Code | 0
Binarized LSTM Language Model | - | 0
Efficient Sequence Learning with Group Recurrent Networks | - | 0
Neural Sign Language Translation | Code | 0
Language Model Based Grammatical Error Correction without Annotated Training Data | - | 0
Making Convolutional Networks Recurrent for Visual Sequence Learning | - | 0
Neural Syntactic Generative Models with Exact Marginalization | - | 0
NILC at CWI 2018: Exploring Feature Engineering and Feature Learning | - | 0
Pivot Based Language Modeling for Improved Neural Domain Adaptation | - | 0
SB@GU at the Complex Word Identification 2018 Shared Task | - | 0
Target Foresight Based Attention for Neural Machine Translation | - | 0
The Context-Dependent Additive Recurrent Neural Net | - | 0
Page 321 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
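
Both metrics in these tables are monotone transforms of a model's average negative log-likelihood (NLL) on held-out text: perplexity is the exponential of the per-word NLL in nats, and bits per character is the per-character NLL converted from nats to bits. A minimal sketch of the arithmetic follows; the example values are back-derived from rows above and are only illustrative.

```python
import math

def perplexity(nll_nats_per_word: float) -> float:
    """Word-level perplexity: exp of the average negative
    log-likelihood per word, measured in nats. Lower is better."""
    return math.exp(nll_nats_per_word)

def bits_per_character(nll_nats_per_char: float) -> float:
    """Bits per character (BPC): the average per-character
    negative log-likelihood converted from nats to bits."""
    return nll_nats_per_char / math.log(2)

# A model averaging ~3.62 nats per word has perplexity ~37.3,
# in the range of the GPT-2 Small row above.
print(round(perplexity(3.62), 2))          # 37.34
# A model averaging ~0.85 nats per character scores ~1.23 BPC,
# in the range of the AWD-LSTM row above.
print(round(bits_per_character(0.85), 2))  # 1.23
```

Because both transforms are monotone, ranking models by perplexity or BPC is equivalent to ranking them by held-out cross-entropy.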