SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
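
As a concrete illustration of the word n-gram approach mentioned above, here is a minimal sketch of a bigram model with add-one smoothing; the tiny corpus is illustrative only, and real systems use large corpora and stronger smoothing such as Kneser-Ney:

    import math
    from collections import Counter

    # Minimal bigram language model with add-one (Laplace) smoothing.
    corpus = "the cat sat on the mat the dog sat on the rug".split()
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    vocab_size = len(unigrams)

    def bigram_prob(prev, word):
        # P(word | prev), smoothed so unseen pairs get nonzero probability.
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    def perplexity(tokens):
        # 2 ** (average negative log2-probability per bigram).
        log_prob = sum(math.log2(bigram_prob(p, w))
                       for p, w in zip(tokens, tokens[1:]))
        return 2 ** (-log_prob / (len(tokens) - 1))

    print(perplexity("the cat sat on the rug".split()))

A language model in this sense assigns a probability to any word sequence; the perplexity printed at the end is the standard measure of how well it does so, and is the metric used in the benchmark tables below.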

Papers

Showing 16301–16350 of 17610 papers

Title | Status | Hype
Modelling Word Burstiness in Natural Language: A Generalised Polya Process for Document Language Models in Information Retrieval | - | 0
Neural Networks Compression for Language Modeling | - | 0
CLaC @ QATS: Quality Assessment for Text Simplification | - | 0
Syllable-level Neural Language Model for Agglutinative Language | - | 0
Comparison of Decoding Strategies for CTC Acoustic Models | - | 0
VQS: Linking Segmentations to Questions and Answers for Supervised Attention in VQA and Question-Focused Semantic Segmentation | Code | 0
Early Improving Recurrent Elastic Highway Network | - | 0
Neural Machine Translation Leveraging Phrase-based Models in a Hybrid Search | - | 0
TandemNet: Distilling Knowledge from Medical Images Using Diagnostic Reports as Optional Semantic References | - | 0
Location Name Extraction from Targeted Text Streams using Gazetteer-based Statistical Language Models | Code | 0
Regularizing and Optimizing LSTM Language Models | Code | 1
Revisiting Activation Regularization for Language RNNs | - | 0
Dynamic Entity Representations in Neural Language Models | Code | 0
Detecting Anxiety through Reddit | Code | 0
BUCC 2017 Shared Task: a First Attempt Toward a Deep Learning Framework for Identifying Parallel Sentences in Comparable Corpora | - | 0
Representing Compositionality based on Multiple Timescales Gated Recurrent Neural Networks with Adaptive Temporal Hierarchy for Character-Level Language Models | - | 0
Neural Sequence-to-sequence Learning of Internal Word Structure | - | 0
Parsing with Context Embeddings | - | 0
Learning Word Representations with Regularization from Prior Knowledge | - | 0
A Semi-universal Pipelined Approach to the CoNLL 2017 UD Shared Task | - | 0
A Joint Model for Semantic Sequences: Frames, Entities, Sentiments | - | 0
Classifying Semantic Clause Types: Modeling Context and Genre Characteristics with Recurrent Neural Networks and Attention | - | 0
Sheffield at SemEval-2017 Task 9: Transition-based language generation from AMR. | - | 0
UWAV at SemEval-2017 Task 7: Automated feature-based system for locating puns | - | 0
A Generative Parser with a Discriminative Recognition Algorithm | - | 0
Bayesian Sparsification of Recurrent Neural Networks | Code | 1
Probabilistic Graphical Models for Credibility Analysis in Evolving Online Communities | - | 0
Self-organized Hierarchical Softmax | - | 0
Synthesising Sign Language from semantics, approaching "from the target and back" | - | 0
Dual Rectified Linear Units (DReLUs): A Replacement for Tanh Activation Functions in Quasi-Recurrent Neural Networks | Code | 1
Exploring Neural Transducers for End-to-End Speech Recognition | - | 0
Transition-Based Generation from Abstract Meaning Representations | Code | 0
LV-ROVER: Lexicon Verified Recognizer Output Voting Error Reduction | - | 0
Language modeling with Neural trans-dimensional random fields | - | 0
OBJ2TEXT: Generating Visually Descriptive Language from Object Layouts | - | 0
Attention-Based End-to-End Speech Recognition on Voice Search | - | 0
High-risk learning: acquiring new word vectors from tiny data | Code | 0
Syllable-aware Neural Language Models: A Failure to Beat Character-aware Ones | Code | 0
Improving Language Modeling using Densely Connected Recurrent Neural Networks | - | 0
Learning Visually Grounded Sentence Representations | - | 0
On the State of the Art of Evaluation in Neural Language Models | Code | 0
A Simple Language Model based on PMI Matrix Approximations | - | 0
Do Neural Nets Learn Statistical Laws behind Natural Language? | - | 0
Controlling Linguistic Style Aspects in Neural Language Generation | - | 0
MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network | - | 0
An Embedded Deep Learning based Word Prediction | Code | 0
Multiscale sequence modeling with a learned dictionary | - | 0
Weighted-Entropy-Based Quantization for Deep Neural Networks | - | 0
ER3: A Unified Framework for Event Retrieval, Recognition and Recounting | - | 0
Sentence Embedding for Neural Machine Translation Domain Adaptation | - | 0
Page 327 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
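
Perplexity, the metric in the tables above and below, is the exponential of the average per-token negative log-likelihood, so lower is better: a test perplexity of 37.5 (GPT-2 Small above) means the model is, on average, as uncertain as a uniform choice among roughly 37.5 tokens. A minimal sketch of the computation, with made-up log-probabilities standing in for real model output:

    import math

    # Perplexity = exp(average negative log-likelihood per token).
    # These log-probabilities are placeholders, not real model output.
    token_log_probs = [-2.1, -0.4, -3.3, -1.2, -0.8]  # ln P(token | context)

    nll = -sum(token_log_probs) / len(token_log_probs)  # average NLL in nats
    print(f"avg NLL: {nll:.3f} nats, perplexity: {math.exp(nll):.2f}")
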
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | - | Unverified
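
Bits per character (BPC), the metric in the character-level table above, is the average cross-entropy per character measured in bits; it relates to per-character perplexity as perplexity = 2^BPC, so the table's spread from 1.67 down to 1.22 BPC corresponds to roughly 3.2 down to 2.3 effective choices per character. A small sketch of the conversion, with a placeholder loss value:

    import math

    # BPC is cross-entropy per character in base 2. Training losses are
    # usually in nats (natural log), so divide by ln(2) to convert.
    loss_nats_per_char = 0.86  # placeholder, not a real measurement
    bpc = loss_nats_per_char / math.log(2)
    print(f"BPC: {bpc:.2f}, per-character perplexity: {2 ** bpc:.2f}")
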
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
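
The pre-trained models in the last table (OPT, GPT-Neo) can be scored this way with standard tooling. A hedged sketch using the Hugging Face transformers API; the model name matches one table entry, but the one-sentence sample text is only a stand-in, since a real evaluation would slide a context window over an entire test corpus:

    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Score a text's perplexity under a pre-trained causal LM.
    # "EleutherAI/gpt-neo-125M" matches a model in the table; the sample
    # text stands in for a full test set such as WikiText.
    model_name = "EleutherAI/gpt-neo-125M"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()

    text = "A language model is a model of natural language."
    input_ids = tokenizer(text, return_tensors="pt").input_ids

    with torch.no_grad():
        # With labels == input_ids, the model returns the mean next-token
        # cross-entropy in nats; exponentiating gives perplexity.
        loss = model(input_ids, labels=input_ids).loss

    print(f"perplexity: {math.exp(loss.item()):.2f}")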