SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as word n-gram language models.

Source: Wikipedia
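
To make the progression above concrete, here is a minimal sketch of the "purely statistical" word n-gram approach that neural and transformer models superseded (a bigram model, n = 2). The toy corpus and function names are illustrative assumptions, not drawn from any paper listed below.

```python
from collections import Counter, defaultdict
import random

# Toy corpus for illustration only.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each context word (bigram counts).
counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    counts[prev][word] += 1

def prob(word: str, prev: str) -> float:
    """P(word | prev) by maximum likelihood; 0.0 for unseen pairs."""
    total = sum(counts[prev].values())
    return counts[prev][word] / total if total else 0.0

def sample_next(prev: str) -> str:
    """Draw the next word proportionally to the bigram counts."""
    words, freqs = zip(*counts[prev].items())
    return random.choices(words, weights=freqs)[0]

print(prob("sat", "cat"))   # 1.0: in this corpus "cat" is always followed by "sat"
print(sample_next("the"))   # one of: cat, mat, dog, rug
```

Transformer-based LLMs replace these fixed-window count tables with a learned distribution over the next token conditioned on the entire preceding context.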

Papers

Showing 16001–16050 of 17610 papers

Title | Status | Hype
A Comparison of Character Neural Language Model and Bootstrapping for Language Identification in Multilingual Noisy Texts |  | 0
A Multi-Context Character Prediction Model for a Brain-Computer Interface |  | 0
Entropy-Based Subword Mining with an Application to Word Embeddings |  | 0
CSReader at SemEval-2018 Task 11: Multiple Choice Question Answering as Textual Entailment |  | 0
YNU Deep at SemEval-2018 Task 12: A BiLSTM Model with Neural Attention for Argument Reasoning Comprehension |  | 0
YNU_AI1799 at SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge of Different model ensemble |  | 0
Using Morphological Knowledge in Open-Vocabulary Neural Language Models |  | 0
A Melody-Conditioned Lyrics Language Model | Code | 0
Efficient Sequence Learning with Group Recurrent Networks |  | 0
Generative Bridging Network for Neural Sequence Prediction |  | 0
Binarized LSTM Language Model |  | 0
Pivot Based Language Modeling for Improved Neural Domain Adaptation |  | 0
The Context-Dependent Additive Recurrent Neural Net |  | 0
Target Foresight Based Attention for Neural Machine Translation |  | 0
Neural Syntactic Generative Models with Exact Marginalization |  | 0
Video Description: A Survey of Methods, Datasets and Evaluation Metrics |  | 0
Code-Switching Language Modeling using Syntax-Aware Multi-Task Learning |  | 0
Unsupervised Text Style Transfer using Language Models as Discriminators | Code | 0
Like a Baby: Visually Situated Neural Language Acquisition |  | 0
Table-to-Text: Describing Table Region with Natural Language |  | 0
Can DNNs Learn to Lipread Full Sentences? |  | 0
Sigsoftmax: Reanalysis of the Softmax Bottleneck |  | 0
Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Task Analysis |  | 0
Fake Sentence Detection as a Training Task for Sentence Encoding |  | 0
Implicit Language Model in LSTM for OCR | Code | 0
Pushing the bounds of dropout | Code | 0
Adversarial Training of Word2Vec for Basket Completion |  | 0
Character-based Neural Networks for Sentence Pair Modeling | Code | 0
A Simple Cache Model for Image Recognition | Code | 0
Numeracy for Language Models: Evaluating and Improving their Ability to Predict Numbers | Code | 0
SemStyle: Learning to Generate Stylised Image Captions using Unaligned Text | Code | 0
A Context-based Approach for Dialogue Act Recognition using Simple Recurrent Neural Networks | Code | 0
Contextual Augmentation: Data Augmentation by Words with Paradigmatic Relations | Code | 0
A Comparison of Modeling Units in Sequence-to-Sequence Speech Recognition with the Transformer on Mandarin Chinese |  | 0
Continuous Learning in a Hierarchical Multiscale Neural Network |  | 0
Building Language Models for Text with Named Entities | Code | 0
Deep RNNs Encode Soft Hierarchical Syntax |  | 0
WISER: A Semantic Approach for Expert Finding in Academia based on Entity Linking |  | 0
Improved training of end-to-end attention models for speech recognition | Code | 1
Polite Dialogue Generation Without Parallel Data | Code | 1
Exploring Hyper-Parameter Optimization for Neural Machine Translation on GPU Architectures |  | 0
Disentangling Language and Knowledge in Task-Oriented Dialogs | Code | 0
Noisin: Unbiased Regularization for Recurrent Neural Networks |  | 0
Modeling infant segmentation of two morphologically diverse languages |  | 0
Open ASR for Icelandic: Resources and a Baseline System |  | 0
Portable Spelling Corrector for a Less-Resourced Language: Amharic |  | 0
Preparation and Usage of Xhosa Lexicographical Data for a Multilingual, Federated Environment | Code | 0
Modeling Northern Haida Verb Morphology |  | 0
TF-LM: TensorFlow-based Language Modeling Toolkit | Code | 0
Towards Language Technology for Mi'kmaq |  | 0

Page 321 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 |  | Unverified
2 | GRU | Validation perplexity | 53.78 |  | Unverified
3 | LSTM | Validation perplexity | 52.73 |  | Unverified
4 | LSTM | Test perplexity | 48.7 |  | Unverified
5 | Temporal CNN | Test perplexity | 45.2 |  | Unverified
6 | TCN | Test perplexity | 45.19 |  | Unverified
7 | GCNN-8 | Test perplexity | 44.9 |  | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 |  | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 |  | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 |  | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 |  | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 |  | Unverified
4 | R-Transformer | Test perplexity | 84.38 |  | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 |  | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 |  | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 |  | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 |  | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 |  | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 |  | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 |  | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 |  | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 |  | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 |  | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 |  | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 |  | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 |  | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 |  | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 |  | Unverified
2 | OPT 125M | Test perplexity | 32.26 |  | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 |  | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 |  | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 |  | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 |  | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 |  | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 |  | Unverified
9 | Transformer 125M | Test perplexity | 10.7 |  | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 |  | Unverified
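
The tables above report two standard language-modeling metrics: perplexity (word-level benchmarks) and Bit per Character (character-level benchmarks). Both are transforms of the model's average negative log-likelihood on the test set, as the sketch below shows; the per-token probabilities used here are made up purely for illustration.

```python
import math

# Both metrics derive from the average negative log-likelihood over N tokens:
#
#   perplexity = exp(-(1/N) * sum_i ln p_i)
#   BPC        = -(1/N) * sum_i log2 p_i
#
# so for a character-level model, 2**BPC is the per-character perplexity.

def perplexity(token_probs):
    """exp of the mean negative natural-log-likelihood per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

def bits_per_character(char_probs):
    """Mean negative log2-likelihood per character."""
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

# Hypothetical per-token probabilities, for illustration only.
probs = [0.1, 0.02, 0.3, 0.05]
print(perplexity(probs))               # ~13.51
print(bits_per_character(probs))       # ~3.76
print(2 ** bits_per_character(probs))  # equals the perplexity: ~13.51
```

Lower is better for both metrics: a model that assigned every token probability 1 would score a perplexity of 1 and a BPC of 0.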