
Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
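For concreteness, here is a minimal sketch (ours, not from the page) of the word n-gram approach mentioned above: a bigram model with add-one smoothing. The toy corpus and the helper bigram_prob are illustrative assumptions only.

```python
from collections import Counter

# Toy corpus (hypothetical); real n-gram models are trained on large text collections.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count unigrams (histories) and bigrams (history, next-word pairs).
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = set(corpus)

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

print(f"P(cat | the) = {bigram_prob('the', 'cat'):.3f}")  # seen bigram: higher probability
print(f"P(dog | sat) = {bigram_prob('sat', 'dog'):.3f}")  # unseen bigram: low but non-zero
```

The smoothing term is what keeps unseen word pairs from getting zero probability; neural models sidestep this sparsity problem entirely by sharing parameters across contexts.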

Papers

Showing 14651–14700 of 17610 papers

Title | Status | Hype
A Multimodal Educational Corpus of Oral Courses: Annotation, Analysis and Case Study | – | 0
Adaptation of Deep Bidirectional Transformers for Afrikaans Language | – | 0
GM-RKB WikiText Error Correction Task and Baselines | Code | 0
Embeddings for Named Entity Recognition in Geoscience Portuguese Literature | – | 0
Is Language Modeling Enough? Evaluating Effective Embedding Combinations | – | 0
SiBert: Enhanced Chinese Pre-trained Language Model with Sentence Insertion | Code | 1
Jamo Pair Encoding: Subcharacter Representation-based Extreme Korean Vocabulary Compression for Efficient Subword Tokenization | – | 0
No Data to Crawl? Monolingual Corpus Creation from PDF Files of Truly low-Resource Languages in Peru | – | 0
Stylometry in a Bilingual Setup | – | 0
Wiki-40B: Multilingual Language Model Dataset | – | 0
Improving the Language Model for Low-Resource ASR with Online Text Corpora | – | 0
Acoustic-Phonetic Approach for ASR of Less Resourced Languages Using Monolingual and Cross-Lingual Information | – | 0
Implementation of Supervised Training Approaches for Monolingual Word Sense Alignment: ACDH-CH System Description for the MWSA Shared Task at GlobaLex 2020 | – | 0
Aggression Identification in Social Media: a Transfer Learning Based Approach | – | 0
DNN-Based Multilingual Automatic Speech Recognition for Wolaytta using Oromo Speech | – | 0
Automatic Myanmar Image Captioning using CNN and LSTM-Based Language Model | – | 0
Building Language Models for Morphological Rich Low-Resource Languages using Data from Related Donor Languages: the Case of Uyghur | – | 0
Offensive language detection in Arabic using ULMFiT | Code | 1
On the Exploration of English to Urdu Machine Translation | – | 0
Neural Models for Predicting Celtic Mutations | – | 0
Speech Transcription Challenges for Resource Constrained Indigenous Language Cree | – | 0
The INCOMSLAV Platform: Experimental Website with Integrated Methods for Measuring Linguistic Distances and Asymmetries in Receptive Multilingualism | – | 0
Semi-supervised acoustic and language model training for English-isiZulu code-switched speech recognition | – | 0
IRIT at TRAC 2020 | – | 0
Multi-Task Learning using AraBert for Offensive Language Detection | – | 0
Language Models for Cloze Task Answer Generation in Russian | – | 0
POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training | Code | 1
Multi-scale Transformer Language Models | – | 0
From Zero to Hero: On the Limitations of Zero-Shot Cross-Lingual Transfer with Multilingual Transformers | – | 0
Exploring Pre-training with Alignments for RNN Transducer based End-to-End Speech Recognition | – | 0
HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training | Code | 1
AdapterFusion: Non-Destructive Task Composition for Transfer Learning | Code | 2
Style Variation as a Vantage Point for Code-Switching | – | 0
Selecting Informative Contexts Improves Language Model Finetuning | – | 0
Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment | Code | 0
Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT | Code | 0
Semi-Supervised Text Simplification with Back-Translation and Asymmetric Denoising Autoencoders | – | 0
Language Model Prior for Low-Resource Neural Machine Translation | Code | 1
Modelling Suspense in Short Stories as Uncertainty Reduction over Neural Representation | Code | 1
Learning Music Helps You Read: Using Transfer to Study Linguistic Structure in Language Models | Code | 1
Knowledge Injection into Dialogue Generation via Language Models | – | 0
Context based Text-generation using LSTM networks | – | 0
An Empirical Study of Pre-trained Transformers for Arabic Information Extraction | Code | 1
Few-Shot Learning for Opinion Summarization | Code | 1
APo-VAE: Text Generation in Hyperbolic Space | – | 0
Aspect-Controlled Neural Argument Generation | Code | 1
EnsembleGAN: Adversarial Learning for Retrieval-Generation Ensemble Model on Short-Text Conversation | – | 0
Rethinking Coherence Modeling: Synthetic vs. Downstream Tasks | – | 0
AMPERSAND: Argument Mining for PERSuAsive oNline Discussions | Code | 1
Template Guided Text Generation for Task-Oriented Dialogue | – | 0
Page 294 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified
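
Both metrics above are monotone transforms of a model's average negative log-likelihood on held-out text: perplexity exponentiates it (in nats, lower is better), while BPC expresses it in bits per character for character-level benchmarks. A minimal sketch of the arithmetic, using made-up per-token probabilities rather than output from any model in the tables:

```python
import math

# Hypothetical probabilities a model assigned to each token of a test text.
token_probs = [0.2, 0.1, 0.4, 0.05, 0.25]

# Average negative log-likelihood per token, in nats.
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)

# Perplexity: exp of the average NLL.
perplexity = math.exp(nll)

# Bits per character: the same NLL converted from nats to bits. For
# character-level benchmarks the "tokens" are characters, so this is BPC.
bpc = nll / math.log(2)

print(f"perplexity = {perplexity:.2f}, BPC = {bpc:.3f}")
```

Because both numbers are deterministic functions of the same log-likelihood, a lower perplexity always corresponds to a lower BPC on the same tokenization.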