SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model.

Source: Wikipedia
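
The word n-gram models mentioned above estimate the probability of each word from counts of short word sequences in a training corpus. Below is a minimal sketch of a bigram model with add-one (Laplace) smoothing; this is a generic textbook construction, not the implementation behind any paper listed on this page, and the toy corpus is invented for the example.

```python
from collections import Counter

# Minimal word-bigram language model with add-one (Laplace) smoothing.
# A generic illustration of the "purely statistical" n-gram models
# described above; the corpus below is a toy example.

def train_bigram(corpus):
    """corpus: a list of tokenized sentences, e.g. [["the", "cat", "sat"], ...]"""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]  # sentence boundary markers
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)

    def prob(prev, word):
        # Add-one smoothing keeps unseen bigrams from getting probability 0.
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    return prob

def sentence_prob(prob, sentence):
    """Probability of a sentence as a product of bigram probabilities."""
    tokens = ["<s>"] + sentence + ["</s>"]
    p = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        p *= prob(prev, word)
    return p

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
prob = train_bigram(corpus)
print(prob("the", "cat"))                          # P(cat | the)
print(sentence_prob(prob, ["the", "cat", "sat"]))  # P(<s> the cat sat </s>)
```

Neural models (RNNs, and later transformers) replace these count tables with learned parameters, which lets probability mass generalize to word sequences never seen in training.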

Papers

Showing 15301–15350 of 17610 papers

Title | Status | Hype
Restricted Recurrent Neural Networks | Code | 0
Fine-tuning BERT for Joint Entity and Relation Extraction in Chinese Medical Text | Code | 0
Latent Relation Language Models | | 0
WikiCREM: A Large Unsupervised Corpus for Coreference Resolution | Code | 0
LXMERT: Learning Cross-Modality Encoder Representations from Transformers | Code | 1
Universal Adversarial Triggers for Attacking and Analyzing NLP | Code | 0
Encoder-Agnostic Adaptation for Conditional Language Generation | Code | 0
Question Answering based Clinical Text Structuring Using Pre-trained Language Model | | 0
Musical Rhythm Transcription Based on Bayesian Piece-Specific Score Models Capturing Repetitions | | 0
Parsimonious Morpheme Segmentation with an Application to Enriching Word Embeddings | | 0
EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation | Code | 0
Language Features Matter: Effective Language Representations for Vision-Language Tasks | | 0
Leveraging Sentence Similarity in Natural Language Generation: Improving Beam Search using Range Voting | | 0
Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training | | 0
Incorporating Word and Subword Units in Unsupervised Machine Translation Using Language Model Rescoring | | 0
SenseBERT: Driving Some Sense into BERT | | 0
Visualizing and Understanding the Effectiveness of BERT | | 0
Entity-aware ELMo: Learning Contextual Entity Representation for Entity Disambiguation | | 0
SG-Net: Syntax-Guided Machine Reading Comprehension | Code | 0
On The Evaluation of Machine Translation Systems Trained With Back-Translation | Code | 0
StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | | 0
Predicting 3D Human Dynamics from Video | Code | 0
Incorporating Relation Knowledge into Commonsense Reading Comprehension with Multi-task Learning | | 0
An Effective Domain Adaptive Post-Training Method for BERT in Response Selection | Code | 0
TAPER: Time-Aware Patient EHR Representation | Code | 0
Multi-modality Latent Interaction Network for Visual Question Answering | | 0
Unsupervised Stemming based Language Model for Telugu Broadcast News Transcription | | 0
VisualBERT: A Simple and Performant Baseline for Vision and Language | Code | 1
On the Variance of the Adaptive Learning Rate and Beyond | Code | 1
Clustering of Deep Contextualized Representations for Summarization of Biomedical Texts | Code | 0
Exploring Neural Net Augmentation to BERT for Question Answering on SQUAD 2.0 | | 0
Self-Knowledge Distillation in Natural Language Processing | | 0
Deep learning languages: a key fundamental shift from probabilities to weights? | | 0
GTCOM Neural Machine Translation Systems for WMT19 | | 0
CUED@WMT19:EWC&LMs | | 0
DBMS-KU Interpolation for WMT19 News Translation Task | | 0
The RWTH Aachen University Machine Translation Systems for WMT 2019 | | 0
The LMU Munich Unsupervised Machine Translation System for WMT19 | | 0
Visualizing RNN States with Predictive Semantic Encodings | | 0
ZCU-NLP at MADAR 2019: Recognizing Arabic Dialects | | 0
The SMarT Classifier for Arabic Fine-Grained Dialect Identification | | 0
JHU System Description for the MADAR Arabic Dialect Identification Shared Task | | 0
Simple Construction of Mixed-Language Texts for Vocabulary Learning | | 0
KU_ai at MEDIQA 2019: Domain-specific Pre-training and Transfer Learning for Medical NLI | | 0
Team JUST at the MADAR Shared Task on Arabic Fine-Grained Dialect Identification | | 0
Arabic Dialect Identification for Travel and Twitter Text | | 0
hULMonA: The Universal Language Model in Arabic | Code | 0
Finding Hierarchical Structure in Neural Stacks Using Unsupervised Parsing | Code | 0
ArbDialectID at MADAR Shared Task 1: Language Modelling and Ensemble Learning for Fine Grained Arabic Dialect Identification | | 0
An LSTM Adaptation Study of (Un)grammaticality | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified
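
Two metrics appear in the leaderboards above: perplexity (reported by the word-level benchmarks) and bits per character (BPC, a character-level metric). Both are monotone transformations of the model's average negative log-likelihood on held-out text, so lower is better in every table. The sketch below shows the arithmetic; the log-probabilities are made-up numbers for illustration, not taken from any listed model.

```python
import math

# Perplexity and bits per character (BPC) from per-token log-probabilities.
# The inputs here are invented for illustration; real evaluations average
# over an entire held-out corpus.

def perplexity(log_probs):
    """exp of the average negative natural-log likelihood per token."""
    return math.exp(-sum(log_probs) / len(log_probs))

def bits_per_character(log_probs):
    """Average negative log2 likelihood per character (character-level models)."""
    return -sum(log_probs) / len(log_probs) / math.log(2)

token_log_probs = [math.log(p) for p in (0.1, 0.25, 0.05, 0.2)]
print(perplexity(token_log_probs))          # ~7.95
print(bits_per_character(token_log_probs))  # ~2.99, i.e. log2(perplexity)
```

For a character-level model the two are directly related: per-character perplexity equals 2^BPC, so for example the 1.24 BPC entry above corresponds to a per-character perplexity of about 2.36.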