SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model.

Source: Wikipedia
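
The purely statistical word n-gram model mentioned above is simple enough to sketch directly. Below is a minimal bigram (2-gram) language model in Python; the toy corpus, function names, and the add-one smoothing choice are illustrative assumptions, not taken from any system listed on this page.

```python
from collections import Counter
import math

def train_bigram(corpus):
    """Count unigrams and bigrams over a list of tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, word):
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(unigrams))

def sentence_logprob(unigrams, bigrams, sentence):
    """Log-probability of a sentence under the bigram model (in nats)."""
    tokens = ["<s>"] + sentence + ["</s>"]
    return sum(math.log(bigram_prob(unigrams, bigrams, prev, word))
               for prev, word in zip(tokens, tokens[1:]))

# Toy corpus, purely for illustration.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
unigrams, bigrams = train_bigram(corpus)
print(sentence_logprob(unigrams, bigrams, ["the", "dog", "ran"]))
```

Transformer LLMs replace these count tables with learned neural estimates of the same conditional distribution, but the evaluation idea is unchanged: score held-out text by its probability under the model.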

Papers

Showing 13101–13150 of 17610 papers

Title | Status | Hype
Extracting Semantics from Maintenance Records | - | 0
A Transformer-based Math Language Model for Handwritten Math Expression Recognition | - | 0
DEMix Layers: Disentangling Domains for Modular Language Modeling | Code | 1
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing | Code | 0
Mounting Video Metadata on Transformer-based Language Model for Open-ended Video Question Answering | - | 0
BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis | Code | 0
BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents | Code | 1
SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation | - | 0
IntenT5: Search Result Diversification using Causal Language Models | - | 0
Do Images really do the Talking? Analysing the significance of Images in Tamil Troll meme classification | Code | 0
Noisy Channel Language Model Prompting for Few-Shot Text Classification | Code | 1
Leveraging Commonsense Knowledge on Classifying False News and Determining Checkworthiness of Claims | - | 0
Language Model Evaluation in Open-ended Text Generation | - | 0
W2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training | Code | 3
Towards Zero-shot Language Modeling | - | 0
LadRa-Net: Locally-Aware Dynamic Re-read Attention Net for Sentence Semantic Matching | - | 0
Offensive Language and Hate Speech Detection with Deep Learning and Transfer Learning | - | 0
Sentence Semantic Regression for Text Generation | - | 0
StrucTexT: Structured Text Understanding with Multi-Modal Transformers | Code | 0
Deriving Disinformation Insights from Geolocalized Twitter Callouts | Code | 0
Knowledge Distillation from BERT Transformer to Speech Transformer for Intent Classification | Code | 1
FMMformer: Efficient and Flexible Transformer via Decomposed Near-field and Far-field Attention | - | 0
Finetuning Pretrained Transformers into Variational Autoencoders | Code | 1
Mitigating harm in language models with conditional-likelihood filtration | - | 0
Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification | Code | 1
Controlled Text Generation as Continuous Optimization with Multiple Constraints | Code | 1
Curriculum learning for language modeling | Code | 0
Exploiting BERT For Multimodal Target Sentiment Classification Through Input Space Translation | Code | 1
Your fairness may vary: Pretrained language model fairness in toxic text classification | - | 0
Large-Scale Differentially Private BERT | - | 0
LICHEE: Improving Language Model Pre-training with Multi-grained Tokenization | Code | 0
Is My Model Using The Right Evidence? Systematic Probes for Examining Evidence-Based Tabular Reasoning | - | 0
Analyzing Speaker Information in Self-Supervised Models to Improve Zero-Resource Speech Processing | Code | 0
Look Back Again: Dual Parallel Attention Network for Accurate and Robust Scene Text Recognition | Code | 0
Document-Grounded Goal-Oriented Dialogue Systems on Pre-Trained Language Model with Diverse Input Representation | - | 0
Entity and Evidence Guided Document-Level Relation Extraction | - | 0
Let’s be explicit about that: Distant supervision for implicit discourse relation classification via connective prediction | - | 0
Team “NoConflict” at CASE 2021 Task 1: Pretraining for Sentence-Level Protest Event Detection | - | 0
The University of Edinburgh’s Submission to the IWSLT21 Simultaneous Translation Task | - | 0
Personalized Response Generation with Tensor Factorization | - | 0
IBM MNLP IE at CASE 2021 Task 1: Multigranular and Multilingual Event Detection on Protest News | - | 0
Enhancing Language Generation with Effective Checkpoints of Pre-trained Language Model | - | 0
He is very intelligent, she is very beautiful? On Mitigating Social Biases in Language Modelling and Generation | - | 0
Probing Multi-modal Machine Translation with Pre-trained Language Model | - | 0
Multi-Lingual Question Generation with Language Agnostic Language Model | Code | 0
Small-Scale Cross-Language Authorship Attribution on Social Media Comments | - | 0
Text-in-Context: Token-Level Error Detection for Table-to-Text Generation | Code | 0
A Comparison of Sentence-Weighting Techniques for NMT | - | 0
Decoding, Fast and Slow: A Case Study on Balancing Trade-Offs in Incremental, Character-level Pragmatic Reasoning | - | 0
Controllable Sentence Simplification with a Unified Text-to-Text Transfer Transformer | Code | 1
Page 263 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
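
Both metrics in these tables are transformations of the model's average cross-entropy on held-out text, so a short worked sketch may help in reading them. The loss values below are made-up placeholders: test perplexity is the exponential of the mean per-word negative log-likelihood (in nats), and bits per character (BPC) is the mean per-character negative log-likelihood converted from nats to bits.

```python
import math

# Hypothetical per-token negative log-likelihoods (nats) from a model's
# forward pass over a test set; the numbers are illustrative only.
word_nll = [4.1, 3.2, 5.0, 2.7]        # one entry per word
char_nll = [0.9, 0.8, 1.1, 0.7, 0.85]  # one entry per character

# Test perplexity: exponentiated mean per-word cross-entropy.
perplexity = math.exp(sum(word_nll) / len(word_nll))

# Bits per character: mean per-character cross-entropy, nats -> bits.
bpc = (sum(char_nll) / len(char_nll)) / math.log(2)

print(f"perplexity = {perplexity:.2f}, BPC = {bpc:.3f}")
```

Lower is better for both. The two scales are related by base-2 exponentiation: the best BPC above, 1.22, corresponds to a per-character perplexity of 2^1.22 ≈ 2.33.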