SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language: it assigns probabilities to sequences of words. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
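
The trajectory above runs from counting-based models to neural ones. As a concrete reference point for the oldest family, here is a minimal sketch of a word bigram (n = 2) language model with add-alpha smoothing; the toy corpus, the smoothing constant, and the function names are illustrative assumptions, not taken from any paper listed below.

```python
from collections import defaultdict

def train_bigram_lm(tokens, alpha=1.0):
    """Train a bigram LM by counting; alpha is an assumed add-alpha smoothing constant."""
    unigrams = defaultdict(int)   # counts of each word as a left context
    bigrams = defaultdict(int)    # counts of adjacent word pairs
    for prev, curr in zip(tokens, tokens[1:]):
        unigrams[prev] += 1
        bigrams[(prev, curr)] += 1
    vocab = set(tokens)

    def prob(curr, prev):
        # P(curr | prev) with add-alpha smoothing over the vocabulary.
        return (bigrams[(prev, curr)] + alpha) / (unigrams[prev] + alpha * len(vocab))

    return prob

# Toy usage on a hypothetical nine-word corpus.
tokens = "the cat sat on the mat the cat ran".split()
prob = train_bigram_lm(tokens)
print(prob("cat", "the"))  # P("cat" | "the") = (2+1)/(3+6) ≈ 0.33
```

The neural models in the benchmark tables below replace this count table with a learned network, but they are scored the same way: by the probability they assign to held-out text.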

Papers

Showing 11201–11250 of 17610 papers

Title | Status | Hype
Bayesian Prompt Learning for Image-Language Model Generalization | Code | 1
Towards Improving Faithfulness in Abstractive Summarization | Code | 1
Knowledge Unlearning for Mitigating Privacy Risks in Language Models | Code | 1
The Surprising Computational Power of Nondeterministic Stack RNNs | Code | 1
Less is More: Task-aware Layer-wise Distillation for Language Model Compression | Code | 1
When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment | Code | 1
The Effectiveness of Masked Language Modeling and Adapters for Factual Knowledge Injection | Code | 0
SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model | Code | 1
Enriching Vulnerability Reports Through Automated and Augmented Description Summarization | – | 0
ContraCLM: Contrastive Learning For Causal Language Model | Code | 1
A Non-monotonic Self-terminating Language Model | Code | 0
LASP: Text-to-Text Optimization for Language-Aware Soft Prompting of Vision & Language Models | Code | 1
The boundaries of meaning: a case study in neural machine translation | – | 0
Predictive Text for Agglutinative and Polysynthetic Languages | – | 0
mattica@SMM4H’22: Leveraging sentiment for stance & premise joint learning | – | 0
PLN CMM at SocialDisNER: Improving Detection of Disease Mentions in Tweets by Using Document-Level Features | – | 0
The Only Chance to Understand: Machine Translation of the Severely Endangered Low-resource Languages of Eurasia | – | 0
The Role of Context in Detecting the Target of Hate Speech | – | 0
Team AINLPML @ MuP in SDP 2021: Scientific Document Summarization by End-to-End Extractive and Abstractive Approach | – | 0
Neural-Guided Program Synthesis of Information Extraction Rules Using Self-Supervision | – | 0
The COVID That Wasn’t: Counterfactual Journalism Using GPT | – | 0
KUL@SMM4H’22: Template Augmented Adaptive Pre-training for Tweet Classification | – | 0
PingAnTech at SMM4H task1: Multiple pre-trained model approaches for Adverse Drug Reactions | – | 0
Using Language Models to Improve Rule-based Linguistic Annotation of Modern Historical Japanese Corpora | Code | 0
Transfer Learning Improves French Cross-Domain Dialect Identification: NRC @ VarDial 2022 | – | 0
A Japanese Masked Language Model for Academic Domain | Code | 0
CompLx@SMM4H’22: In-domain pretrained language models for detection of adverse drug reaction mentions in English tweets | – | 0
ARGUABLY@SMM4H’22: Classification of Health Related Tweets using Ensemble, Zero-Shot and Fine-Tuned Language Model | – | 0
Automatic Detection of Borrowings in Low-Resource Languages of the Caucasus: Andic branch | – | 0
Improving Code-switched ASR with Linguistic Information | – | 0
Asymmetric Mutual Learning for Multi-source Unsupervised Sentiment Adaptation with Dynamic Feature Network | – | 0
Can We Guide a Multi-Hop Reasoning Language Model to Incrementally Learn at Each Single-Hop? | Code | 0
Can We Train a Language Model Inside an End-to-End ASR Model? - Investigating Effective Implicit Language Modeling | – | 0
Can Data Diversity Enhance Learning Generalization? | – | 0
Automatic Nominalization of Clauses | – | 0
A Closer Look at Parameter Contributions When Training Neural Language and Translation Models | – | 0
How about Time? Probing a Multilingual Language Model for Temporal Relations | Code | 0
A Simple Model for Distantly Supervised Relation Extraction | – | 0
A-TIP: Attribute-aware Text Infilling via Pre-trained Language Model | – | 0
A Domain Knowledge Enhanced Pre-Trained Language Model for Vertical Search: Case Study on Medicinal Products | Code | 0
ConnPrompt: Connective-cloze Prompt Learning for Implicit Discourse Relation Recognition | Code | 0
BECEL: Benchmark for Consistency Evaluation of Language Models | Code | 1
Event Causality Identification via Derivative Prompt Joint Learning | Code | 1
Don’t Judge a Language Model by Its Last Layer: Contrastive Learning with Layer-Wise Attention Pooling | Code | 0
DocQueryNet: Value Retrieval with Arbitrary Queries for Form-like Documents | Code | 1
Does Meta-learning Help mBERT for Few-shot Question Generation in a Cross-lingual Transfer Setting for Indic Languages? | – | 0
Deciphering and Characterizing Out-of-Vocabulary Words for Morphologically Rich Languages | – | 0
Knowledge Distillation with Reptile Meta-Learning for Pretrained Language Model Compression | Code | 0
Speaker Clustering in Textual Dialogue with Pairwise Utterance Relation and Cross-corpus Dialogue Act Supervision | – | 0
NSP-BERT: A Prompt-based Few-Shot Learner through an Original Pre-training Task —— Next Sentence Prediction | – | 0
Page 225 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified
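
For reading these tables: perplexity is the exponentiated average negative log-likelihood per token, and bits per character is the same negative log-likelihood expressed in base 2 and averaged per character, so lower is better for both metrics. A minimal sketch of the conversion, assuming per-token natural-log probabilities are already available from some model (the variable and function names here are illustrative, not from any listed paper):

```python
import math

def perplexity(log_probs):
    """Perplexity from natural-log token probabilities: exp of the mean NLL."""
    nll = -sum(log_probs) / len(log_probs)
    return math.exp(nll)

def bits_per_character(log_probs, num_chars):
    """BPC: total negative log-likelihood converted to bits, averaged per character."""
    total_bits = -sum(log_probs) / math.log(2)
    return total_bits / num_chars

# Toy usage: three tokens, each assigned probability 0.1, over a 12-character string.
lp = [math.log(0.1)] * 3
print(perplexity(lp))              # 10.0
print(bits_per_character(lp, 12))  # ≈ 0.83 bits per character
```

Because both metrics are monotone transforms of the same average log-likelihood, rankings within a single table agree regardless of which of the two is reported; the choice mainly reflects whether the benchmark is word-level (perplexity) or character-level (BPC).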