SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language: it assigns probabilities to sequences of words. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly transformers trained on large datasets, frequently text scraped from the public internet. They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
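
To make the contrast with n-gram models concrete, the sketch below trains a word bigram model with add-one smoothing on a toy corpus. This is a minimal illustration of the statistical approach, not code from any system listed on this page; the corpus and function names are made up.

    from collections import Counter

    def train_bigram_lm(tokens):
        # Count unigrams and adjacent word pairs from a token stream.
        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))
        return unigrams, bigrams

    def bigram_prob(unigrams, bigrams, prev, word):
        # P(word | prev) with add-one (Laplace) smoothing, so unseen
        # pairs still receive nonzero probability.
        vocab_size = len(unigrams)
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    corpus = "the cat sat on the mat the cat ate".split()
    unigrams, bigrams = train_bigram_lm(corpus)
    print(bigram_prob(unigrams, bigrams, "the", "cat"))  # seen pair: ~0.33
    print(bigram_prob(unigrams, bigrams, "the", "sat"))  # unseen pair: ~0.11

Transformer-based LLMs replace these count tables with a neural network that conditions on the whole preceding context rather than a fixed window.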

Papers

Showing 12351–12400 of 17610 papers

Title | Status | Hype
Improving Code-switched ASR with Linguistic Information | - | 0
Improving Event Temporal Relation Classification via Auxiliary Label-Aware Contrastive Learning | - | 0
A Domain Knowledge Enhanced Pre-Trained Language Model for Vertical Search: Case Study on Medicinal Products | Code | 0
Automatic Nominalization of Clauses | - | 0
CompLx@SMM4H’22: In-domain pretrained language models for detection of adverse drug reaction mentions in English tweets | - | 0
An Exploration of Prompt-Based Zero-Shot Relation Extraction Method | - | 0
Automatic Detection of Borrowings in Low-Resource Languages of the Caucasus: Andic branch | - | 0
Data Synthesis and Iterative Refinement for Neural Semantic Parsing without Annotated Logical Forms | - | 0
Does Meta-learning Help mBERT for Few-shot Question Generation in a Cross-lingual Transfer Setting for Indic Languages? | - | 0
ARGUABLY@SMM4H’22: Classification of Health Related Tweets using Ensemble, Zero-Shot and Fine-Tuned Language Model | - | 0
Don’t Judge a Language Model by Its Last Layer: Contrastive Learning with Layer-Wise Attention Pooling | Code | 0
A Simple Model for Distantly Supervised Relation Extraction | - | 0
Can We Train a Language Model Inside an End-to-End ASR Model? - Investigating Effective Implicit Language Modeling | - | 0
Can We Guide a Multi-Hop Reasoning Language Model to Incrementally Learn at Each Single-Hop? | Code | 0
A Closer Look at Parameter Contributions When Training Neural Language and Translation Models | - | 0
A-TIP: Attribute-aware Text Infilling via Pre-trained Language Model | - | 0
ConnPrompt: Connective-cloze Prompt Learning for Implicit Discourse Relation Recognition | Code | 0
Asymmetric Mutual Learning for Multi-source Unsupervised Sentiment Adaptation with Dynamic Feature Network | - | 0
Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution | Code | 0
SpeechLM: Enhanced Speech Pre-Training with Unpaired Textual Data | - | 0
Augmentation Invariant Discrete Representation for Generative Spoken Language Modeling | - | 0
Learning by Distilling Context | - | 0
What Makes Pre-trained Language Models Better Zero-shot Learners? | Code | 0
Unpacking Large Language Models with Conceptual Consistency | - | 0
Toward Trustworthy Neural Program Synthesis | - | 0
Few-shot Text Classification with Dual Contrastive Consistency | - | 0
Bidirectional Language Models Are Also Few-shot Learners | - | 0
Repairing Bugs in Python Assignments Using Large Language Models | - | 0
Spotlight: Mobile UI Understanding using Vision-Language Models with a Focus | - | 0
Supervised Contrastive Learning as Multi-Objective Optimization for Fine-Tuning Large Pre-trained Language Models | - | 0
Keyword Extraction from Short Texts with a Text-To-Text Transfer Transformer | - | 0
Who is GPT-3? An Exploration of Personality, Values and Demographics | Code | 0
Breaking Time Invariance: Assorted-Time Normalization for RNNs | Code | 0
Improving alignment of dialogue agents via targeted human judgements | - | 0
ArNLI: Arabic Natural Language Inference for Entailment and Contradiction Detection | Code | 0
Entailment Semantics Can Be Extracted from an Ideal Language Model | Code | 0
End-to-End Lyrics Recognition with Self-supervised Learning | - | 0
Paraphrasing Is All You Need for Novel Object Captioning | - | 0
Learning Chess With Language Models and Transformers | - | 0
Moral Mimicry: Large Language Models Produce Moral Rationalizations Tailored to Political Identity | - | 0
LGDN: Language-Guided Denoising Network for Video-Language Modeling | - | 0
Whodunit? Learning to Contrast for Authorship Attribution | Code | 0
Adaptation of domain-specific transformer models with text oversampling for sentiment analysis of social media posts on Covid-19 vaccines | Code | 0
DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation | - | 0
Deep Learning Based Page Creation for Improving E-Commerce Organic Search Traffic | - | 0
Prompting for a conversation: How to control a dialog model? | - | 0
Semantically Consistent Data Augmentation for Neural Machine Translation via Conditional Masked Language Model | Code | 0
WeLM: A Well-Read Pre-trained Language Model for Chinese | - | 0
Relaxed Attention for Transformer Models | - | 0
LINGUIST: Language Model Instruction Tuning to Generate Annotated Utterances for Intent Classification and Slot Tagging | - | 0
Page 248 of 353

Benchmark Results

Each table below is a separate leaderboard; for both perplexity and bits per character, lower is better.

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
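
For reference on the metric used above: test perplexity is the exponential of the average per-token negative log-likelihood on the test set. A minimal sketch follows; it assumes you already have the probability the model assigned to each test token, and the function name is illustrative.

    import math

    def perplexity(token_probs):
        # exp of the mean negative log-likelihood over the test tokens.
        nll = [-math.log(p) for p in token_probs]
        return math.exp(sum(nll) / len(nll))

    # A model that assigns probability 0.1 to every token has perplexity 10:
    # on average it is as uncertain as a uniform choice among 10 tokens.
    print(perplexity([0.1] * 5))  # approximately 10.0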
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | - | Unverified
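
Bits per character is the analogous metric for character-level models: the average negative base-2 log-probability per character, so a BPC of 1.22 means the model needs about 1.22 bits to encode each character. A minimal sketch under the same assumption as above (per-character probabilities are given; the function name is illustrative):

    import math

    def bits_per_character(char_probs):
        # Average negative base-2 log-probability per character.
        return -sum(math.log2(p) for p in char_probs) / len(char_probs)

    print(bits_per_character([0.5] * 8))  # 1.0 bit per character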
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified