SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
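
As a concrete illustration of the purely statistical n-gram models mentioned above, here is a minimal sketch of a word bigram (n = 2) language model with add-one smoothing. The toy corpus and function names are illustrative only and do not come from any paper listed on this page.

```python
from collections import Counter

def train_bigram_lm(corpus):
    """Estimate P(word | previous word) from a list of tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens[:-1])                  # counts of each context word
        bigrams.update(zip(tokens[:-1], tokens[1:]))  # counts of adjacent word pairs
    vocab = set(unigrams) | {"</s>"}
    def prob(prev, word):
        # Add-one (Laplace) smoothing so unseen pairs get nonzero probability.
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))
    return prob

# Toy corpus (illustrative only).
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
prob = train_bigram_lm(corpus)
print(prob("the", "cat"))  # 0.25: "the cat" seen once out of two "the" contexts
print(prob("the", "sat"))  # 0.125: unseen pair, probability from smoothing alone
```

Neural language models replace these count-based estimates with learned distributions; the RNN- and transformer-based systems benchmarked below are the successors described above.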

Papers

Showing 13601–13650 of 17610 papers

Title | Status | Hype
What does BERT Learn from Arabic Machine Reading Comprehension Datasets? | - | 0
Introducing A large Tunisian Arabizi Dialectal Dataset for Sentiment Analysis | - | 0
Quranic Verses Semantic Relatedness Using AraBERT | - | 0
Multilingual Slavic Named Entity Recognition | - | 0
BERTić - The Transformer Language Model for Bosnian, Croatian, Montenegrin and Serbian | - | 0
Benchmarking Pre-trained Language Models for Multilingual NER: TraSpaS at the BSNLP2021 Shared Task | Code | 0
Efficient Unsupervised NMT for Related Languages with Cross-Lingual Language Models and Fidelity Objectives | - | 0
DLRG@DravidianLangTech-EACL2021: Transformer based approach for Offensive Language Identification on Code-Mixed Tamil | - | 0
MUCS@LT-EDI-EACL2021: CoHope-Hope Speech Detection for Equality, Diversity, and Inclusion in Code-Mixed Texts | - | 0
TEAM HUB@LT-EDI-EACL2021: Hope Speech Detection Based On Pre-trained Language Model | - | 0
Maoqin @ DravidianLangTech-EACL2021: The Application of Transformer-Based Model | - | 0
Pseudo-Label Guided Unsupervised Domain Adaptation of Contextual Embeddings | - | 0
Story Centaur: Large Language Model Few Shot Learning as a Creative Writing Tool | - | 0
Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models | - | 0
On the Computational Modelling of Michif Verbal Morphology | - | 0
PunKtuator: A Multilingual Punctuation Restoration System for Spoken and Written Text | Code | 0
Is Supervised Syntactic Parsing Beneficial for Language Understanding Tasks? An Empirical Investigation | - | 0
Structural Encoding and Pre-training Matter: Adapting BERT for Table-Based Fact Verification | - | 0
Keep Learning: Self-supervised Meta-learning for Learning from Inference | - | 0
NewsMTSC: A Dataset for (Multi-)Target-dependent Sentiment Classification in Political News Articles | Code | 1
Globalizing BERT-based Transformer Architectures for Long Document Summarization | - | 0
Does She Wink or Does She Nod? A Challenging Benchmark for Evaluating Word Understanding of Language Models | - | 0
Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference | - | 0
Detecting over/under-translation errors for determining adequacy in human translations | - | 0
CURIE: An Iterative Querying Approach for Reasoning About Situations | Code | 0
Expressive Text-to-Speech using Style Tag | - | 0
Canonical and Surface Morphological Segmentation for Nguni Languages | Code | 0
Low-Resource Language Modelling of South African Languages | Code | 0
UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training | - | 0
Multi-Encoder Learning and Stream Fusion for Transformer-Based End-to-End Automatic Speech Recognition | - | 0
Few-shot learning through contextual data augmentation | Code | 0
AfriKI: Machine-in-the-Loop Afrikaans Poetry Generation | - | 0
Self-supervised Image-text Pre-training With Mixed Data In Chest X-rays | - | 0
XRJL-HKUST at SemEval-2021 Task 4: WordNet-Enhanced Dual Multi-head Co-Attention for Reading Comprehension of Abstract Meaning | Code | 0
Entity Context Graph: Learning Entity Representations from Semi-Structured Textual Sources on the Web | - | 0
Retraining DistilBERT for a Voice Shopping Assistant by Using Universal Dependencies | - | 0
[Re] Rigging the Lottery: Making All Tickets Winners | Code | 1
BART based semantic correction for Mandarin automatic speech recognition system | - | 0
Correcting Automated and Manual Speech Transcription Errors using Warped Language Models | - | 0
K-XLNet: A General Method for Combining Explicit Knowledge with Language Model Pretraining | - | 0
Visual Grounding Strategies for Text-Only Natural Language Processing | - | 0
An Approach to Improve Robustness of NLP Systems against ASR Errors | - | 0
FastMoE: A Fast Mixture-of-Expert Training System | Code | 2
Finetuning Pretrained Transformers into RNNs | Code | 1
Low-Resource Machine Translation Training Curriculum Fit for Low-Resource Languages | - | 0
Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2 | - | 0
Hallucination of speech recognition errors with sequence to sequence learning | - | 0
Variable Name Recovery in Decompiled Binary Code using Constrained Masked Language Modeling | - | 0
Nutri-bullets: Summarizing Health Studies by Composing Segments | Code | 0
Attribute Alignment: Controlling Text Generation from Pre-trained Language Models | Code | 0
Page 273 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
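
For reference, the test perplexity reported in these tables is the exponential of the mean negative log-likelihood a model assigns to each test token, so lower is better. A minimal sketch, assuming per-token probabilities are already available from some model (the values below are hypothetical):

```python
import math

def perplexity(log_probs):
    """exp of the mean negative natural-log probability per test token."""
    return math.exp(-sum(log_probs) / len(log_probs))

# Hypothetical per-token log-probabilities produced by some language model:
log_probs = [math.log(0.1), math.log(0.05), math.log(0.2)]
print(perplexity(log_probs))  # ~10.0: same as if every token had probability 0.1
```

The same computation underlies the remaining perplexity tables below.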

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | - | Unverified
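
Bits per Character (BPC), the metric in the table above, is the average cross-entropy per character in base 2: how many bits a character-level model needs on average to encode each character, so again lower is better. A minimal sketch with hypothetical per-character probabilities:

```python
import math

def bits_per_character(char_probs):
    """Mean negative log2-probability per character (cross-entropy in bits)."""
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

# Hypothetical probabilities a character-level model assigns to each character:
char_probs = [0.5, 0.25, 0.5, 0.125]
print(bits_per_character(char_probs))  # (1 + 2 + 1 + 3) / 4 = 1.75 BPC
```

On this reading, the best entry above (1.22 BPC) needs about 1.22 bits per character on average.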

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified