SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
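
The word n-gram models mentioned above can be illustrated in a few lines of code. Below is a minimal sketch of a bigram (n = 2) language model with add-one smoothing; the toy corpus and function names are illustrative placeholders, not drawn from any paper listed on this page.

```python
from collections import Counter
import math

# Minimal word-bigram language model with add-one (Laplace) smoothing.
# The toy corpus below is an illustrative placeholder.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigram_counts)

def bigram_prob(prev_word, word):
    """P(word | prev_word) with add-one smoothing over the toy vocabulary."""
    return (bigram_counts[(prev_word, word)] + 1) / (unigram_counts[prev_word] + vocab_size)

def sequence_log_prob(tokens):
    """Log-probability of a token sequence under the bigram model."""
    return sum(math.log(bigram_prob(p, w)) for p, w in zip(tokens, tokens[1:]))

print(bigram_prob("the", "cat"))                       # P("cat" | "the")
print(sequence_log_prob("the cat sat on the rug".split()))
```

Neural language models (RNNs, and now transformer-based LLMs) replace these count-based conditional probabilities with learned, parameterized distributions over much longer contexts.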

Papers

Showing 4201-4225 of 17610 papers

Title | Status | Hype
CKERC : Joint Large Language Models with Commonsense Knowledge for Emotion Recognition in Conversation |  | 0
CL3DOR: Contrastive Learning for 3D Large Multimodal Models via Odds Ratio on High-Resolution Point Clouds |  | 0
CLaC-BP at SemEval-2021 Task 8: SciBERT Plus Rules for MeasEval |  | 0
CLaC @ QATS: Quality Assessment for Text Simplification |  | 0
ClaimBrush: A Novel Framework for Automated Patent Claim Refinement Based on Large Language Models |  | 0
Claim Verification using a Multi-GAN based Model |  | 0
A Context-Aware Approach for Enhancing Data Imputation with Pre-trained Language Models |  | 0
CLAIR: Evaluating Image Captions with Large Language Models |  | 0
CLAMP: Contrastive LAnguage Model Prompt-tuning |  | 0
CLAM: Selective Clarification for Ambiguous Questions with Generative Language Models |  | 0
CLaM-TTS: Improving Neural Codec Language Model for Zero-Shot Text-to-Speech |  | 0
CLAP-ART: Automated Audio Captioning with Semantic-rich Audio Representation Tokenizer |  | 0
Clarifying Implicit and Underspecified Phrases in Instructional Text |  | 0
Clarity ChatGPT: An Interactive and Adaptive Processing System for Image Restoration and Enhancement |  | 0
CLaSP: Learning Concepts for Time-Series Signals from Natural Language Supervision |  | 0
Class-Based Language Modeling for Translating into Morphologically Rich Languages |  | 0
Class-based LSTM Russian Language Model with Linguistic Information |  | 0
Class Enhancement Losses with Pseudo Labels for Zero-shot Semantic Segmentation |  | 0
Classification as Decoder: Trading Flexibility for Control in Medical Dialogue |  | 0
Classification Error Bound for Low Bayes Error Conditions in Machine Learning |  | 0
Classification, Extraction, and Normalization : CASIA_Unisound Team at the Social Media Mining for Health 2021 Shared Tasks |  | 0
Classification of Geological Borehole Descriptions Using a Domain Adapted Large Language Model |  | 0
Classification of Tweets Self-reporting Adverse Pregnancy Outcomes and Potential COVID-19 Cases Using RoBERTa Transformers |  | 0
Classifying ASR Transcriptions According to Arabic Dialect |  | 0
Classifying complex documents: comparing bespoke solutions to large language models |  | 0
Page 169 of 705

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 |  | Unverified
2 | GRU | Validation perplexity | 53.78 |  | Unverified
3 | LSTM | Validation perplexity | 52.73 |  | Unverified
4 | LSTM | Test perplexity | 48.7 |  | Unverified
5 | Temporal CNN | Test perplexity | 45.2 |  | Unverified
6 | TCN | Test perplexity | 45.19 |  | Unverified
7 | GCNN-8 | Test perplexity | 44.9 |  | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 |  | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 |  | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 |  | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 |  | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 |  | Unverified
4 | R-Transformer | Test perplexity | 84.38 |  | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 |  | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 |  | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 |  | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 |  | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 |  | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 |  | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 |  | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 |  | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 |  | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 |  | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 |  | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 |  | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 |  | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 |  | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 |  | Unverified
2 | OPT 125M | Test perplexity | 32.26 |  | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 |  | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 |  | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 |  | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 |  | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 |  | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 |  | Unverified
9 | Transformer 125M | Test perplexity | 10.7 |  | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 |  | Unverified
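
Both metrics in the tables above are functions of a model's average negative log-likelihood on held-out text: test perplexity is the exponential of the per-token cross-entropy measured in nats, while Bit per Character (BPC) is the per-character cross-entropy measured in bits. A minimal sketch, using made-up log-probabilities in place of a real model's outputs:

```python
import math

# `token_log_probs` stands in for the natural-log probabilities a language model
# assigns to each token of a held-out test set; the numbers are made up.
token_log_probs = [-4.2, -1.3, -6.8, -2.5, -3.1]

# Test perplexity: exp of the average negative log-likelihood per token (in nats).
avg_nll_nats = -sum(token_log_probs) / len(token_log_probs)
perplexity = math.exp(avg_nll_nats)

# Bit per Character (BPC): average negative log2-likelihood per character.
# Here we assume the same log-probabilities cover a 20-character string.
num_characters = 20
bpc = -sum(lp / math.log(2) for lp in token_log_probs) / num_characters

print(f"test perplexity = {perplexity:.2f}, BPC = {bpc:.3f}")
```

Lower is better for both metrics.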