SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.
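To make the contrast concrete, here is a minimal sketch of the oldest of these approaches: a word bigram (n = 2) language model with add-one smoothing, in plain Python. The toy corpus, the smoothing choice, and the function names are illustrative assumptions, not taken from anything on this page.

```python
from collections import Counter
import math

def train_bigram_lm(sentences):
    """Count unigrams and bigrams from tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        tokens = ["<s>"] + tokens + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, word):
    """P(word | prev) with add-one (Laplace) smoothing."""
    vocab_size = len(unigrams)
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

def sentence_logprob(unigrams, bigrams, tokens):
    """Log-probability of a sentence under the bigram model."""
    tokens = ["<s>"] + tokens + ["</s>"]
    return sum(
        math.log(bigram_prob(unigrams, bigrams, prev, word))
        for prev, word in zip(tokens, tokens[1:])
    )

# Toy corpus, purely for illustration.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
uni, bi = train_bigram_lm(corpus)
print(sentence_logprob(uni, bi, ["the", "cat", "ran"]))
```

A neural or transformer language model plays the same role, assigning probabilities to the next word given its context, but learns those probabilities from data rather than looking up smoothed counts.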

Source: Wikipedia

Papers

Showing 17051–17100 of 17610 papers

Title | Status | Hype
Evaluating and Optimizing Educational Content with Large Language Model Judgments | Code | 0
Improving Dialectal Slot and Intent Detection with Auxiliary Tasks: A Multi-Dialectal Bavarian Case Study | Code | 0
Blank Collapse: Compressing CTC emission for the faster decoding | Code | 0
Evaluating Biases in Context-Dependent Health Questions | Code | 0
An LSTM Adaptation Study of (Un)grammaticality | Code | 0
Generate then Refine: Data Augmentation for Zero-shot Intent Detection | Code | 0
BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies | Code | 0
Evaluating Class Membership Relations in Knowledge Graphs using Large Language Models | Code | 0
Evaluating Commonsense in Pre-trained Language Models | Code | 0
Improving (Dis)agreement Detection with Inductive Social Relation Information From Comment-Reply Interactions | Code | 0
Evaluating context-invariance in unsupervised speech representations | Code | 0
Evaluating Cultural Adaptability of a Large Language Model via Simulation of Synthetic Personas | Code | 0
Ankh: Optimized Protein Language Model Unlocks General-Purpose Modelling | Code | 0
Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness | Code | 0
How does the task complexity of masked pretraining objectives affect downstream performance? | Code | 0
Black-box language model explanation by context length probing | Code | 0
Introducing Aspects of Creativity in Automatic Poetry Generation | Code | 0
Evaluating Gender Bias in German Machine Translation | Code | 0
Induced Model Matching: How Restricted Models Can Help Larger Ones | Code | 0
BIRCO: A Benchmark of Information Retrieval Tasks with Complex Objectives | Code | 0
Counterfactually Probing Language Identity in Multilingual Models | Code | 0
Counterfactual Language Model Adaptation for Suggesting Phrases | Code | 0
Generating Data with Text-to-Speech and Large-Language Models for Conversational Speech Recognition | Code | 0
Co-STAR: Collaborative Curriculum Self-Training with Adaptive Regularization for Source-Free Video Domain Adaptation | Code | 0
Induced Model Matching: Restricted Models Help Train Full-Featured Models | Code | 0
Evaluating Language Model Character Traits | Code | 0
CoSQA+: Pioneering the Multi-Choice Code Search Benchmark with Test-Driven Agents | Code | 0
Correcting misinformation on social media with a large language model | Code | 0
CorefPrompt: Prompt-based Event Coreference Resolution by Measuring Event Type and Argument Compatibilities | Code | 0
CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation | Code | 0
COrAL: Order-Agnostic Language Modeling for Efficient Iterative Refinement | Code | 0
Evaluating Large Language Model Biases in Persona-Steered Generation | Code | 0
DagoBERT: Generating Derivational Morphology with a Pretrained Language Model | Code | 0
Generating Diverse and High-Quality Texts by Minimum Bayes Risk Decoding | Code | 0
MCRanker: Generating Diverse Criteria On-the-Fly to Improve Point-wise LLM Rankers | Code | 0
CoPrUS: Consistency Preserving Utterance Synthesis towards more realistic benchmark dialogues | Code | 0
Evaluating Large Language Models with Human Feedback: Establishing a Swedish Benchmark | Code | 0
Evaluating Large Language Model with Knowledge Oriented Language Specific Simple Question Answering | Code | 0
Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models | Code | 0
Evaluating Methods for Extraction of Aspect Terms in Opinion Texts in Portuguese - the Challenges of Implicit Aspects | Code | 0
Convolutional Neural Network Language Models | Code | 0
Generating EDU Extracts for Plan-Guided Summary Re-Ranking | Code | 0
A Domain Knowledge Enhanced Pre-Trained Language Model for Vertical Search: Case Study on Medicinal Products | Code | 0
Generating event descriptions under syntactic and semantic constraints | Code | 0
How Far Are LLMs from Believable AI? A Benchmark for Evaluating the Believability of Human Behavior Simulation | Code | 0
Inducing brain-relevant bias in natural language processing models | Code | 0
An Investigation of Noise in Morphological Inflection | Code | 0
Convolutional Neural Network for Paraphrase Identification | Code | 0
Investigating the Performance of Language Models for Completing Code in Functional Programming Languages: a Haskell Case Study | Code | 0
Generating Hypothetical Events for Abductive Inference | Code | 0
Page 342 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
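For reference, the perplexity figures in these tables are the exponentiated average negative log-likelihood that a model assigns to held-out text, so lower is better. A sketch of the standard definition (the notation is assumed, not stated on this page):

```latex
% Perplexity of a model p over a held-out token sequence w_1 ... w_N
\mathrm{PPL}(w_{1:N}) = \exp\!\left( -\frac{1}{N} \sum_{i=1}^{N} \log p\!\left(w_i \mid w_{<i}\right) \right)
```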
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | | Unverified
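Bits per character is the same cross-entropy quantity measured in base 2 at the character level, so the two metrics differ only by a change of base; again, lower is better. A sketch of the standard definitions (notation assumed, not taken from this page):

```latex
% Bits per character over a held-out character sequence c_1 ... c_N
\mathrm{BPC}(c_{1:N}) = -\frac{1}{N} \sum_{i=1}^{N} \log_2 p\!\left(c_i \mid c_{<i}\right),
\qquad \mathrm{PPL}_{\text{char}} = 2^{\mathrm{BPC}}
```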
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified
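As a rough guide to how a "Claimed" figure for an off-the-shelf model in this last table could be checked, here is a minimal sketch of test-perplexity evaluation, assuming the Hugging Face transformers library; the model ID, the fixed non-overlapping chunking, and the context length are illustrative assumptions, not this site's verification procedure.

```python
# Minimal perplexity-evaluation sketch (assumed setup, not the site's
# verification pipeline). Requires: pip install torch transformers
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-neo-125M"  # assumed Hub ID for GPT-Neo 125M
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

def test_perplexity(text: str, max_len: int = 1024) -> float:
    """Token-averaged perplexity over fixed, non-overlapping chunks."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    nll_sum, token_count = 0.0, 0
    with torch.no_grad():
        for start in range(0, len(ids) - 1, max_len):
            chunk = ids[start : start + max_len + 1].unsqueeze(0)
            if chunk.size(1) < 2:
                break
            # Passing labels = input_ids makes the model return the mean
            # next-token cross-entropy (natural log) over this chunk.
            loss = model(chunk, labels=chunk).loss
            n = chunk.size(1) - 1  # number of predicted tokens
            nll_sum += loss.item() * n
            token_count += n
    return math.exp(nll_sum / token_count)

print(test_perplexity("some held-out evaluation text ..."))
```

Note that reported perplexities are sensitive to tokenization, context length, and whether a sliding window is used, which is one reason claimed and independently verified numbers can differ.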