SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
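
As context for the model families above: a word n-gram model estimates the probability of each word from counts of short word sequences. Below is a minimal, illustrative bigram sketch; the toy corpus and all names are invented for illustration and are not taken from any paper or benchmark on this page.

```python
from collections import Counter, defaultdict

# Minimal word-bigram language model: count adjacent word pairs and
# estimate P(word | previous word) with add-one (Laplace) smoothing.
# The toy corpus and all names here are illustrative only.

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

vocab = set(corpus)
bigram_counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev), add-one smoothed over the corpus vocabulary."""
    counts = bigram_counts[prev]
    return (counts[word] + 1) / (sum(counts.values()) + len(vocab))

print(bigram_prob("the", "cat"))  # seen pair: relatively high probability
print(bigram_prob("the", "sat"))  # unseen pair: non-zero thanks to smoothing
```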

Papers

Showing 7551–7600 of 17610 papers

Title | Status | Hype
Studying the Effects of Cognitive Biases in Evaluation of Conversational Agents | – | 0
Studying the impacts of pre-training using ChatGPT-generated text on downstream tasks | – | 0
Studying the Role of Input-Neighbor Overlap in Retrieval-Augmented Language Models Training Efficiency | – | 0
Interacting with next-phrase suggestions: How suggestion systems aid and influence the cognitive processes of writing | – | 0
Stuffed Mamba: State Collapse and State Capacity of RNN-Based Long-Context Modeling | – | 0
Style Attuned Pre-training and Parameter Efficient Fine-tuning for Spoken Language Understanding | – | 0
StyleBERT: Chinese pretraining by font style information | – | 0
StyleCap: Automatic Speaking-Style Captioning from Speech Based on Speech and Language Self-supervised Learning Models | – | 0
Style-Compress: An LLM-Based Prompt Compression Framework Considering Task-Specific Styles | – | 0
StyleDistance: Stronger Content-Independent Style Embeddings with Synthetic Parallel Examples | – | 0
STYLE: Improving Domain Transferability of Asking Clarification Questions in Large Language Model Powered Conversational Agents | – | 0
StyleInject: Parameter Efficient Tuning of Text-to-Image Diffusion Models | – | 0
Style-Talker: Finetuning Audio Language Model and Style-Based Text-to-Speech Model for Fast Spoken Dialogue Generation | – | 0
Style Variation as a Vantage Point for Code-Switching | – | 0
Stylistic Variation in Television Dialogue for Natural Language Generation | – | 0
Stylometry in a Bilingual Setup | – | 0
Sub-character Neural Language Modelling in Japanese | – | 0
Subformer: A Parameter Reduced Transformer | – | 0
SubICap: Towards Subword-informed Image Captioning | – | 0
Sub-lexical Dialogue Act Classification in a Spoken Dialogue System Support for the Elderly with Cognitive Disabilities | – | 0
Submix: Practical Private Prediction for Large-Scale Language Models | – | 0
Submodularity for Data Selection in Machine Translation | – | 0
Subsegmental language detection in Celtic language text | – | 0
Subspace Chronicles: How Linguistic Information Emerges, Shifts and Interacts during Language Model Training | – | 0
SubstationAI: Multimodal Large Model-Based Approaches for Analyzing Substation Equipment Faults | – | 0
Subword and Crossword Units for CTC Acoustic Models | – | 0
Subword Embedding from Bytes Gains Privacy without Sacrificing Accuracy and Complexity | – | 0
Sub-Word Similarity based Search for Embeddings: Inducing Rare-Word Embeddings for Word Similarity Tasks and Language Modelling | – | 0
Successor Features for Efficient Multisubject Controlled Text Generation | – | 0
Successor Heads: Recurring, Interpretable Attention Heads In The Wild | – | 0
Succinct Data Structures for NLP-at-Scale | – | 0
sudoLLM : On Multi-role Alignment of Language Models | – | 0
SudoLM: Learning Access Control of Parametric Knowledge with Authorization Alignment | – | 0
Suffix Trees as Language Models | – | 0
SunBear at WNUT-2020 Task 2: Improving BERT-Based Noisy Text Classification with Knowledge of the Data domain | – | 0
SuperCLUE: A Comprehensive Chinese Large Language Model Benchmark | – | 0
Superhuman performance in urology board questions by an explainable large language model enabled for context integration of the European Association of Urology guidelines: the UroBot study | – | 0
Superhuman performance of a large language model on the reasoning tasks of a physician | – | 0
Supermind Ideator: Exploring generative AI to support creative problem-solving | – | 0
SuperOCR for ALTA 2017 Shared Task | – | 0
Super-Prompting: Utilizing Model-Independent Contextual Data to Reduce Data Annotation Required in Visual Commonsense Tasks | – | 0
Supersense Tagging with a Combination of Character, Subword, and Word-level Representations | – | 0
SuperTweetEval: A Challenging, Unified and Heterogeneous Benchmark for Social Media NLP Research | – | 0
Supervised and Unsupervised Minimalist Quality Estimators: Vicomtech's Participation in the WMT 2018 Quality Estimation Task | – | 0
Supervised classification of end-of-lines in clinical text with no manual annotation | – | 0
Supervised Contrastive Learning as Multi-Objective Optimization for Fine-Tuning Large Pre-trained Language Models | – | 0
Supervised Sentence Fusion with Single-Stage Inference | – | 0
Supporting Cross-language Cross-project Bug Localization Using Pre-trained Language Models | – | 0
Supporting Human-AI Collaboration in Auditing LLMs with LLMs | – | 0
Supporting Sensemaking of Large Language Model Outputs at Scale | – | 0
Page 152 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified
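
Perplexity, the metric in these tables, is the exponential of the model's average per-token negative log-likelihood on the held-out set; lower is better. A minimal sketch of the arithmetic, with invented NLL values:

```python
import math

# Perplexity = exp(average negative log-likelihood per token).
# The NLL values below are invented purely to show the arithmetic;
# in practice they come from scoring a held-out test set with the model.
token_nlls = [3.1, 4.0, 2.7, 3.6]  # natural-log NLL of each test token

perplexity = math.exp(sum(token_nlls) / len(token_nlls))
print(f"perplexity = {perplexity:.2f}")  # exp(3.35) ≈ 28.50
```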

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | – | Unverified
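
Bits per character, the metric in the table above, is the average cross-entropy per character measured in bits, i.e. the mean of -log2 p(character); lower is better, and 2^BPC gives the per-character perplexity. A minimal sketch with invented probabilities:

```python
import math

# Bits per character (BPC) = mean over test characters of -log2 p(char).
# The probabilities below are invented purely to show the conversion.
char_probs = [0.5, 0.25, 0.125, 0.5]  # model probability of each test char

bpc = sum(-math.log2(p) for p in char_probs) / len(char_probs)
print(f"BPC = {bpc:.2f}")                          # (1 + 2 + 3 + 1) / 4 = 1.75
print(f"per-character perplexity = {2**bpc:.2f}")  # 2^1.75 ≈ 3.36
```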

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified
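
A claimed test-perplexity number for an open checkpoint can in principle be re-checked by scoring held-out text with the model and exponentiating the mean loss. The sketch below uses the Hugging Face transformers API, assuming the table's GPT-Neo 125M corresponds to the public EleutherAI/gpt-neo-125M checkpoint; it scores a single short snippet rather than a benchmark's full test split, so it only illustrates the procedure.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed mapping: the table's "GPT-Neo 125M" -> the public
# EleutherAI/gpt-neo-125M checkpoint on the Hugging Face Hub.
model_name = "EleutherAI/gpt-neo-125M"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# A real verification would iterate over the benchmark's full test split;
# this scores one illustrative snippet only.
text = "Language models assign probabilities to sequences of words."
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss.
    out = model(**inputs, labels=inputs["input_ids"])

print(f"perplexity on this snippet: {torch.exp(out.loss).item():.2f}")
```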