SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
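
As a concrete illustration of the purely statistical baseline mentioned above, here is a minimal sketch of a word bigram language model with add-one smoothing. It is not drawn from any of the listed papers; the toy corpus and function names are invented for illustration.

```python
from collections import defaultdict

# Toy corpus; a real n-gram model would be estimated from a large text collection.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigram occurrences and how often each word appears as a context.
bigram_counts = defaultdict(int)
context_counts = defaultdict(int)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[(prev, word)] += 1
    context_counts[prev] += 1

def bigram_prob(prev, word):
    """P(word | prev) with add-one (Laplace) smoothing over the vocabulary."""
    vocab_size = len(set(corpus))
    return (bigram_counts[(prev, word)] + 1) / (context_counts[prev] + vocab_size)

print(bigram_prob("the", "cat"))  # seen bigram: higher probability
print(bigram_prob("the", "zebra"))  # unseen bigram: small but nonzero
```

Higher-order n-grams work the same way with longer contexts; the neural models that superseded them replace these count-based estimates with learned distributions.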

Papers

Showing 15801–15850 of 17610 papers

Title | Status | Hype
Think Like a Person Before Responding: A Multi-Faceted Evaluation of Persona-Guided LLMs for Countering Hate | Code | 0
Leveraging Web-Crawled Data for High-Quality Fine-Tuning | Code | 0
Leveraging Unit Language Guidance to Advance Speech Modeling in Textless Speech-to-Speech Translation | Code | 0
MarSan at SemEval-2022 Task 11: Multilingual complex named entity recognition using T5 and transformer encoder | Code | 0
Task-Informed Anti-Curriculum by Masking Improves Downstream Performance on Text | Code | 0
debiaSAE: Benchmarking and Mitigating Vision-Language Model Bias | Code | 0
Task Loss Estimation for Sequence Prediction | Code | 0
Simple Unsupervised Summarization by Contextual Matching | Code | 0
Online Detecting LLM-Generated Texts via Sequential Hypothesis Testing by Betting | Code | 0
Online Back-Parsing for AMR-to-Text Generation | Code | 0
Task Refinement Learning for Improved Accuracy and Stability of Unsupervised Domain Adaptation | Code | 0
TaskSet: A Dataset of Optimization Tasks | Code | 0
Dialogue-adaptive Language Model Pre-training From Quality Estimation | Code | 0
Transformer-Based Approaches for Automatic Music Transcription | Code | 0
Simplifying Scholarly Abstracts for Accessible Digital Libraries | Code | 0
Mapping and Cleaning Open Commonsense Knowledge Bases with Generative Translation | Code | 0
On Extractive and Abstractive Neural Document Summarization with Transformer Language Models | Code | 0
On Effects of Steering Latent Representation for Large Language Model Unlearning | Code | 0
Leveraging Training Data in Few-Shot Prompting for Numerical Reasoning | Code | 0
One2set + Large Language Model: Best Partners for Keyphrase Generation | Code | 0
MAPLE: Mobile App Prediction Leveraging Large Language Model Embeddings | Code | 0
Third-Party Aligner for Neural Word Alignments | Code | 0
Third-Party Language Model Performance Prediction from Instruction | Code | 0
This Land is {Your, My} Land: Evaluating Geopolitical Biases in Language Models | Code | 0
Leaking LoRa: An Evaluation of Password Leaks and Knowledge Storage in Large Language Models | Code | 0
Leveraging Social Determinants of Health in Alzheimer's Research Using LLM-Augmented Literature Mining and Knowledge Graphs | Code | 0
On-Device Neural Language Model Based Word Prediction | Code | 0
Towards Personalized Evaluation of Large Language Models with An Anonymous Crowd-Sourcing Platform | Code | 0
On-Device LLM for Context-Aware Wi-Fi Roaming | Code | 0
"I've Heard of You!": Generate Spoken Named Entity Recognition Data for Unseen Entities | Code | 0
Manifold-Preserving Transformers are Effective for Short-Long Range Encoding | Code | 0
Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs | Code | 0
Single Headed Attention RNN: Stop Thinking With Your Head | Code | 0
MANGO: A Benchmark for Evaluating Mapping and Navigation Abilities of Large Language Models | Code | 0
LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking | Code | 0
On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists | Code | 0
Understanding Hidden Computations in Chain-of-Thought Reasoning | Code | 0
Exploring the Value of Pre-trained Language Models for Clinical Named Entity Recognition | Code | 0
SJ_AJ@DravidianLangTech-EACL2021: Task-Adaptive Pre-Training of Multilingual BERT models for Offensive Language Identification | Code | 0
Language Model is a Branch Predictor for Simultaneous Machine Translation | Code | 0
LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding | Code | 0
Teaching a Multilingual Large Language Model to Understand Multilingual Speech via Multi-Instructional Training | Code | 0
Teaching Autoregressive Language Models Complex Tasks By Demonstration | Code | 0
Knowledge-to-Jailbreak: Investigating Knowledge-driven Jailbreaking Attacks for Large Language Models | Code | 0
Sketch-Guided Constrained Decoding for Boosting Blackbox Large Language Models without Logit Access | Code | 0
On Architectures for Including Visual Information in Neural Language Models for Image Description | Code | 0
Teaching Large Language Models to Self-Debug | Code | 0
On Anytime Learning at Macroscale | Code | 0
Skim-Attention: Learning to Focus via Document Layout | Code | 0
Contextual Knowledge Pursuit for Faithful Visual Synthesis | Code | 0
Page 317 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
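
The two metrics in these leaderboards measure the same underlying quantity at different granularities: perplexity is the exponentiated average negative log-likelihood per token, while bits per character (BPC) is the average negative log2-likelihood per character. A minimal sketch of both computations, with invented probability values purely for illustration:

```python
import math

# Hypothetical per-token probabilities a model assigns to a held-out sequence.
token_probs = [0.2, 0.1, 0.25, 0.05]

# Perplexity: exp of the average negative natural-log likelihood per token.
avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)

# BPC: average negative log2 likelihood per character, used on
# character-level benchmarks; hypothetical per-character probabilities.
char_probs = [0.5, 0.9, 0.4, 0.7]
bpc = -sum(math.log2(p) for p in char_probs) / len(char_probs)

print(f"perplexity = {perplexity:.2f}, bpc = {bpc:.2f}")
```

Lower is better for both: a model that assigned probability 1 to every symbol would score a perplexity of 1 and a BPC of 0.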