SOTAVerified

Language Modelling

A language model is a probabilistic model of a natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
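
To make the word n-gram idea above concrete, here is a minimal sketch of a bigram model estimated by maximum likelihood. The toy corpus and the bigram_prob helper are illustrative assumptions, not taken from any paper listed below; real n-gram systems also add smoothing to handle unseen word pairs.

```python
from collections import Counter, defaultdict

# Toy corpus; real n-gram models are estimated from far larger text collections.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each context word.
bigram_counts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    bigram_counts[prev][cur] += 1

def bigram_prob(prev: str, cur: str) -> float:
    """Maximum-likelihood estimate of P(cur | prev); 0.0 for unseen contexts."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][cur] / total if total else 0.0

print(bigram_prob("the", "cat"))  # 0.25: "the" precedes cat/mat/dog/rug once each
```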

Papers

Showing 4501–4550 of 17,610 papers

Title | Status | Hype
An Empirical Study Of Self-supervised Learning Approaches For Object Detection With Transformers | Code | 0
Few-shot learning through contextual data augmentation | Code | 0
Few-Shot NLG with Pre-Trained Language Model | Code | 0
BTRec: BERT-Based Trajectory Recommendation for Personalized Tours | Code | 0
An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Language Model Inference | Code | 0
Few-Shot Upsampling for Protest Size Detection | Code | 0
A Comparison of Adaptation Techniques and Recurrent Neural Network Architectures | Code | 0
An Empirical Study on Pre-trained Embeddings and Language Models for Bot Detection | Code | 0
A Comparison of Centrality Measures for Graph-Based Keyphrase Extraction | Code | 0
FGeo-DRL: Deductive Reasoning for Geometric Problems through Deep Reinforcement Learning | Code | 0
FIDAVL: Fake Image Detection and Attribution using Vision-Language Model | Code | 0
Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation | Code | 0
Building a Swedish Open-Domain Conversational Language Model | Code | 0
Building a Taiwanese Mandarin Spoken Language Model: A First Attempt | Code | 0
Figuratively Speaking: Authorship Attribution via Multi-Task Figurative Language Modeling | Code | 0
CURIE: An Iterative Querying Approach for Reasoning About Situations | Code | 0
FIBER: Fill-in-the-Blanks as a Challenging Video Understanding Evaluation Framework | Code | 0
A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering | Code | 0
An End-to-End Model for Photo-Sharing Multi-modal Dialogue Generation | Code | 0
Curriculum learning for language modeling | Code | 0
cushLEPOR: customising hLEPOR metric using Optuna for higher agreement with human judgments or pre-trained language model LaBSE | Code | 0
FinBERT: Financial Sentiment Analysis with Pre-trained Language Models | Code | 0
FiNCAT: Financial Numeral Claim Analysis Tool | Code | 0
An End-to-End Neural Network for Polyphonic Piano Music Transcription | Code | 0
Finding a Needle in the Adversarial Haystack: A Targeted Paraphrasing Approach For Uncovering Edge Cases with Minimal Distribution Distortion | Code | 0
Customising General Large Language Models for Specialised Emotion Recognition Tasks | Code | 0
Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation | Code | 0
Finding Hierarchical Structure in Neural Stacks Using Unsupervised Parsing | Code | 0
A Tailored Pre-Training Model for Task-Oriented Dialog Generation | Code | 0
Building Language Models for Text with Named Entities | Code | 0
A Targeted Assessment of Incremental Processing in Neural Language Models and Humans | Code | 0
Finding Syntactic Representations in Neural Stacks | Code | 0
AdCare-VLM: Leveraging Large Vision Language Model (LVLM) to Monitor Long-Term Medication Adherence and Care | Code | 0
B-VLLM: A Vision Large Language Model with Balanced Spatio-Temporal Tokens | Code | 0
BvSP: Broad-view Soft Prompting for Few-Shot Aspect Sentiment Quad Prediction | Code | 0
FineDeb: A Debiasing Framework for Language Models | Code | 0
Fine-Grained Behavior Simulation with Role-Playing Large Language Model on Social Media | Code | 0
Fine-grained Contrastive Learning for Relation Extraction | Code | 0
Fine-Grained Emotion Prediction by Modeling Emotion Definitions | Code | 0
An Ensemble Approach to Acronym Extraction using Transformers | Code | 0
CXP949 at WNUT-2020 Task 2: Extracting Informative COVID-19 Tweets -- RoBERTa Ensembles and The Continued Relevance of Handcrafted Features | Code | 0
Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing | Code | 0
Cynical Selection of Language Model Training Data | Code | 0
A Comparison of Large Language Model and Human Performance on Random Number Generation Tasks | Code | 0
A Comparison of Methods for Evaluating Generative IR | Code | 0
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks | Code | 0
A Neural Language Model for Dynamically Representing the Meanings of Unknown Words and Entities in a Discourse | Code | 0
CABACE: Injecting Character Sequence Information and Domain Knowledge for Enhanced Acronym and Long-Form Extraction | Code | 0
Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction | Code | 0
Comparing Specialised Small and General Large Language Models on Text Classification: 100 Labelled Samples to Achieve Break-Even Performance | Code | 0
Page 91 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified
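
For reference, both metrics in the tables above are transforms of a model's average negative log-likelihood on the test set: test perplexity exponentiates the per-token loss (in nats), while bits per character rescales the per-character loss to base 2. Below is a minimal sketch of that relationship, assuming a list of per-unit losses already produced by a model's forward pass; the example numbers are illustrative, not from any row above.

```python
import math

def perplexity(nats_per_token: list[float]) -> float:
    """exp(mean negative log-likelihood per token), with losses in nats."""
    return math.exp(sum(nats_per_token) / len(nats_per_token))

def bits_per_character(nats_per_char: list[float]) -> float:
    """Mean negative log-likelihood per character, converted from nats to bits."""
    return sum(nats_per_char) / len(nats_per_char) / math.log(2)

# Illustrative per-unit losses; real values come from evaluating a test set.
print(f"{perplexity([3.2, 3.9, 3.5, 3.6]):.2f}")      # ~34.81
print(f"{bits_per_character([0.9, 0.85, 0.8]):.2f}")  # ~1.23
```

Lower is better for both: halving perplexity means the model is, on average, as uncertain as a uniform choice over half as many tokens, and a one-bit drop in BPC halves the effective number of equally likely next characters.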