SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model.

Source: Wikipedia
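
The word n-gram models mentioned above are the simplest concrete case: they estimate the probability of each word from counts of the preceding n-1 words. Below is a minimal bigram (n = 2) sketch in Python with add-one smoothing; the toy corpus and function names are illustrative only, not from any particular paper.

```python
from collections import Counter

# Toy corpus; a real n-gram model is trained on a large text collection.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigrams = Counter(zip(corpus, corpus[1:]))  # counts of adjacent word pairs
unigrams = Counter(corpus)
vocab = set(corpus)

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

def sequence_prob(words: list[str]) -> float:
    """Probability of a word sequence as a product of bigram probabilities."""
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= bigram_prob(prev, word)
    return p

print(sequence_prob("the cat sat on the rug".split()))
```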

Papers

Showing 9201–9250 of 17610 papers

Title | Status | Hype
Extracting Weighted Language Lexicons from Wikipedia | | 0
Extraction of Bilingual Technical Terms for Chinese-Japanese Patent Translation | | 0
Extraction of Sleep Information from Clinical Notes of Patients with Alzheimer's Disease Using Natural Language Processing | | 0
Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News | | 0
Extractive Summarisation Based on Keyword Profile and Language Model | | 0
Extractive Summary as Discrete Latent Variables | | 0
Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback | | 0
Extrapolating Multilingual Understanding Models as Multilingual Generators | | 0
Extremely Small BERT Models from Mixed-Vocabulary Training | | 0
Eye-SpatialNet: Spatial Information Extraction from Ophthalmology Notes | | 0
EzSQL: An SQL intermediate representation for improving SQL-to-text Generation | | 0
FAA Framework: A Large Language Model-Based Approach for Credit Card Fraud Investigations | | 0
FABULA: Intelligence Report Generation Using Retrieval-Augmented Narrative Construction | | 0
FAC^2E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition | | 0
FaceInsight: A Multimodal Large Language Model for Face Perception | | 0
Synergizing Large Language Models and Task-specific Models for Time Series Anomaly Detection | | 0
Facilitating Holistic Evaluations with LLMs: Insights from Scenario-Based Experiments | | 0
Facilitating Self-Guided Mental Health Interventions Through Human-Language Model Interaction: A Case Study of Cognitive Restructuring | | 0
Facilitating Video Story Interaction with Multi-Agent Collaborative System | | 0
FactBench: A Dynamic Benchmark for In-the-Wild Language Model Factuality Evaluation | | 0
FactCheXcker: Mitigating Measurement Hallucinations in Chest X-ray Report Generation Models | | 0
FACTIFY-5WQA: 5W Aspect-based Fact Verification through Question Answering | | 0
Using Large Language Model for End-to-End Chinese ASR and NER | | 0
Using Large Language Models for (De-)Formalization and Natural Argumentation Exercises for Beginner's Students | | 0
Unnatural language processing: How do language models handle machine-generated prompts? | | 0
Using Large Language Models to Automate and Expedite Reinforcement Learning with Reward Machine | | 0
Using Large Language Models to Generate Authentic Multi-agent Knowledge Work Datasets | | 0
Using large language models to produce literature reviews: Usages and systematic biases of microphysics parametrizations in 2699 publications | | 0
Using Large Language Models to Provide Explanatory Feedback to Human Tutors | | 0
Unpacking Large Language Models with Conceptual Consistency | | 0
Using Large Language Models to Support Thematic Analysis in Empirical Legal Studies | | 0
Using Large Language Model to Solve and Explain Physics Word Problems Approaching Human Level | | 0
Using Large Pre-Trained Language Model to Assist FDA in Premarket Medical Device | | 0
Using LLMs to discover emerging coded antisemitic hate-speech in extremist social media | | 0
Using LLMs to Infer Non-Binary COVID-19 Sentiments of Chinese Micro-bloggers | | 0
Using LLMs to Model the Beliefs and Preferences of Targeted Populations | | 0
Using Morphological Knowledge in Open-Vocabulary Neural Language Models | | 0
Using neural topic models to track context shifts of words: a case study of COVID-related terms before and after the lockdown in April 2020 | | 0
Using PPM for Health Related Text Detection | | 0
Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training | | 0
Using Pretrained Large Language Model with Prompt Engineering to Answer Biomedical Questions | | 0
Using Prompts to Guide Large Language Models in Imitating a Real Person's Language Style | | 0
Using Related Languages to Enhance Statistical Language Models | | 0
Using Selective Masking as a Bridge between Pre-training and Fine-tuning | | 0
Using SMT for OCR Error Correction of Historical Texts | | 0
Using Social Media For Bitcoin Day Trading Behavior Prediction | | 0
Using Structured Content Plans for Fine-grained Syntactic Control in Pretrained Language Model Generation | | 0
Using sub-word n-gram models for dealing with OOV in large vocabulary speech recognition for Latvian | | 0
Using Syntax-Based Machine Translation to Parse English into Abstract Meaning Representation | | 0

Page 185 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
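
Perplexity, the metric used throughout these tables, is the exponential of the average negative log-likelihood the model assigns to each held-out token; lower is better. A minimal sketch of the computation, using placeholder per-token probabilities rather than any model's actual outputs:

```python
import math

# Probabilities a model assigned to each token of a held-out text (placeholders).
token_probs = [0.2, 0.05, 0.1, 0.4]

# Perplexity = exp of the mean negative log-likelihood per token.
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(nll)
print(f"perplexity = {perplexity:.2f}")  # ~7.07 for these values
```
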
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified
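
Bit per Character (BPC), the metric in the table above, is the character-level analogue of perplexity: the average cross-entropy per character measured in bits, so 2^BPC is the model's per-character perplexity. A short conversion sketch, with a placeholder loss value:

```python
import math

loss_nats = 0.86               # mean cross-entropy per character in nats (placeholder)
bpc = loss_nats / math.log(2)  # nats -> bits per character
char_ppl = 2 ** bpc            # equivalently math.exp(loss_nats)
print(f"BPC = {bpc:.2f}, per-character perplexity = {char_ppl:.2f}")
```
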
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified