SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language, assigning probabilities to sequences of words. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
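
For a concrete sense of the "purely statistical" end of that progression, here is a minimal word-bigram model sketch in Python. The toy corpus, add-alpha smoothing constant, and vocabulary size are invented for illustration and do not come from any paper listed on this page.

    from collections import Counter, defaultdict

    def train_bigram(corpus):
        """Count word bigrams from a list of tokenized sentences."""
        bigrams = defaultdict(Counter)
        for sentence in corpus:
            tokens = ["<s>"] + sentence + ["</s>"]
            for prev, word in zip(tokens, tokens[1:]):
                bigrams[prev][word] += 1
        return bigrams

    def prob(bigrams, prev, word, alpha=1.0, vocab_size=10_000):
        """P(word | prev) with add-alpha smoothing so unseen pairs get mass."""
        counts = bigrams[prev]
        return (counts[word] + alpha) / (sum(counts.values()) + alpha * vocab_size)

    # Toy corpus, invented for the example.
    corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
    model = train_bigram(corpus)
    print(prob(model, "the", "cat"))   # seen bigram: scores higher
    print(prob(model, "cat", "dog"))   # unseen bigram: smoothed, smaller

Neural and transformer language models replace these count tables with learned parameters, but they are trained toward the same next-word prediction objective.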

Papers

Showing 12701–12750 of 17610 papers

Title | Status | Hype
LLMs Plagiarize: Ensuring Responsible Sourcing of Large Language Model Training Data Through Knowledge Graph Comparison | - | 0
Enterprise Large Language Model Evaluation Benchmark | - | 0
Entity and Evidence Guided Document-Level Relation Extraction | - | 0
Entity and Evidence Guided Relation Extraction for DocRED | - | 0
Entity-aware ELMo: Learning Contextual Entity Representation for Entity Disambiguation | - | 0
Entity-Aware Language Model as an Unsupervised Reranker | - | 0
Entity Context Graph: Learning Entity Representations from Semi-Structured Textual Sources on the Web | - | 0
Entity Decisions in Neural Language Modelling: Approaches and Problems | - | 0
Entity Relative Position Representation based Multi-head Selection for Joint Entity and Relation Extraction | - | 0
Entity Retrieval via Entity Factoid Hierarchy | - | 0
Entity Type Prediction Leveraging Graph Walks and Entity Descriptions | - | 0
EntroLLM: Entropy Encoded Weight Compression for Efficient Large Language Model Inference on Edge Devices | - | 0
Entropy Adaptive Decoding: Dynamic Model Switching for Efficient Inference | - | 0
Entropy-based Exploration Conduction for Multi-step Reasoning | - | 0
Entropy-based Pruning for Phrase-based Machine Translation | - | 0
Entropy-Based Subword Mining with an Application to Word Embeddings | - | 0
Entropy-guided sequence weighting for efficient exploration in RL-based LLM fine-tuning | - | 0
EntropyRank: Unsupervised Keyphrase Extraction via Side-Information Optimization for Language Model-based Text Compression | - | 0
Entropy Rate Estimation for Markov Chains with Large State Space | - | 0
Envisioning MedCLIP: A Deep Dive into Explainability for Medical Vision-Language Models | - | 0
EO-VLM: VLM-Guided Energy Overload Attacks on Vision Models | - | 0
EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge: Mixed Sequences Prediction | - | 0
Epigenomic language models powered by Cerebras | - | 0
EpilepsyLLM: Domain-Specific Large Language Model Fine-tuned with Epilepsy Medical Knowledge | - | 0
Episodic Memory Verbalization using Hierarchical Representations of Life-Long Robot Experience | - | 0
Equipping Educational Applications with Domain Knowledge | - | 0
Equipping Language Models with Tool Use Capability for Tabular Data Analysis in Finance | - | 0
ER3: A Unified Framework for Event Retrieval, Recognition and Recounting | - | 0
ERABAL: Enhancing Role-Playing Agents through Boundary-Aware Learning | - | 0
EriBERTa: A Bilingual Pre-Trained Language Model for Clinical Natural Language Processing | - | 0
ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model | - | 0
ERNIE-NLI: Analyzing the Impact of Domain-Specific External Knowledge on Enhanced Representations for NLI | - | 0
ERNIE-UniX2: A Unified Cross-lingual Cross-modal Framework for Understanding and Generation | - | 0
Error-Correcting Codes For Approximate Neural Sequence Prediction | - | 0
Error-Correcting Neural Sequence Prediction | - | 0
Error Correction Environment for the Polish Parliamentary Corpus | - | 0
Error Detection in Automatic Speech Recognition | - | 0
Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models | - | 0
Er ... well, it matters, right? On the role of data representations in spoken language dependency parsing | - | 0
ESALE: Enhancing Code-Summary Alignment Learning for Source Code Summarization | - | 0
Escaping Collapse: The Strength of Weak Data for Large Language Model Training | - | 0
ESC: Exploration with Soft Commonsense Constraints for Zero-shot Object Navigation | - | 0
esCorpius: A Massive Spanish Crawling Corpus | - | 0
ESGBERT: Language Model to Help with Classification Tasks Related to Companies Environmental, Social, and Governance Practices | - | 0
ESG Sentiment Analysis: comparing human and language model performance including GPT | - | 0
ESLM: Risk-Averse Selective Language Modeling for Efficient Pretraining | - | 0
E-Sparse: Boosting the Large Language Model Inference through Entropy-based N:M Sparsity | - | 0
ESPnet-SpeechLM: An Open Speech Language Model Toolkit | - | 0
Espresso: High Compression For Rich Extraction From Videos for Your Vision-Language Model | - | 0
Establishing Task Scaling Laws via Compute-Efficient Model Ladders | - | 0
Page 255 of 353

Benchmark Results
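
The tables below report perplexity or bits per character (BPC), both of which are transformations of a model's average negative log-likelihood on held-out text. A minimal sketch of the computation, using made-up per-token log-probabilities:

    import math

    # Hypothetical per-token log-probabilities (natural log) from some model.
    log_probs = [-2.1, -0.3, -4.0, -1.2]

    avg_nll = -sum(log_probs) / len(log_probs)  # average negative log-likelihood, nats per token
    perplexity = math.exp(avg_nll)              # the quantity reported in the tables below
    print(f"perplexity = {perplexity:.2f}")

Lower is better: a test perplexity of 37.5 means the model is, on average, as uncertain as if it were choosing uniformly among 37.5 tokens at each step.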

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | - | Unverified
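
The character-level table above reports bits per character, the base-2 form of the same average negative log-likelihood. A quick conversion sketch; the 1.22 figure is the best claimed value above:

    import math

    bpc = 1.22                              # best claimed BPC in the table above
    per_char_perplexity = 2 ** bpc          # BPC is the log2 of per-character perplexity
    nats_per_char = bpc * math.log(2)       # the same cross-entropy expressed in nats
    print(f"{per_char_perplexity:.2f} effective choices per character")  # about 2.33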

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified