SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (producing human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as word n-gram language models.
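The progression above, from purely statistical n-gram models to neural ones, starts from a simple idea: estimate the probability of the next word from observed frequencies. A minimal word-bigram sketch (the function name and toy corpus are illustrative, not from any cited paper):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Estimate P(next | prev) from raw bigram counts (no smoothing)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    # Normalize each row of counts into a conditional distribution.
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

corpus = ["the cat sat", "the dog sat", "the cat ran"]
model = train_bigram(corpus)
print(model["the"]["cat"])  # 2 of the 3 continuations of "the" are "cat"
```

Real n-gram systems add smoothing (e.g. Kneser–Ney) so that unseen bigrams do not get zero probability; the neural models in the tables below replace the count table with a learned parametric distribution.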

Source: Wikipedia

Papers

Showing 4251–4300 of 17610 papers

Title | Status | Hype
CLIPose: Category-Level Object Pose Estimation with Pre-trained Vision-Language Knowledge |  | 0
CLIPPING: Distilling CLIP-Based Models With a Student Base for Video-Language Retrieval |  | 0
CLIP-S^4: Language-Guided Self-Supervised Semantic Segmentation |  | 0
CLIPTER: Looking at the Bigger Picture in Scene Text Recognition |  | 0
CLIPtortionist: Zero-shot Text-driven Deformation for Manufactured 3D Shapes |  | 0
CLIPXPlore: Coupled CLIP and Shape Spaces for 3D Shape Exploration |  | 0
CLLMFS: A Contrastive Learning enhanced Large Language Model Framework for Few-Shot Named Entity Recognition |  | 0
CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering |  | 0
Cloning Ideology and Style using Deep Learning |  | 0
Closing Brackets with Recurrent Neural Networks |  | 0
ClothPPO: A Proximal Policy Optimization Enhancing Framework for Robotic Cloth Manipulation with Observation-Aligned Action Spaces |  | 0
Cloud-Based Real-Time Molecular Screening Platform with MolFormer |  | 0
CLOWER: A Pre-trained Language Model with Contrastive Learning over Word and Character Representations |  | 0
CLP at SemEval-2019 Task 3: Multi-Encoder in Hierarchical Attention Networks for Contextual Emotion Detection |  | 0
CLPLM: Character Level Pretrained Language Model for Extracting Support Phrases for Sentiment Labels |  | 0
CL-ReKD: Cross-lingual Knowledge Distillation for Multilingual Retrieval Question Answering |  | 0
SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation |  | 0
CLSP: High-Fidelity Contrastive Language-State Pre-training for Agent State Representation |  | 0
CLST: Cold-Start Mitigation in Knowledge Tracing by Aligning a Generative Language Model as a Students' Knowledge Tracer |  | 0
[CLS] Token is All You Need for Zero-Shot Semantic Segmentation |  | 0
CLUE: Neural Networks Calibration via Learning Uncertainty-Error alignment |  | 0
CLUF: a Neural Model for Second Language Acquisition Modeling |  | 0
Cluster-Former: Clustering-based Sparse Transformer for Long-Range Dependency Encoding |  | 0
Clustering Algorithms and RAG Enhancing Semi-Supervised Text Classification with Large LLMs |  | 0
Clustering and Median Aggregation Improve Differentially Private Inference |  | 0
Cluster Language Model for Improved E-Commerce Retrieval and Ranking: Leveraging Query Similarity and Fine-Tuning for Personalized Results |  | 0
ClusTop: An unsupervised and integrated text clustering and topic extraction framework |  | 0
ClusTR: Exploring Efficient Self-attention via Clustering for Vision Transformers |  | 0
CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? |  | 0
CMed-GPT: Prompt Tuning for Entity-Aware Chinese Medical Dialogue Generation |  | 0
CMLFormer: A Dual Decoder Transformer with Switching Point Learning for Code-Mixed Language Modeling |  | 0
CMLM-CSE: Based on Conditional MLM Contrastive Learning for Sentence Embeddings |  | 0
CMUQ@QALB-2014: An SMT-based System for Automatic Arabic Error Correction |  | 0
CMV-BERT: Contrastive multi-vocab pretraining of BERT |  | 0
CNO-LSTM: A Chaotic Neural Oscillatory Long Short-Term Memory Model for Text Classification |  | 0
CoAM: Corpus of All-Type Multiword Expressions |  | 0
Coarse-to-Fine Highlighting: Reducing Knowledge Hallucination in Large Language Models |  | 0
TANTE: Time-Adaptive Operator Learning via Neural Taylor Expansion |  | 0
CoAVT: A Cognition-Inspired Unified Audio-Visual-Text Pre-Training Model for Multimodal Processing |  | 0
CobaltF: A Fluent Metric for MT Evaluation |  | 0
The Curious Case of Class Accuracy Imbalance in LLMs: Post-hoc Debiasing via Nonlinear Integer Programming |  | 0
CoCo-Bench: A Comprehensive Code Benchmark For Multi-task Large Language Model Evaluation |  | 0
CoCo-BERT: Improving Video-Language Pre-training with Contrastive Cross-modal Matching and Denoising |  | 0
Code-as-Monitor: Constraint-aware Visual Programming for Reactive and Proactive Robotic Failure Detection |  | 0
CodeBPE: Investigating Subtokenization Options for Large Language Model Pretraining on Source Code |  | 0
Codecfake: An Initial Dataset for Detecting LLM-based Deepfake Audio |  | 0
Code Evolution Graphs: Understanding Large Language Model Driven Design of Algorithms |  | 0
Page 86 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 |  | Unverified
2 | GRU | Validation perplexity | 53.78 |  | Unverified
3 | LSTM | Validation perplexity | 52.73 |  | Unverified
4 | LSTM | Test perplexity | 48.7 |  | Unverified
5 | Temporal CNN | Test perplexity | 45.2 |  | Unverified
6 | TCN | Test perplexity | 45.19 |  | Unverified
7 | GCNN-8 | Test perplexity | 44.9 |  | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 |  | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 |  | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 |  | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 |  | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 |  | Unverified
4 | R-Transformer | Test perplexity | 84.38 |  | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 |  | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 |  | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 |  | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 |  | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 |  | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 |  | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 |  | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 |  | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 |  | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 |  | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 |  | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 |  | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 |  | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 |  | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 |  | Unverified
2 | OPT 125M | Test perplexity | 32.26 |  | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 |  | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 |  | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 |  | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 |  | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 |  | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 |  | Unverified
9 | Transformer 125M | Test perplexity | 10.7 |  | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 |  | Unverified
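The tables above report test perplexity for word-level benchmarks and bits per character (BPC) for character-level ones. Both are monotone transforms of a model's average cross-entropy, so either can be computed from per-token log-likelihoods; a minimal sketch (the probabilities below are made up for illustration and do not correspond to any table entry):

```python
import math

def perplexity(token_log_probs):
    """exp of the mean negative natural-log likelihood per token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

def bpc(char_log_probs):
    """Mean negative log2 likelihood per character (bits per character)."""
    return -sum(lp / math.log(2) for lp in char_log_probs) / len(char_log_probs)

# Toy per-token natural-log probabilities (illustrative only).
logps = [math.log(p) for p in (0.2, 0.1, 0.25, 0.05)]
print(round(perplexity(logps), 2))
# For a character-level model, perplexity = 2 ** BPC:
print(math.isclose(perplexity(logps), 2 ** bpc(logps)))  # True
```

Lower is better for both metrics: a perplexity of 44.8 means the model is, on average, as uncertain as a uniform choice over about 45 words, and halving BPC halves the bits needed to encode each character.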