SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
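
As a concrete contrast with today's LLMs, the word n-gram model mentioned above fits in a few lines. Below is a minimal, illustrative bigram (2-gram) sketch; the toy corpus and the add-one smoothing choice are assumptions for demonstration only:

```python
from collections import Counter
import math

# Toy corpus and vocabulary (illustrative assumption).
corpus = "the cat sat on the mat the dog sat on the log".split()
vocab = set(corpus)

# Count unigrams and adjacent word pairs (bigrams).
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev, word):
    # P(word | prev) with add-one (Laplace) smoothing over the vocabulary.
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

def sentence_logprob(words):
    # Sum of log P(w_i | w_{i-1}); the first word's prior is ignored here.
    return sum(math.log(bigram_prob(p, w)) for p, w in zip(words, words[1:]))

print(sentence_logprob("the cat sat on the mat".split()))
```

Such counting-based models capture only short, fixed-length contexts, which is the limitation that first recurrent networks and then transformers were designed to overcome.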

Papers

Showing 13501–13550 of 17610 papers

Title | Status | Hype
Generative Question Answering: Learning to Answer the Whole Question | | 0
Generative Recommendation with Continuous-Token Diffusion | | 0
Generative Regression Based Watch Time Prediction for Short-Video Recommendation | | 0
Generative Relevance Feedback with Large Language Models | | 0
Generative retrieval-augmented ontologic graph and multi-agent strategies for interpretive large language model-based materials design | | 0
Generative Sentiment Transfer via Adaptive Masking | | 0
Generative Spoken Dialogue Language Modeling | | 0
Generative Spoken Language Model based on continuous word-sized audio tokens | | 0
Generative Technology for Human Emotion Recognition: A Scope Review | | 0
Generative Text Steganography with Large Language Model | | 0
Generative Timelines for Instructed Visual Assembly | | 0
GeneSUM: Large Language Model-based Gene Summary Extraction | | 0
GeNet: A Multimodal LLM-Based Co-Pilot for Network Topology and Configuration | | 0
GenFollower: Enhancing Car-Following Prediction with Large Language Models | | 0
Text Generation with Diffusion Language Models: A Pre-training Approach with Continuous Paragraph Denoise | | 0
GenSE: Generative Speech Enhancement via Language Models using Hierarchical Modeling | | 0
GenSpectrum Chat: Data Exploration in Public Health Using Large Language Models | | 0
GenTAL: Generative Denoising Skip-gram Transformer for Unsupervised Binary Code Similarity Detection | | 0
GenTorrent: Scaling Large Language Model Serving with An Overley Network | | 0
GenTUS: Simulating User Behaviour and Language in Task-oriented Dialogues with Generative Transformers | | 0
GenX: Mastering Code and Test Generation with Execution Feedback | | 0
Gen-Z: Generative Zero-Shot Text Classification with Contextualized Label Descriptions | | 0
GeoCode-GPT: A Large Language Model for Geospatial Code Generation Tasks | | 0
GeoDANO: Geometric VLM with Domain Agnostic Vision Encoder | | 0
GeoMag: A Vision-Language Model for Pixel-level Fine-Grained Remote Sensing Image Parsing | | 0
Geometry Informed Tokenization of Molecules for Language Model Generation | | 0
Geometry is All You Need: A Unified Taxonomy of Matrix and Tensor Factorization for Compression of Generative Language Models | | 0
GeoPix: Multi-Modal Large Language Model for Pixel-level Image Understanding in Remote Sensing | | 0
GeoReasoner: Reasoning On Geospatially Grounded Context For Natural Language Understanding | | 0
GeoRecon: Graph-Level Representation Learning for 3D Molecules via Reconstruction-Based Pretraining | | 0
GeoRSMLLM: A Multimodal Large Language Model for Vision-Language Tasks in Geoscience and Remote Sensing | | 0
German and English Treebanks and Lexica for Tree-Adjoining Grammars | | 0
German BERT Model for Legal Named Entity Recognition | | 0
German FinBERT: A German Pre-trained Language Model | | 0
GersteinLab at MEDIQA-Chat 2023: Clinical Note Summarization from Doctor-Patient Conversations through Fine-tuning and In-context Learning | | 0
Gesture-Aware Zero-Shot Speech Recognition for Patients with Language Disorders | | 0
Get more for less: Principled Data Selection for Warming Up Fine-Tuning in LLMs | | 0
Get the gist? Using large language models for few-shot decontextualization | | 0
Getting to Production with Few-shot Natural Language Generation Models | | 0
GhostWriter: Using an LSTM for Automatic Rap Lyric Generation | | 0
GIELLM: Japanese General Information Extraction Large Language Model Utilizing Mutual Reinforcement Effect | | 0
GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture | | 0
GiusBERTo: A Legal Language Model for Personal Data De-identification in Italian Court of Auditors Decisions | | 0
Giving Simulated Cells a Voice: Evolving Prompt-to-Intervention Models for Cellular Control | | 0
GKS: Graph-based Knowledge Selector for Task-oriented Dialog System | | 0
GLaM: Efficient Scaling of Language Models with Mixture-of-Experts | | 0
Glauber Generative Model: Discrete Diffusion Models via Binary Classification | | 0
GL-Fusion: Rethinking the Combination of Graph Neural Network and Large Language model | | 0
GLM: General Language Model Pretraining with Autoregressive Blank Infilling | | 0
Global and Local Feature Learning for Ego-Network Analysis | | 0
Page 271 of 353

Benchmark Results
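
Most results below are reported as perplexity: the exponential of the average negative log-likelihood a model assigns to held-out tokens, so lower is better. A minimal sketch of the computation, using hypothetical per-token probabilities:

```python
import math

# Hypothetical probabilities a model assigns to each held-out token.
token_probs = [0.2, 0.05, 0.4, 0.1]

# Perplexity = exp(mean negative log-likelihood per token).
nll = [-math.log(p) for p in token_probs]
perplexity = math.exp(sum(nll) / len(nll))
print(f"perplexity = {perplexity:.2f}")
```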

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified
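
The character-level results above use bits per character (BPC) rather than perplexity: the average negative log2-probability per character, so a model with BPC b has a per-character perplexity of 2^b. A minimal sketch with assumed probabilities:

```python
import math

# Hypothetical probabilities a model assigns to each held-out character.
char_probs = [0.5, 0.25, 0.125, 0.5]

# BPC = mean negative log2-likelihood per character.
bpc = sum(-math.log2(p) for p in char_probs) / len(char_probs)
print(f"BPC = {bpc:.2f}, per-character perplexity = {2**bpc:.2f}")
```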

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified