SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.
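As a concrete contrast with the transformer-based LLMs above, the sketch below trains a word bigram model (an n-gram model with n = 2) on a tiny made-up corpus; the corpus, the sentence-boundary markers, and the add-alpha smoothing constant are illustrative choices, not anything taken from the papers or benchmarks listed on this page.

```python
from collections import Counter
import math

# Tiny made-up corpus; real n-gram models were trained on far larger text collections.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count unigrams and bigrams, adding sentence-boundary markers.
unigrams, bigrams = Counter(), Counter()
for line in corpus:
    tokens = ["<s>"] + line.split() + ["</s>"]
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

vocab_size = len(unigrams)

def bigram_prob(prev, word, alpha=1.0):
    """P(word | prev) with add-alpha smoothing so unseen pairs keep nonzero probability."""
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)

def sentence_log_prob(sentence):
    """Log-probability of a sentence as the sum of log bigram probabilities."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    return sum(math.log(bigram_prob(p, w)) for p, w in zip(tokens, tokens[1:]))

print(sentence_log_prob("the cat sat on the rug"))  # seen words, partly unseen bigrams
```

Smoothing matters because any bigram unseen in training would otherwise receive probability zero and make whole test sentences impossible under the model.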

Source: Wikipedia

Papers

Showing 8101-8150 of 17,610 papers

Title | Status | Hype
Large Language Model-guided Document Selection | | 0
Legal Documents Drafting with Fine-Tuned Pre-Trained Large Language Model | Code | 0
Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation | | 0
Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness | | 0
LLplace: The 3D Indoor Scene Layout Generation and Editing via Large Language Model | | 0
What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages | | 0
Improving Audio Codec-based Zero-Shot Text-to-Speech Synthesis with Multi-Modal Context and Large Language Model | | 0
BindGPT: A Scalable Framework for 3D Molecular Design via Language Modeling and Reinforcement Learning | | 0
DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs | | 0
Confabulation: The Surprising Value of Large Language Model Hallucinations | | 0
Every Answer Matters: Evaluating Commonsense with Probabilistic Measures | Code | 0
Are Large Language Models the New Interface for Data Pipelines? | | 0
HORAE: A Domain-Agnostic Language for Automated Service Regulation | Code | 0
Exploring Robustness in Doctor-Patient Conversation Summarization: An Analysis of Out-of-Domain SOAP Notes | | 0
Error-preserving Automatic Speech Recognition of Young English Learners' Language | Code | 0
Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation | Code | 0
Does your data spark joy? Performance gains from domain upsampling at the end of training | | 0
Item-Language Model for Conversational Recommendation | | 0
From Tarzan to Tolkien: Controlling the Language Proficiency Level of LLMs for Content Generation | | 0
Knowledge-Infused Legal Wisdom: Navigating LLM Consultation through the Lens of Diagnostics and Positive-Unlabeled Reinforcement Learning | | 0
Prompt-based Visual Alignment for Zero-shot Policy Transfer | | 0
PLaD: Preference-based Large Language Model Distillation with Pseudo-Preference Pairs | | 0
RadBARTsum: Domain Specific Adaption of Denoising Sequence-to-Sequence Models for Abstractive Radiology Report Summarization | | 0
LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback | Code | 0
Language Model Can Do Knowledge Tracing: Simple but Effective Method to Integrate Language Model and Knowledge Tracing Task | | 0
Ranking Manipulation for Conversational Search Engines | Code | 0
The Task-oriented Queries Benchmark (ToQB) | | 0
OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step | | 0
TruthEval: A Dataset to Evaluate LLM Truthfulness and Reliability | Code | 0
RKLD: Reverse KL-Divergence-based Knowledge Distillation for Unlearning Personal Information in Large Language Models | | 0
Radar Spectra-Language Model for Automotive Scene Parsing | | 0
LongSSM: On the Length Extension of State-space Models in Language Modelling | | 0
Self-Supervised Singing Voice Pre-Training towards Speech-to-Singing Conversion | | 0
Phonetic Enhanced Language Modeling for Text-to-Speech Synthesis | | 0
MaskSR: Masked Language Model for Full-band Speech Restoration | | 0
Randomized Geometric Algebra Methods for Convex Neural Networks | Code | 0
Order-Independence Without Fine Tuning | Code | 0
Meta-Designing Quantum Experiments with Language Models | | 0
Towards Effective Time-Aware Language Representation: Exploring Enhanced Temporal Understanding in Language Models | | 0
Large Language Model-Enabled Multi-Agent Manufacturing Systems | | 0
DrEureka: Language Model Guided Sim-To-Real Transfer | | 0
HoneyGPT: Breaking the Trilemma in Terminal Honeypots with Large Language Model | | 0
HPE-CogVLM: Advancing Vision Language Models with a Head Pose Grounding Task | | 0
Edit Distance Robust Watermarks via Indexing Pseudorandom Codes | | 0
CR-UTP: Certified Robustness against Universal Text Perturbations on Large Language Models | Code | 0
Disentangling Logic: The Role of Context in Large Language Model Reasoning Capabilities | Code | 0
Diver: Large Language Model Decoding with Span-Level Mutual Information Verification | | 0
An Independence-promoting Loss for Music Generation with Language Models | | 0
Assessing the Performance of Chinese Open Source Large Language Models in Information Extraction Tasks | | 0
Conditional Language Learning with Context | Code | 0
Page 163 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
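The "Claimed" numbers above are perplexities: the exponential of the average negative log-likelihood the model assigns to held-out tokens, so lower is better. A minimal sketch of the computation, using made-up per-token probabilities rather than output from any model in the table:

```python
import math

# Made-up probabilities a model might assign to five held-out tokens.
token_probs = [0.12, 0.05, 0.30, 0.08, 0.21]

# Perplexity = exp(average negative log-likelihood per token);
# a model that were merely uniform over a vocabulary of V words would score V.
avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)
print(f"perplexity = {perplexity:.2f}")
```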

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified
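Bits per character (BPC) is the character-level counterpart of perplexity: the average negative log2-probability of each held-out character, so again lower is better, and 2 raised to the BPC gives a character-level perplexity. A minimal sketch with made-up per-character probabilities, not output from any model in the table:

```python
import math

# Made-up probabilities a character-level model might assign to six held-out characters.
char_probs = [0.4, 0.2, 0.6, 0.1, 0.35, 0.5]

# BPC = average negative log2-probability per character; lower is better.
bpc = -sum(math.log2(p) for p in char_probs) / len(char_probs)
char_perplexity = 2 ** bpc  # character-level perplexity implied by the BPC
print(f"BPC = {bpc:.3f}, character-level perplexity = {char_perplexity:.2f}")
```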

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified