SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded earlier purely statistical models such as word n-gram language models.

Source: Wikipedia
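
The word n-gram language models mentioned above are simple enough to sketch directly. Below is a minimal bigram model with add-one (Laplace) smoothing; the toy corpus, function names, and perplexity evaluation are illustrative assumptions, not taken from any paper or benchmark on this page.

```python
from collections import Counter
import math

# Toy corpus -- illustrative only, not from any benchmark on this page.
corpus = "the cat sat on the mat the cat ran on the mat".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigram_counts)

def bigram_prob(prev_word, word):
    """P(word | prev_word) with add-one (Laplace) smoothing."""
    return (bigram_counts[(prev_word, word)] + 1) / (unigram_counts[prev_word] + vocab_size)

def perplexity(tokens):
    """exp of the average negative log-probability per predicted token."""
    log_prob = sum(math.log(bigram_prob(p, w)) for p, w in zip(tokens, tokens[1:]))
    return math.exp(-log_prob / (len(tokens) - 1))

print(perplexity("the cat sat on the mat".split()))
```

The same perplexity definition, scaled to real vocabularies and neural models, is the metric reported in the benchmark tables below.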

Papers

Showing 6501–6550 of 17,610 papers

Title | Status | Hype
Linking Theories and Methods in Cognitive Sciences via Joint Embedding of the Scientific Literature: The Example of Cognitive Control | Code | 0
PROP: Pre-training with Representative Words Prediction for Ad-hoc Retrieval | Code | 0
PROPS: Probabilistic personalization of black-box sequence models | Code | 0
Prose2Poem: The Blessing of Transformers in Translating Prose to Persian Poetry | Code | 0
LLM vs. Lawyers: Identifying a Subset of Summary Judgments in a Large UK Case Law Dataset | Code | 0
Semantically Consistent Data Augmentation for Neural Machine Translation via Conditional Masked Language Model | Code | 0
Randomized Geometric Algebra Methods for Convex Neural Networks | Code | 0
To Tell The Truth: Language of Deception and Language Models | Code | 0
Mistral-SPLADE: LLMs for better Learned Sparse Retrieval | Code | 0
Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement | Code | 0
Prosody Analysis of Audiobooks | Code | 0
PerSRV: Personalized Sticker Retrieval with Vision-Language Model | Code | 0
PERT: A New Solution to Pinyin to Character Conversion Task | Code | 0
Can Out-of-Domain data help to Learn Domain-Specific Prompts for Multimodal Misinformation Detection? | Code | 0
Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT | Code | 0
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing | Code | 0
To Tune or Not To Tune? How About the Best of Both Worlds? | Code | 0
Story Ending Prediction by Transferable BERT | Code | 0
Multi-Granularity Structural Knowledge Distillation for Language Model Compression | Code | 0
StrassenNets: Deep Learning with a Multiplication Budget | Code | 0
Prospect Personalized Recommendation on Large Language Model-based Agent Platform | Code | 0
Multi-Granularity Tibetan Textual Adversarial Attack Method Based on Masked Language Model | Code | 0
TourSynbio-Search: A Large Language Model Driven Agent Framework for Unified Search Method for Protein Engineering | Code | 0
Self-Training Pre-Trained Language Models for Zero- and Few-Shot Multi-Dialectal Arabic Sequence Labeling | Code | 0
ProSPer: Probing Human and Neural Network Language Model Understanding of Spatial Perspective | Code | 0
Streaming Joint Speech Recognition and Disfluency Detection | Code | 0
Multi-Task Deep Neural Networks for Natural Language Understanding | Code | 0
Prot2Chat: Protein LLM with Early-Fusion of Text, Sequence and Structure | Code | 0
Reinforced Large Language Model is a formal theorem prover | Code | 0
Self-training Large Language Models through Knowledge Detection | Code | 0
Self-training Improves Pre-training for Few-shot Learning in Task-oriented Dialog Systems | Code | 0
Leveraging Open Information Extraction for More Robust Domain Transfer of Event Trigger Detection | Code | 0
Measuring Social Biases in Masked Language Models by Proxy of Prediction Quality | Code | 0
Recurrent Highway Networks | Code | 0
ÚFAL CorPipe at CRAC 2023: Larger Context Improves Multilingual Coreference Resolution | Code | 0
Toward a Thermodynamics of Meaning | Code | 0
PhayaThaiBERT: Enhancing a Pretrained Thai Language Model with Unassimilated Loanwords | Code | 0
Self-Train Before You Transcribe | Code | 0
StrucTexT: Structured Text Understanding with Multi-Modal Transformers | Code | 0
StrucTexTv2: Masked Visual-Textual Prediction for Document Image Pre-training | Code | 0
PHD: Pixel-Based Language Modeling of Historical Documents | Code | 0
Protecting multimodal large language models against misleading visualizations | Code | 0
Rethinking the Event Coding Pipeline with Prompt Entailment | Code | 0
Self Supervision for Attention Networks | Code | 0
On the Cross-lingual Transferability of Monolingual Representations | Code | 0
Language Models with Pre-Trained (GloVe) Word Embeddings | Code | 0
News Recommendation with Category Description by a Large Language Model | Code | 0
On the Stability of a non-hyperbolic nonlinear map with non-bounded set of non-isolated fixed points with applications to Machine Learning | Code | 0
Kneser-Ney Smoothing on Expected Counts | Code | 0
Structural Language Models of Code | Code | 0
Page 131 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified
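
For reference, the test perplexity reported in these tables is the exponential of a model's average negative log-likelihood per token on the held-out set; lower is better. A minimal sketch, assuming a hypothetical `log_probs` list of per-token natural-log probabilities from some model:

```python
import math

def perplexity(log_probs):
    """Perplexity from per-token natural-log probabilities (nats)."""
    avg_nll = -sum(log_probs) / len(log_probs)
    return math.exp(avg_nll)

# A model that assigned probability 0.02 to every test token would score
# a perplexity of 50 -- roughly the range of the LSTM/GRU rows above.
print(perplexity([math.log(0.02)] * 1000))  # ~50.0
```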
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | – | Unverified
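
Bits per character (BPC) is the same cross-entropy measured in bits and normalized per character rather than per word, so the implied per-character perplexity is 2^BPC. A trivial conversion helper (illustrative only):

```python
def bpc_to_char_perplexity(bpc: float) -> float:
    """Per-character perplexity implied by a bits-per-character score."""
    return 2.0 ** bpc

# The 1.22 BPC entry above implies a per-character perplexity of ~2.33.
print(bpc_to_char_perplexity(1.22))
```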
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified