SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.

Source: Wikipedia
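
The word n-gram models mentioned above estimate the probability of each word from counts of short word sequences in a corpus. As a concrete illustration, here is a minimal bigram sketch in Python; the toy corpus and function name are purely illustrative, and a real system would add smoothing for unseen word pairs:

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus: list[str]) -> dict:
    """Estimate P(next word | previous word) from raw bigram counts."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        # Pad each sentence with start/end markers.
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    # Normalize the counts into conditional probabilities.
    return {
        prev: {word: c / sum(nexts.values()) for word, c in nexts.items()}
        for prev, nexts in counts.items()
    }

lm = train_bigram_lm(["the cat sat", "the dog sat"])
print(lm["the"])  # {'cat': 0.5, 'dog': 0.5}
```

Neural models replace these count tables with learned parameters, but the quantity being modelled, the probability of the next token given its context, is the same.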

Papers

Showing 9651–9700 of 17610 papers

Title | Status | Hype
WordArt Designer: User-Driven Artistic Typography Synthesis using Large Language Models | | 0
XL3M: A Training-free Framework for LLM Length Extension Based on Segment-wise Inference | | 0
WeLM: A Well-Read Pre-trained Language Model for Chinese | | 0
Unsupervised Distractor Generation via Large Language Model Distilling and Counterfactual Contrastive Decoding | | 0
WeNet: Weighted Networks for Recurrent Network Architecture Search | | 0
WenyanGPT: A Large Language Model for Classical Chinese Tasks | | 0
WEPO: Web Element Preference Optimization for LLM-based Web Navigation | | 0
Unlocking Historical Clinical Trial Data with ALIGN: A Compositional Large Language Model System for Medical Coding | | 0
WESSA at SemEval-2020 Task 9: Code-Mixed Sentiment Analysis using Transformers | | 0
West-of-N: Synthetic Preferences for Self-Improving Reward Models | | 0
Unifying Multitrack Music Arrangement via Reconstruction Fine-Tuning and Efficient Tokenization | | 0
WFST-Based Grapheme-to-Phoneme Conversion: Open Source tools for Alignment, Model-Building and Decoding | | 0
Unlocking Spatial Comprehension in Text-to-Image Diffusion Models | | 0
What are human values, and how do we align AI to them? | | 0
What are Models Thinking about? Understanding Large Language Model Hallucinations "Psychology" through Model Inner State Analysis | | 0
What are the limitations on the flux of syntactic dependencies? Evidence from UD treebanks | | 0
What are the limits of cross-lingual dense passage retrieval for low-resource languages? | | 0
What Are Tools Anyway? A Survey from the Language Model Perspective | | 0
What A Situated Language-Using Agent Must be Able to Do: A Top-Down Analysis | | 0
What BERT Based Language Model Learns in Spoken Transcripts: An Empirical Study | | 0
Word-based Domain Adaptation for Neural Machine Translation | | 0
What Can a Generative Language Model Answer About a Passage? | | 0
What can we gain from language models for morphological inflection? | | 0
What Changed? Detecting and Evaluating Instruction-Guided Image Edits with Multimodal Large Language Models | | 0
Word Class Based Language Modeling: A Case of Upper Sorbian | | 0
What does BERT Learn from Arabic Machine Reading Comprehension Datasets? | | 0
Unlocking the Potential of Large Language Models in the Nuclear Industry with Synthetic Data | | 0
What Does BERT with Vision Look At? | | 0
What Does it Mean for a Language Model to Preserve Privacy? | | 0
What do Language Model Probabilities Represent? From Distribution Estimation to Response Prediction | | 0
What do Language Representations Really Represent? | | 0
Unlocking the Potential of Model Merging for Low-Resource Languages | | 0
What do LLMs Know about Financial Markets? A Case Study on Reddit Market Sentiment Analysis | | 0
Word Embeddings based on Fixed-Size Ordinally Forgetting Encoding | | 0
XHate-999: Analyzing and Detecting Abusive Language Across Domains and Languages | | 0
What do RNN Language Models Learn about Filler–Gap Dependencies? | | 0
Understanding Language Model Circuits through Knowledge Editing | | 0
Word Embeddings Revisited: Do LLMs Offer Something New? | | 0
What do we need to know about an unknown word when parsing German | | 0
Worldwide Federated Training of Language Models | | 0
What goes into a word: generating image descriptions with top-down spatial knowledge | | 0
What Happens When Small Is Made Smaller? Exploring the Impact of Compression on Small Data Pretrained Language Models | | 0
WHAT-IF: Exploring Branching Narratives by Meta-Prompting Large Language Models | | 0
What is not where: the challenge of integrating spatial representations into deep learning architectures | | 0
Word-Free Spoken Language Understanding for Mandarin-Chinese | | 0
XGV-BERT: Leveraging Contextualized Language Model and Graph Neural Network for Efficient Software Vulnerability Detection | | 0
What Kind of Language Is Hard to Language-Model? | | 0
What Kinds of Tokens Benefit from Distant Text? An Analysis on Long Context Language Modeling | | 0
Word Importance Explains How Prompts Affect Language Model Outputs | | 0
Unsupervised Discovery of Unaccusative and Unergative Verbs | | 0

Page 194 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
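
The perplexity reported in these tables is the exponentiated average negative log-likelihood a model assigns to the held-out tokens; lower is better. A minimal sketch of the computation, with made-up per-token probabilities rather than numbers from any model above:

```python
import math

def perplexity(token_probs: list[float]) -> float:
    """Perplexity = exp(mean negative log-likelihood per token)."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# Hypothetical probabilities a model assigned to four test tokens.
print(perplexity([0.2, 0.1, 0.05, 0.3]))  # ≈ 7.6
```
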
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | | Unverified
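
Bits per character (BPC), the metric in the table above, is the base-2 cross-entropy per character, so 2^BPC is the implied per-character perplexity. A short sketch of the conversions; the helper names are mine, and the 1.22 input is simply the last entry above:

```python
import math

def bpc_to_char_perplexity(bpc: float) -> float:
    """Per-character perplexity implied by a bits-per-character score."""
    return 2.0 ** bpc

def nats_to_bpc(nats_per_char: float) -> float:
    """Convert a natural-log cross-entropy per character into BPC."""
    return nats_per_char / math.log(2)

print(bpc_to_char_perplexity(1.22))  # ≈ 2.33
```
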
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified