SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model.

Source: Wikipedia
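
For context on the word n-gram models mentioned above: an n-gram model estimates the probability of each word from counts of the n-1 words preceding it. Below is a minimal, purely illustrative sketch of a word bigram model in Python (the function names and toy corpus are invented for this example; this is not code from any paper or system listed on this page):

```python
import random
from collections import Counter, defaultdict

def train_bigram_model(tokens):
    """Count how often each word follows each preceding word."""
    counts = defaultdict(Counter)
    for prev, curr in zip(tokens, tokens[1:]):
        counts[prev][curr] += 1
    return counts

def next_word_prob(counts, prev, word):
    """P(word | prev) with add-one (Laplace) smoothing over the seen vocabulary."""
    vocab = set(counts) | {w for following in counts.values() for w in following}
    total = sum(counts[prev].values())
    return (counts[prev][word] + 1) / (total + len(vocab))

def sample_next(counts, prev):
    """Sample the next word in proportion to observed bigram counts."""
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

# Toy corpus, invented for illustration.
tokens = "the cat sat on the mat and the cat slept".split()
model = train_bigram_model(tokens)
print(next_word_prob(model, "the", "cat"))  # seen bigram: relatively high
print(sample_next(model, "the"))            # e.g. 'cat' or 'mat'
```

Transformer-based LLMs replace these fixed-window count tables with learned distributions conditioned on far longer contexts, which is why they superseded both n-gram and recurrent models.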

Papers

Showing 5551–5600 of 17610 papers

Title | Status | Hype
Demonstration of an Adversarial Attack Against a Multimodal Vision Language Model for Pathology Imaging | Code | 0
V-Zen: Efficient GUI Understanding and Precise Grounding With A Novel Multimodal LLM | Code | 0
Walk Extraction Strategies for Node Embeddings with RDF2Vec in Knowledge Graphs | Code | 0
Wanda++: Pruning Large Language Models via Regional Gradients | Code | 0
WatChat: Explaining perplexing programs by debugging mental models | Code | 0
Watch What You Just Said: Image Captioning with Text-Conditional Attention | Code | 0
Watermark under Fire: A Robustness Evaluation of LLM Watermarking | Code | 0
Scaling Capability in Token Space: An Analysis of Large Vision Language Model | Code | 0
We are what we repeatedly do: Inducing and deploying habitual schemas in persona-based responses | Code | 0
Web Page Classification using LLMs for Crawling Support | Code | 0
We're Calling an Intervention: Exploring Fundamental Hurdles in Adapting Language Models to Nonstandard Text | Code | 0
WET: Overcoming Paraphrasing Vulnerabilities in Embeddings-as-a-Service with Linear Transformation Watermarks | Code | 0
What a neural language model tells us about spatial relations | Code | 0
What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models | Code | 0
What Does BERT Look At? An Analysis of BERT's Attention | Code | 0
What Do Llamas Really Think? Revealing Preference Biases in Language Model Representations | Code | 0
What Do Recurrent Neural Network Grammars Learn About Syntax? | Code | 0
What makes a language easy to deep-learn? Deep neural networks and humans similarly benefit from compositional structure | Code | 0
What Makes Pre-trained Language Models Better Zero-shot Learners? | Code | 0
What's in a Name? Evaluating Assembly-Part Semantic Knowledge in Language Models through User-Provided Names in CAD Files | Code | 0
What's the Difference? Supporting Users in Identifying the Effects of Prompt and Model Changes Through Token Patterns | Code | 0
When Babies Teach Babies: Can student knowledge sharing outperform Teacher-Guided Distillation on small datasets? | Code | 0
When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models | Code | 0
When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards | Code | 0
When can I Speak? Predicting initiation points for spoken dialogue agents | Code | 0
When Dimensionality Hurts: The Role of LLM Embedding Compression for Noisy Regression Tasks | Code | 0
When Does Classical Chinese Help? Quantifying Cross-Lingual Transfer in Hanja and Kanbun | Code | 0
When Does Syntax Mediate Neural Language Model Performance? Evidence from Dropout Probes | Code | 0
When FastText Pays Attention: Efficient Estimation of Word Representations using Constrained Positional Weighting | Code | 0
When in Doubt, Ask: Generating Answerable and Unanswerable Questions, Unsupervised | Code | 0
When Is Multilinguality a Curse? Language Modeling for 250 High- and Low-Resource Languages | Code | 0
When Quantization Affects Confidence of Large Language Models? | Code | 0
When the Music Stops: Tip-of-the-Tongue Retrieval for Music | Code | 0
Where to put the Image in an Image Caption Generator | Code | 0
Whodunit? Learning to Contrast for Authorship Attribution | Code | 0
Who is GPT-3? An Exploration of Personality, Values and Demographics | Code | 0
Who’s on First?: Probing the Learning and Representation Capabilities of Language Models on Deterministic Closed Domains | Code | 0
Chain-of-Factors Paper-Reviewer Matching | Code | 0
WIBA: What Is Being Argued? A Comprehensive Approach to Argument Mining | Code | 0
Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model | Code | 0
Women Are Beautiful, Men Are Leaders: Gender Stereotypes in Machine Translation and Language Modeling | Code | 0
word2vec Explained: deriving Mikolov et al.'s negative-sampling word-embedding method | Code | 0
Word Ordering Without Syntax | Code | 0
Word sense extension | Code | 0
Would I Lie To You? Inference Time Alignment of Language Models using Direct Preference Heads | Code | 0
Written Term Detection Improves Spoken Term Detection | Code | 0
XAMPLER: Learning to Retrieve Cross-Lingual In-Context Examples | Code | 0
XCompress: LLM assisted Python-based text compression toolkit | Code | 0
XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach | Code | 0
XFEVER: Exploring Fact Verification across Languages | Code | 0
Page 112 of 353

Benchmark Results
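
A note on the metrics below: perplexity is the exponential of the average negative log-likelihood per token, and bits per character (BPC) is the average negative log2-likelihood per character, so for character-level models perplexity = 2^BPC. Here is a small illustrative sketch of both computations (the probability values are made up for demonstration and are unrelated to the results in the tables):

```python
import math

def perplexity(token_log_probs):
    """exp of the average negative natural-log likelihood per token."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

def bits_per_character(char_log_probs):
    """Average negative log2-likelihood per character."""
    return -sum(char_log_probs) / (len(char_log_probs) * math.log(2))

# Made-up natural-log probabilities a model might assign to each
# observed token / character in a held-out sequence.
token_lps = [math.log(p) for p in (0.20, 0.10, 0.05, 0.30)]
char_lps  = [math.log(p) for p in (0.50, 0.40, 0.60, 0.45)]

print(f"perplexity: {perplexity(token_lps):.2f}")         # ~7.60
print(f"BPC:        {bits_per_character(char_lps):.2f}")  # ~1.05
```

Lower is better for both metrics; a model that guesses uniformly over a vocabulary of V tokens has a perplexity of exactly V.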

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | – | Unverified
2 | GRU | Validation perplexity | 53.78 | – | Unverified
3 | LSTM | Validation perplexity | 52.73 | – | Unverified
4 | LSTM | Test perplexity | 48.7 | – | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | – | Unverified
6 | TCN | Test perplexity | 45.19 | – | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | – | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | – | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | – | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | – | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | – | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | – | Unverified
4 | R-Transformer | Test perplexity | 84.38 | – | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | – | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | – | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | – | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | – | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | – | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | – | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | – | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | – | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | – | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | – | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | – | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | – | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | – | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | – | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | – | Unverified
2 | OPT 125M | Test perplexity | 32.26 | – | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | – | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | – | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | – | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | – | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | – | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | – | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | – | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | – | Unverified