SOTAVerified

Language Modelling

A language model is a model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
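The word n-gram models mentioned above estimate P(word | previous words) directly from corpus counts. A minimal bigram sketch, with add-one (Laplace) smoothing; the toy corpus and function names here are illustrative, not from the site:

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus):
    """Count word bigrams and return a conditional probability
    P(cur | prev) with add-one (Laplace) smoothing over the vocabulary."""
    vocab = {w for sent in corpus for w in sent} | {"<s>", "</s>"}
    counts = defaultdict(Counter)
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        for prev, cur in zip(tokens, tokens[1:]):
            counts[prev][cur] += 1
    V = len(vocab)

    def prob(prev, cur):
        return (counts[prev][cur] + 1) / (sum(counts[prev].values()) + V)

    return prob

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
p = train_bigram_lm(corpus)
# "the" starts both training sentences, so it is the likeliest sentence opener
assert p("<s>", "the") > p("<s>", "cat")
```

Smoothing matters because unseen bigrams would otherwise get probability zero, making any held-out sentence containing one infinitely surprising.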

Papers

Showing 10301–10350 of 17,610 papers

| Title | Status | Hype |
| --- | --- | --- |
| NarrativeBridge: Enhancing Video Captioning with Causal-Temporal Narrative | | 0 |
| Narrow Transformer: StarCoder-Based Java-LM For Desktop | | 0 |
| NAS-BERT: Task-Agnostic and Adaptive-Size BERT Compression with Neural Architecture Search | | 0 |
| NASTEA: Investigating Narrative Schemas through Annotated Entities | | 0 |
| Natural Language Decomposition and Interpretation of Complex Utterances | | 0 |
| Natural Language Descriptions for Human Activities in Video Streams | | 0 |
| Natural Language Generation from Pictographs | | 0 |
| Natural Language Generation through Character-based RNNs with Finite-state Prior Knowledge | | 0 |
| Natural Language Instructions for Intuitive Human Interaction with Robotic Assistants in Field Construction Work | | 0 |
| Natural Language Model Re-usability for Scaling to Different Domains | | 0 |
| Natural language processing for clusterization of genes according to their functions | | 0 |
| Natural Language to Code Generation in Interactive Data Science Notebooks | | 0 |
| Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving | | 0 |
| Naver Labs Europe's Participation in the Robustness, Chat, and Biomedical Tasks at WMT 2020 | | 0 |
| NAVER Machine Translation System for WAT 2015 | | 0 |
| NaVid: Video-based VLM Plans the Next Step for Vision-and-Language Navigation | | 0 |
| Navigating the Dual Facets: A Comprehensive Evaluation of Sequential Memory Editing in Large Language Models | | 0 |
| Navigating WebAI: Training Agents to Complete Web Tasks with Large Language Models and Reinforcement Learning | | 0 |
| Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models | | 0 |
| Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning | | 0 |
| N-best T5: Robust ASR Error Correction using Multiple Input Hypotheses and Constrained Decoding Space | | 0 |
| NC-DRE: Leveraging Non-entity Clue Information for Document-level Relation Extraction | | 0 |
| Nearest Class-Center Simplification through Intermediate Layers | | 0 |
| Nearest Neighbor Language Models for Stylistic Controllable Generation | | 0 |
| Nearest Neighbor Speculative Decoding for LLM Generation and Attribution | | 0 |
| Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters | | 0 |
| Needle in the Haystack for Memory Based Large Language Models | | 0 |
| Negation: A Pink Elephant in the Large Language Models' Room? | | 0 |
| Negative-Prompt-driven Alignment for Generative Language Model | | 0 |
| Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings | | 0 |
| Nemotron-4 15B Technical Report | | 0 |
| Nepali Encoder Transformers: An Analysis of Auto Encoding Transformer Language Models for Nepali Text Classification | | 0 |
| NER-BERT: A Pre-trained Model for Low-Resource Entity Tagging | | 0 |
| Healthcare NER Models Using Language Model Pretraining | | 0 |
| NERVE at ROCLING 2022 Shared Task: A Comparison of Three Named Entity Recognition Frameworks Based on Language Model and Lexicon Approach | | 0 |
| Networks of Networks: Complexity Class Principles Applied to Compound AI Systems Design | | 0 |
| Network Visualization of ChatGPT Research: a study based on term and keyword co-occurrence network analysis | | 0 |
| Neural and rule-based Finnish NLP models---expectations, experiments and experiences | | 0 |
| Neural Architecture Search for Natural Language Understanding | | 0 |
| Neural Borrowing Detection with Monolingual Lexical Models | | 0 |
| Neural Composition: Learning to Generate from Multiple Models | | 0 |
| Neural Data-to-Text Generation Based on Small Datasets: Comparing the Added Value of Two Semi-Supervised Learning Approaches on Top of a Large Language Model | | 0 |
| Neural DrugNet | | 0 |
| Neural Embeddings for Text | | 0 |
| Neural-FST Class Language Model for End-to-End Speech Recognition | | 0 |
| Neural Generation for Czech: Data and Baselines | | 0 |
| Neural Grammatical Error Correction with Finite State Transducers | | 0 |
| Neural GRANNy at SemEval-2019 Task 2: A combined approach for better modeling of semantic relationships in semantic frame induction | | 0 |
| Neural-Guided Program Synthesis of Information Extraction Rules Using Self-Supervision | | 0 |
| Neural Headline Generation on Abstract Meaning Representation | | 0 |
Page 207 of 353

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Decay RNN | Validation perplexity | 76.67 | | Unverified |
| 2 | GRU | Validation perplexity | 53.78 | | Unverified |
| 3 | LSTM | Validation perplexity | 52.73 | | Unverified |
| 4 | LSTM | Test perplexity | 48.7 | | Unverified |
| 5 | Temporal CNN | Test perplexity | 45.2 | | Unverified |
| 6 | TCN | Test perplexity | 45.19 | | Unverified |
| 7 | GCNN-8 | Test perplexity | 44.9 | | Unverified |
| 8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified |
| 9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified |
| 10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified |
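The perplexity figures reported in these tables are the exponential of the average per-token negative log-likelihood the model assigns to held-out text. A minimal computation sketch; the uniform toy distribution is an illustrative assumption:

```python
import math

def perplexity(log_probs):
    """Perplexity = exp of the mean negative log-likelihood (in nats) per token."""
    nll = -sum(log_probs) / len(log_probs)
    return math.exp(nll)

# A model that assigns probability 1/4 to each of 4 held-out tokens
# is exactly as uncertain as a uniform choice among 4 outcomes:
lp = [math.log(0.25)] * 4
print(perplexity(lp))  # 4.0
```

Lower is better: a perplexity of 37.5 means the model is, on average, as uncertain as if it were choosing uniformly among 37.5 tokens at each step.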
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | TCN | Test perplexity | 108.47 | | Unverified |
| 2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified |
| 3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified |
| 4 | R-Transformer | Test perplexity | 84.38 | | Unverified |
| 5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified |
| 6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified |
| 7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified |
| 8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified |
| 9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified |
| 10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified |
| 2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified |
| 3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified |
| 4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified |
| 5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified |
| 6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified |
| 7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified |
| 8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified |
| 9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified |
| 10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified |
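Bits per character (BPC), the metric used for character-level benchmarks, is the per-character cross-entropy expressed in base 2; it relates to per-character perplexity as 2^BPC. A small conversion sketch; the function names are illustrative:

```python
import math

def bpc_from_nll_nats(nll_nats):
    """Convert per-character cross-entropy from nats to bits: divide by ln 2."""
    return nll_nats / math.log(2)

def char_perplexity_from_bpc(bpc):
    """Per-character perplexity is 2 raised to the BPC."""
    return 2 ** bpc

# A cross-entropy of ln 2 nats per character is exactly 1 bit per character,
# i.e. the model is as uncertain as a fair coin flip at each character:
print(bpc_from_nll_nats(math.log(2)))   # 1.0
print(char_perplexity_from_bpc(1.0))    # 2.0
```

So the gap between 1.67 and 1.22 BPC in the table is roughly a third fewer bits needed per character, not a small difference.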
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified |
| 2 | OPT 125M | Test perplexity | 32.26 | | Unverified |
| 3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified |
| 4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified |
| 5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified |
| 6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified |
| 7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified |
| 8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified |
| 9 | Transformer 125M | Test perplexity | 10.7 | | Unverified |
| 10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified |