SOTAVerified

Language Modelling

A language model is a model of natural language, typically a probability distribution over sequences of words or tokens. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
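
The word n-gram models mentioned above assign probabilities to text purely from corpus counts. As a minimal illustrative sketch (the toy corpus, helper names, and add-one smoothing choice are assumptions for this page, not taken from any listed paper), a word bigram model can be written as:

```python
from collections import Counter
import math

def train_bigram(corpus):
    """Count unigram and bigram frequencies over tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def sentence_logprob(sent, unigrams, bigrams):
    """Natural-log probability of a sentence under the add-one-smoothed bigram model."""
    vocab_size = len(unigrams)
    tokens = ["<s>"] + sent + ["</s>"]
    logp = 0.0
    for prev, word in zip(tokens, tokens[1:]):
        # P(word | prev) = (count(prev, word) + 1) / (count(prev) + V)
        logp += math.log((bigrams[prev, word] + 1) / (unigrams[prev] + vocab_size))
    return logp

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
unigrams, bigrams = train_bigram(corpus)
print(sentence_logprob(["the", "cat", "sat"], unigrams, bigrams))
```

Transformer-based LLMs model the same conditional distribution, but with learned parameters in place of count tables.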

Papers

Showing 8501–8550 of 17,610 papers

Title | Status | Hype
Understanding prompt engineering may not require rethinking generalization | | 0
Understanding Recurrent Neural Architectures by Analyzing and Synthesizing Long Distance Dependencies in Benchmark Sequential Datasets | | 0
Evaluating Contextual Embeddings and their Extraction Layers for Depression Assessment | | 0
Understanding Semantics from Speech Through Pre-training | | 0
Understanding the Behaviour of Neural Abstractive Summarizers using Contrastive Examples | | 0
Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models | | 0
Understanding the Dataset Practitioners Behind Large Language Model Development | | 0
Understanding the Impact of Long-Term Memory on Self-Disclosure with Large Language Model-Driven Chatbots for Public Health Intervention | | 0
Understanding the Inner Workings of Language Models Through Representation Dissimilarity | | 0
Understanding the Interplay of Scale, Data, and Bias in Language Models: A Case Study with BERT | | 0
Understanding the Logical and Semantic Structure of Large Documents | | 0
Understanding the Multi-modal Prompts of the Pre-trained Vision-Language Model | | 0
Understanding the Natural Language of DNA using Encoder-Decoder Foundation Models with Byte-level Precision | | 0
Understanding the performance gap between online and offline alignment algorithms | | 0
Understanding Sarcoidosis Using Large Language Models and Social Media Data | | 0
Understanding the role of FFNs in driving multilingual behaviour in LLMs | | 0
Understanding the Uncertainty of LLM Explanations: A Perspective Based on Reasoning Topology | | 0
Understanding Token Probability Encoding in Output Embeddings | | 0
Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation | | 0
Understanding Zero-shot Rare Word Recognition Improvements Through LLM Integration | | 0
Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information | | 0
Iterative Shallow Fusion of Backward Language Model for End-to-End Speech Recognition | | 0
Iterative Translation Refinement with Large Language Models | | 0
Iterative Value Function Optimization for Guided Decoding | | 0
"I think this is the most disruptive technology": Exploring Sentiments of ChatGPT Early Adopters using Twitter Data | | 0
It's About Time: Incorporating Temporality in Retrieval Augmented Language Models | | 0
"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents | | 0
It's All Connected: A Journey Through Test-Time Memorization, Attentional Bias, Retention, and Online Optimization | | 0
It's Basically the Same Language Anyway: the Case for a Nordic Language Model | | 0
It's High Time: A Survey of Temporal Information Retrieval and Question Answering | | 0
It's Morphing Time: Unleashing the Potential of Multiple LLMs via Multi-objective Optimization | | 0
I-Tuning: Tuning Frozen Language Models with Image for Lightweight Image Captioning | | 0
IUCL: Combining Information Sources for SemEval Task 5 | | 0
IVLMap: Instance-Aware Visual Language Grounding for Consumer Robot Navigation | | 0
I-WAS: a Data Augmentation Method with GPT-2 for Simile Detection | | 0
IXA Biomedical Translation System at WMT16 Biomedical Translation Task | | 0
JABER and SABER: Junior and Senior Arabic BERt | | 0
Jack of All Tasks, Master of Many: Designing General-purpose Coarse-to-Fine Vision-Language Model | | 0
Jack of All Tasks Master of Many: Designing General-Purpose Coarse-to-Fine Vision-Language Model | | 0
JaFIn: Japanese Financial Instruction Dataset | | 0
Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations | | 0
Jailbreaking Safeguarded Text-to-Image Models via Large Language Models | | 0
JAKET: Joint Pre-training of Knowledge Graph and Language Understanding | | 0
Jal Anveshak: Prediction of fishing zones using fine-tuned LlaMa 2 | | 0
Jamo Pair Encoding: Subcharacter Representation-based Extreme Korean Vocabulary Compression for Efficient Subword Tokenization | | 0
Japanese Lexical Simplification for Non-Native Speakers | | 0
Japanese Realistic Textual Entailment Corpus | | 0
Japanese to English Machine Translation using Preordering and Compositional Distributed Semantics | | 0
Japanese Zero Anaphora Resolution Can Benefit from Parallel Texts Through Neural Transfer Learning | | 0
JBNU-CCLab at SemEval-2022 Task 7: DeBERTa for Identifying Plausible Clarifications in Instructional Texts | | 0
Page 171 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | | Unverified
2 | GRU | Validation perplexity | 53.78 | | Unverified
3 | LSTM | Validation perplexity | 52.73 | | Unverified
4 | LSTM | Test perplexity | 48.7 | | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | | Unverified
6 | TCN | Test perplexity | 45.19 | | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | | Unverified
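
The perplexity figures above and below are the exponential of the average per-token negative log-likelihood on the evaluation set; lower is better. A minimal sketch of the computation (the function name and inputs are illustrative, not tied to any listed model):

```python
import math

def perplexity(token_logprobs):
    """exp(mean negative log-likelihood); token_logprobs are the natural-log
    probabilities a model assigned to each ground-truth token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigns every token probability 1/50 has perplexity 50:
# it is as uncertain as a uniform choice among 50 tokens.
print(perplexity([math.log(1 / 50)] * 100))  # ≈ 50.0
```
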
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | | Unverified
4 | R-Transformer | Test perplexity | 84.38 | | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | | Unverified
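
Bits per character (BPC) is the same quantity measured in base 2 at the character level: the average number of bits the model needs to encode each character. A short illustrative sketch (the function name and sample value are assumptions):

```python
import math

def bits_per_character(char_logprobs):
    """Mean negative log2-likelihood per character; char_logprobs are the
    natural-log probabilities a model assigned to each ground-truth character."""
    return -sum(char_logprobs) / (len(char_logprobs) * math.log(2))

# BPC relates to per-character perplexity as 2 ** BPC, so the table's best
# entry (1.22 BPC) corresponds to about 2.33 effective choices per character.
print(2 ** 1.22)  # ≈ 2.33
```
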
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | | Unverified
2 | OPT 125M | Test perplexity | 32.26 | | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | | Unverified