SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language: it assigns probabilities to sequences of words. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
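
For context, the word n-gram models mentioned above estimate each word's probability from counts of the preceding n-1 words. Below is a minimal bigram (n = 2) sketch in Python; the toy corpus and add-one smoothing are assumptions made for illustration, not any system from the leaderboards that follow:

from collections import Counter

# Toy corpus; any whitespace-tokenized text works (illustrative assumption).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigram_counts)

def bigram_prob(prev, word):
    # P(word | prev) with add-one (Laplace) smoothing so unseen pairs keep nonzero probability.
    return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + vocab_size)

print(bigram_prob("the", "cat"))  # seen pair: (1 + 1) / (4 + 8) ≈ 0.167
print(bigram_prob("cat", "rug"))  # unseen pair: (0 + 1) / (1 + 8) ≈ 0.111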

Papers

Showing 7801–7850 of 17610 papers

Title | Status | Hype
How good are Large Language Models on African Languages? | - | 0
How Good is ChatGPT at Audiovisual Deepfake Detection: A Comparative Study of ChatGPT, AI Models and Human Perception | - | 0
How Green are Neural Language Models? Analyzing Energy Consumption in Text Summarization Fine-tuning | - | 0
How Length Prediction Influence the Performance of Non-Autoregressive Translation? | - | 0
How LSTM Encodes Syntax: Exploring Context Vectors and Semi-Quantization on Natural Text | - | 0
How Many Languages Can a Language Model Model? | - | 0
How Many Parameters Does it Take to Change a Light Bulb? Evaluating Performance in Self-Play of Conversational Games as a Function of Model Characteristics | - | 0
How much do language models memorize? | - | 0
How predictable is language model benchmark performance? | - | 0
How Self-Attention Improves Rare Class Performance in a Question-Answering Dialogue Agent | - | 0
How Teachers Can Use Large Language Models and Bloom's Taxonomy to Create Educational Quizzes | - | 0
How to Adapt Your Large-Scale Vision-and-Language Model | - | 0
How to Avoid Sentences Spelling Boring? Towards a Neural Approach to Unsupervised Metaphor Generation | - | 0
How to Avoid Unwanted Pregnancies: Domain Adaptation using Neural Network Models | - | 0
How to Bridge the Gap between Modalities: Survey on Multimodal Large Language Model | - | 0
How to Build an AI Tutor That Can Adapt to Any Course Using Knowledge Graph-Enhanced Retrieval-Augmented Generation (KG-RAG) | - | 0
How to Construct Deep Recurrent Neural Networks | - | 0
How to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and Act in Fantasy Worlds | - | 0
How to Prune Your Language Model: Recovering Accuracy on the "Sparsity May Cry" Benchmark | - | 0
How to represent a word and predict it, too: Improving tied architectures for language modelling | - | 0
How User Language Affects Conflict Fatality Estimates in ChatGPT | - | 0
How Well Can Vision Language Models See Image Details? | - | 0
How Well Do Deep Learning Models Capture Human Concepts? The Case of the Typicality Effect | - | 0
How will Language Modelers like ChatGPT Affect Occupations and Industries? | - | 0
HPC-Coder-V2: Studying Code LLMs Across Low-Resource Parallel Languages | - | 0
HPC-GPT: Integrating Large Language Model for High-Performance Computing | - | 0
HPE-CogVLM: Advancing Vision Language Models with a Head Pose Grounding Task | - | 0
HPS: Hard Preference Sampling for Human Preference Alignment | - | 0
HRLAIF: Improvements in Helpfulness and Harmlessness in Open-domain Reinforcement Learning From AI Feedback | - | 0
HSC-GPT: A Large Language Model for Human Settlements Construction | - | 0
HSI-GPT: A General-Purpose Large Scene-Motion-Language Model for Human Scene Interaction | - | 0
HSSA tree structures for BTG-based preordering in machine translation | - | 0
HTLM: Hyper-Text Pre-Training and Prompting of Language Models | - | 0
HuaAMS at SemEval-2022 Task 8: Combining Translation and Domain Pre-training for Cross-lingual News Article Similarity | - | 0
Human Adversarial QA: Did the Model Understand the Paragraph? | - | 0
HumanAesExpert: Advancing a Multi-Modality Foundation Model for Human Image Aesthetic Assessment | - | 0
Human-centric Dialog Training via Offline Reinforcement Learning | - | 0
Human-Centric NLP or AI-Centric Illusion?: A Critical Investigation | - | 0
Human Evaluation of Procedural Knowledge Graph Extraction from Text with Large Language Models | - | 0
Human Implicit Preference-Based Policy Fine-tuning for Multi-Agent Reinforcement Learning in USV Swarm | - | 0
Human Instruction-Following with Deep Reinforcement Learning via Transfer-Learning from Text | - | 0
Humanity's Last Exam | - | 0
Human Language Modeling | - | 0
Human Latency Conversational Turns for Spoken Avatar Systems | - | 0
Human-like Natural Language Generation Using Monte Carlo Tree Search | - | 0
Human Mobility Modeling with Limited Information via Large Language Models | - | 0
Human-Object Interaction from Human-Level Instructions | - | 0
Human-Object Interaction with Vision-Language Model Guided Relative Movement Dynamics | - | 0
HumanOmni: A Large Vision-Speech Language Model for Human-Centric Video Understanding | - | 0
Human or Not? A Gamified Approach to the Turing Test | - | 0
Page 157 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
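
For reference, the validation/test perplexity in these tables is the exponential of the average negative log-likelihood the model assigns to the held-out tokens: lower is better, and a perplexity of k means the model is on average as uncertain as a uniform choice among k words. A minimal sketch in Python, with made-up per-token probabilities standing in for a real model's outputs:

import math

def perplexity(token_probs):
    # token_probs: probability the model assigned to each held-out token, in order.
    # Perplexity = exp(average negative log-likelihood per token).
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# Hypothetical probabilities for a four-token test string (illustrative only).
print(perplexity([0.2, 0.1, 0.5, 0.05]))  # ≈ 6.69

Perplexities are only comparable when computed over the same test set and tokenization, which is why each leaderboard here is scored separately.
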
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | - | Unverified
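
Bits per character (BPC), the metric in the table above, is the same cross-entropy idea measured per character in bits (base-2 logarithm), and it converts to a per-character perplexity of 2^BPC:

\mathrm{BPC} = -\frac{1}{N}\sum_{i=1}^{N} \log_2 p(c_i \mid c_{<i}), \qquad \text{per-character perplexity} = 2^{\mathrm{BPC}}

For example, the spread above from 1.67 down to 1.22 BPC corresponds to per-character perplexities of roughly 2^1.67 ≈ 3.18 and 2^1.22 ≈ 2.33.
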
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified