SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as the word n-gram language model.
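For readers unfamiliar with the purely statistical models mentioned above, the sketch below trains a word bigram model with add-one (Laplace) smoothing and scores two candidate sentences. It is a minimal illustration on a toy corpus, not any benchmark's reference implementation; the corpus and the smoothing choice are assumptions made for the example.

```python
from collections import Counter
import math

# Toy corpus standing in for real training text (illustrative only).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "the cat chased the dog",
]

# Count unigrams and bigrams, padding each sentence with boundary markers.
unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

vocab_size = len(unigrams)

def bigram_logprob(prev, word):
    """Add-one smoothed log P(word | prev)."""
    return math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size))

def sentence_logprob(sentence):
    """Log-probability of a sentence under the bigram model."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    return sum(bigram_logprob(p, w) for p, w in zip(tokens, tokens[1:]))

print(sentence_logprob("the cat sat on the log"))  # higher: its bigrams occur in training
print(sentence_logprob("log the on sat cat the"))  # lower: mostly unseen bigrams
```

Neural language models, including the LLMs described above, replace these count tables with a learned network that predicts the next token, but the interface is the same: assign a probability to a sequence of tokens.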

Source: Wikipedia

Papers

Showing 14001–14050 of 17610 papers (page 281 of 353)

Title | Status | Hype
Implicit Semantic Roles in a Multilingual Setting |  | 0
Implicit Sentiment Analysis Based on Chain of Thought Prompting |  | 0
Implicit spoken language diarization |  | 0
Importance of Self-Attention for Sentiment Analysis |  | 0
Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities |  | 0
Improved Algorithms for Differentially Private Language Model Alignment |  | 0
Improved Alignment of Modalities in Large Vision Language Models |  | 0
Improved Arabic Dialect Classification with Social Media Data |  | 0
Improved Chord Recognition by Combining Duration and Harmonic Language Models |  | 0
Improved Decipherment of Homophonic Ciphers |  | 0
Improved Dependency Parsing using Implicit Word Connections Learned from Unlabeled Data |  | 0
Improved Iterative Correction for Distant Spelling Errors |  | 0
Improved Language Modeling by Decoding the Past |  | 0
Improved Large Language Model Jailbreak Detection via Pretrained Embeddings |  | 0
Improved Long-Form Spoken Language Translation with Large Language Models |  | 0
Improved low-resource Somali speech recognition by semi-supervised acoustic and language model training |  | 0
Improved Multi-Stage Training of Online Attention-based Encoder-Decoder Models |  | 0
Improved Neural Language Model Fusion for Streaming Recurrent Neural Network Transducer |  | 0
Improved Sentence-Level Arabic Dialect Classification |  | 0
Improved Sentence Modeling Techniques for Extractive Speech Summarization (改良語句模型技術於節錄式語音摘要之研究) [In Chinese] |  | 0
Improved Spelling Error Detection and Correction for Arabic |  | 0
Improved Twitter Sentiment Analysis Using Naive Bayes and Custom Language Model |  | 0
Improved Unbiased Watermark for Large Language Models |  | 0
Improved Visual Grounding through Self-Consistent Explanations |  | 0
Improved Word Embeddings with Implicit Structure Information |  | 0
Improve LLM-based Automatic Essay Scoring with Linguistic Features |  | 0
Improvements to Dependency Parsing Using Automatic Simplification of Data |  | 0
Improvements to the Bayesian Topic N-Gram Models |  | 0
Improvements to the Sequence Memoizer |  | 0
Improve Statistical Machine Translation with Context-Sensitive Bilingual Semantic Embedding Model |  | 0
Better Pre-Training by Reducing Representation Confusion |  | 0
Improving Abstraction in Text Summarization |  | 0
Improving accuracy of GPT-3/4 results on biomedical data using a retrieval-augmented language model |  | 0
Improving accuracy of rare words for RNN-Transducer through unigram shallow fusion |  | 0
Improving Agent Interactions in Virtual Environments with Language Models |  | 0
Improving alignment of dialogue agents via targeted human judgements |  | 0
Improving astroBERT using Semantic Textual Similarity |  | 0
Improving Audio Codec-based Zero-Shot Text-to-Speech Synthesis with Multi-Modal Context and Large Language Model |  | 0
Improving Automatic Text Recognition with Language Models in the PyLaia Open-Source Library |  | 0
Improving Autoregressive Image Generation through Coarse-to-Fine Token Prediction |  | 0
Improving Beam Search by Removing Monotonic Constraint for Neural Machine Translation |  | 0
Improving BERT with Hybrid Pooling Network and Drop Mask |  | 0
Improving Black-box Speech Recognition using Semantic Parsing |  | 0
Improving Block-Wise LLM Quantization by 4-bit Block-Wise Optimal Float (BOF4): Analysis and Variations |  | 0
Improving Brain-to-Image Reconstruction via Fine-Grained Text Bridging |  | 0
Improving callsign recognition with air-surveillance data in air-traffic communication |  | 0
Improving Character-Aware Neural Language Model by Warming up Character Encoder under Skip-gram Architecture |  | 0
Improving Chess Commentaries by Combining Language Models with Symbolic Reasoning Engines |  | 0
Improving Classification of Infrequent Cognitive Distortions: Domain-Specific Model vs. Data Augmentation |  | 0
Improving Code-switched ASR with Linguistic Information |  | 0

Benchmark Results

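The tables below report two metrics: perplexity (word level) and bits per character (BPC). Both are monotone transforms of the model's average negative log-likelihood on held-out text, and lower is better for both. The sketch below shows the conversion, assuming per-token natural-log probabilities are already available; the function names and toy numbers are illustrative, not part of any benchmark's tooling.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood), natural-log inputs."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

def bits_per_character(token_logprobs, num_chars):
    """BPC = total negative log-likelihood converted to bits, per character."""
    total_bits = -sum(token_logprobs) / math.log(2)
    return total_bits / num_chars

# Toy example: four tokens, each assigned probability 0.1 by the model.
lps = [math.log(0.1)] * 4
print(perplexity(lps))              # 10.0  (as uncertain as a uniform 10-way choice)
print(bits_per_character(lps, 20))  # ~0.66 bits per character over a 20-character span
```

Because perplexity is computed per word and BPC per character, the two metrics are not directly comparable across tables.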
# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 |  | Unverified
2 | GRU | Validation perplexity | 53.78 |  | Unverified
3 | LSTM | Validation perplexity | 52.73 |  | Unverified
4 | LSTM | Test perplexity | 48.7 |  | Unverified
5 | Temporal CNN | Test perplexity | 45.2 |  | Unverified
6 | TCN | Test perplexity | 45.19 |  | Unverified
7 | GCNN-8 | Test perplexity | 44.9 |  | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 |  | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 |  | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 |  | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 |  | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 |  | Unverified
4 | R-Transformer | Test perplexity | 84.38 |  | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 |  | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 |  | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 |  | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 |  | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 |  | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 |  | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 |  | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 |  | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 |  | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 |  | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 |  | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 |  | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 |  | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 |  | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 |  | Unverified
2 | OPT 125M | Test perplexity | 32.26 |  | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 |  | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 |  | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 |  | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 |  | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 |  | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 |  | Unverified
9 | Transformer 125M | Test perplexity | 10.7 |  | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 |  | Unverified