SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language that assigns a probability to a sequence of words. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as word n-gram language models; a minimal sketch of such an n-gram model appears below.

Source: Wikipedia
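
To make the n-gram baseline concrete, here is a minimal sketch of a word-bigram language model with add-alpha smoothing. The class name, toy training sentences, and vocabulary size are illustrative assumptions, not taken from any particular library.

    # Minimal word-bigram language model sketch (illustrative only).
    from collections import defaultdict

    class BigramLM:
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))  # prev -> cur -> count
            self.totals = defaultdict(int)                       # prev -> total count

        def train(self, sentences):
            # Count adjacent word pairs, padding each sentence with a start token.
            for sentence in sentences:
                tokens = ["<s>"] + sentence.split()
                for prev, cur in zip(tokens, tokens[1:]):
                    self.counts[prev][cur] += 1
                    self.totals[prev] += 1

        def prob(self, prev, cur, vocab_size, alpha=1.0):
            # Add-alpha (Laplace) smoothing so unseen pairs get nonzero probability.
            return (self.counts[prev][cur] + alpha) / (self.totals[prev] + alpha * vocab_size)

    lm = BigramLM()
    lm.train(["the cat sat", "the dog sat"])
    print(lm.prob("the", "cat", vocab_size=5))  # 2/7, higher than an unseen pair

Smoothing matters here: without it, any test sentence containing an unseen word pair would receive probability zero, and hence infinite perplexity.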

Papers

Showing 4151-4200 of 17610 papers

Title | Status | Hype
ChestyBot: Detecting and Disrupting Chinese Communist Party Influence Stratagems | - | 0
CHILLAX - at Arabic Hate Speech 2022: A Hybrid Machine Learning and Transformers based Model to Detect Arabic Offensive and Hate Speech | - | 0
ChimpVLM: Ethogram-Enhanced Chimpanzee Behaviour Recognition | - | 0
Chinese Couplet Generation with Neural Network Structures | - | 0
Chinese Grammatical Error Diagnosis System Based on Hybrid Model | - | 0
Chinese Grammatical Error Diagnosis Using Single Word Embedding | - | 0
Chinese Grammatical Error Diagnosis with Graph Convolution Network and Multi-task Learning | - | 0
Chinese Grammatical Error Diagnosis with Long Short-Term Memory Networks | - | 0
Chinese Long and Short Form Choice Exploiting Neural Network Language Modeling Approaches | - | 0
Chinese Metaphor Recognition Using a Multi-stage Prompting Large Language Model | - | 0
Chinese Preposition Selection for Grammatical Error Diagnosis | - | 0
Chinese Sequence Labeling with Semi-Supervised Boundary-Aware Language Model Pre-training | - | 0
Chinese Song Iambics Generation with Neural Attention-based Model | - | 0
Chinese Spelling Check based on N-gram and String Matching Algorithm | - | 0
Chinese Spelling Checker Based on Statistical Machine Translation | - | 0
Chinese Spelling Checker Based on Statistical Machine Translation (機器翻譯為本的中文拼字改錯系統) | - | 0
Chinese Spelling Check Evaluation at SIGHAN Bake-off 2013 | - | 0
Chinese Spelling Check System Based on N-gram Model | - | 0
Chinese Spelling Error Detection and Correction Based on Language Model, Pronunciation, and Shape | - | 0
Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model | - | 0
Chinese Word Ordering Errors Detection and Correction for Non-Native Chinese Language Learners | - | 0
Chinese Word Segmentation with Heterogeneous Graph Neural Network | - | 0
Chinese WPLC: A Chinese Dataset for Evaluating Pretrained Language Models on Word Prediction Given Long-Range Context | - | 0
CHISPA on the GO: A mobile Chinese-Spanish translation service for travellers in trouble | - | 0
Chitrarth: Bridging Vision and Language for a Billion People | - | 0
Chittron: An Automatic Bangla Image Captioning System | - | 0
Choose the Final Translation from NMT and LLM hypotheses Using MBR Decoding: HW-TSC's Submission to the WMT24 General MT Shared Task | - | 0
Choose Your Programming Copilot: A Comparison of the Program Synthesis Performance of GitHub Copilot and Genetic Programming | - | 0
ChronoFact: Timeline-based Temporal Fact Verification | - | 0
ChronoLLM: A Framework for Customizing Large Language Model for Digital Twins generalization based on PyChrono | - | 0
Chronologically Consistent Large Language Models | - | 0
ChronoSteer: Bridging Large Language Model and Time Series Foundation Model via Synthetic Data | - | 0
Chunk-Distilled Language Modeling | - | 0
Churn Identification in Microblogs using Convolutional Neural Networks with Structured Logical Knowledge | - | 0
ChuXin: 1.6B Technical Report | - | 0
CI-Bench: Benchmarking Contextual Integrity of AI Assistants on Synthetic Data | - | 0
CIEMPIESS: A New Open-Sourced Mexican Spanish Radio Corpus | - | 0
CIF-PT: Bridging Speech and Text Representations for Spoken Language Understanding via Continuous Integrate-and-Fire Pre-Training | - | 0
CigTime: Corrective Instruction Generation Through Inverse Motion Editing | - | 0
CIKMar: A Dual-Encoder Approach to Prompt-Based Reranking in Educational Dialogue Systems | - | 0
CINO: A Chinese Minority Pre-trained Language Model | - | 0
CIRCLE: Multi-Turn Query Clarifications with Reinforcement Learning | - | 0
Circles are like Ellipses, or Ellipses are like Circles? Measuring the Degree of Asymmetry of Static and Contextual Embeddings and the Implications to Representation Learning | - | 0
Circling Back to Recurrent Models of Language | - | 0
Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models | - | 0
CityCraft: A Real Crafter for 3D City Generation | - | 0
CityGPT: Towards Urban IoT Learning, Analysis and Interaction with Multi-Agent System | - | 0
City-LEO: Toward Transparent City Management Using LLM with End-to-End Optimization | - | 0
CityLoc: 6DoF Pose Distributional Localization for Text Descriptions in Large-Scale Scenes with Gaussian Representation | - | 0
Page 84 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
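
Test perplexity, the metric claimed in the table above, is the exponential of the average negative log-likelihood a model assigns to held-out tokens; lower is better. A minimal sketch of the computation, using made-up per-token probabilities:

    import math

    def perplexity(token_probs):
        # Perplexity = exp of the mean negative log-likelihood per token.
        nll = [-math.log(p) for p in token_probs]
        return math.exp(sum(nll) / len(nll))

    # Hypothetical probabilities a model assigned to four test tokens.
    print(perplexity([0.1, 0.2, 0.05, 0.3]))  # ~7.6; lower is better

A perplexity of 37.5, as claimed for GPT-2 Small above, means the model is on average about as uncertain as if it were choosing uniformly among 37.5 tokens at each step.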

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bit per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bit per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bit per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bit per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bit per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bit per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bit per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bit per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bit per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bit per Character (BPC) | 1.22 | - | Unverified
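
Bit per Character (BPC), the metric in this table, is the average negative base-2 log-likelihood per character, and per-character perplexity equals 2 raised to the BPC; again, lower is better. A small sketch with hypothetical character probabilities:

    import math

    def bpc(char_probs):
        # Average negative log2-likelihood per character.
        return -sum(math.log2(p) for p in char_probs) / len(char_probs)

    probs = [0.5, 0.25, 0.5]  # hypothetical per-character probabilities
    b = bpc(probs)            # (1 + 2 + 1) / 3 = 1.33 bits
    print(b, 2 ** b)          # per-character perplexity = 2**BPC, ~2.52 here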

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified