SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language: it assigns probabilities to sequences of words or tokens. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
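
To make the contrast with today's transformer-based LLMs concrete, here is a minimal word-bigram language model in Python. This is an illustrative sketch only: the toy corpus, the add-one smoothing choice, and the function names are assumptions made for demonstration, not taken from any paper or benchmark on this page.

```python
from collections import Counter
import math

# Toy corpus; a real n-gram model would be trained on a large text collection.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count unigrams and bigrams, with sentence-boundary markers.
unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

vocab_size = len(unigrams)

def bigram_prob(prev, word):
    # Add-one (Laplace) smoothing so unseen bigrams get nonzero probability.
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

def sentence_log_prob(sentence):
    # log P(sentence) = sum of log P(w_i | w_{i-1}) over the token sequence.
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    return sum(math.log(bigram_prob(p, w)) for p, w in zip(tokens, tokens[1:]))

print(sentence_log_prob("the cat sat on the rug"))  # plausible: higher log-probability
print(sentence_log_prob("rug the on sat cat the"))  # implausible: lower log-probability
```

Per-token log-probabilities like these are exactly what the perplexity and bits-per-character metrics in the benchmark tables below summarize over an evaluation set.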

Papers

Showing 10251–10300 of 17610 papers

Title | Status | Hype
Self-Influence Guided Data Reweighting for Language Model Pre-training | - | 0
Predicting Question-Answering Performance of Large Language Models through Semantic Consistency | - | 0
Recommendations by Concise User Profiles from Review Text | - | 0
FlashDecoding++: Faster Large Language Model Inference on GPUs | - | 0
Expressive TTS Driven by Natural Language Prompts Using Few Human Annotations | - | 0
Continual Learning Under Language Shift | - | 0
Comparing Optimization Targets for Contrast-Consistent Search | Code | 0
Efficient Human-AI Coordination via Preparatory Language-based Convention | - | 0
Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation | - | 0
Improving Interpersonal Communication by Simulating Audiences with Language Models | Code | 0
CLIP-AD: A Language-Guided Staged Dual-Path Model for Zero-shot Anomaly Detection | - | 0
Form follows Function: Text-to-Text Conditional Graph Generation based on Functional Requirements | - | 0
An Improved Transformer-based Model for Detecting Phishing, Spam, and Ham: A Large Language Model Approach | - | 0
Unleashing the Creative Mind: Language Model As Hierarchical Policy For Improved Exploration on Challenging Problem Solving | Code | 0
ZEETAD: Adapting Pretrained Vision-Language Model for Zero-Shot End-to-End Temporal Action Detection | - | 0
Mukh-Oboyob: Stable Diffusion and BanglaBERT enhanced Bangla Text-to-Face Synthesis | Code | 0
Modeling subjectivity (by Mimicking Annotator Annotation) in toxic comment identification across diverse communities | - | 0
Text Rendering Strategies for Pixel Language Models | - | 0
Language Model Training Paradigms for Clinical Feature Embeddings | Code | 0
Longer Fixations, More Computation: Gaze-Guided Recurrent Neural Networks | - | 0
Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision | - | 0
Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building | Code | 0
Filter bubbles and affective polarization in user-personalized large language model outputs | - | 0
Enhancing the Spatial Awareness Capability of Multi-Modal Large Language Model | - | 0
FA Team at the NTCIR-17 UFO Task | - | 0
BERTwich: Extending BERT's Capabilities to Model Dialectal and Noisy Text | - | 0
A Multi-Modal Foundation Model to Assist People with Blindness and Low Vision in Environmental Interaction | - | 0
The Impact of Depth on Compositional Generalization in Transformer Language Models | - | 0
Remember what you did so you know what to do next | - | 0
ROME: Evaluating Pre-trained Vision-Language Models on Reasoning beyond Visual Common Sense | Code | 0
Musical Form Generation | - | 0
MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks | - | 0
Leveraging Language Models to Detect Greenwashing | - | 0
'Person' == Light-skinned, Western Man, and Sexualization of Women of Color: Stereotypes in Stable Diffusion | - | 0
Integrating Summarization and Retrieval for Enhanced Personalization via Large Language Models | - | 0
EHRTutor: Enhancing Patient Understanding of Discharge Instructions | - | 0
Improving Input-label Mapping with Demonstration Replay for In-context Learning | - | 0
Emotional Theory of Mind: Bridging Fast Visual Processing with Slow Linguistic Reasoning | - | 0
BTRec: BERT-Based Trajectory Recommendation for Personalized Tours | Code | 0
Generative retrieval-augmented ontologic graph and multi-agent strategies for interpretive large language model-based materials design | - | 0
Adapter Pruning using Tropical Characterization | - | 0
Interpretable-by-Design Text Understanding with Iteratively Generated Concept Bottleneck | Code | 0
Generating Medical Prescriptions with Conditional Transformer | Code | 0
Integrating Pre-trained Language Model into Neural Machine Translation | - | 0
BioInstruct: Instruction Tuning of Large Language Models for Biomedical Natural Language Processing | - | 0
A Unique Training Strategy to Enhance Language Models Capabilities for Health Mention Detection from Social Media Content | - | 0
Counterfactually Probing Language Identity in Multilingual Models | Code | 0
Unified Representation for Non-compositional and Compositional Expressions | Code | 0
TeacherLM: Teaching to Fish Rather Than Giving the Fish, Language Modeling Likewise | - | 0
PACuna: Automated Fine-Tuning of Language Models for Particle Accelerators | Code | 0
Page 206 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
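
For reference, the perplexity reported in these leaderboards is the exponential of the average negative log-likelihood a model assigns to the evaluation tokens, so lower is better. Below is a minimal sketch of the computation; the probability values are illustrative, not any listed model's actual evaluation code.

```python
import math

def perplexity(token_log_probs):
    # Perplexity = exp(mean negative log-likelihood), with natural-log inputs.
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# A model assigning probability 0.1 to each of four evaluation tokens is
# "as surprised" as if it were choosing uniformly among 10 options:
print(perplexity([math.log(0.1)] * 4))  # -> 10.0
```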

# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per Character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per Character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per Character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per Character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per Character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per Character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per Character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per Character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per Character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per Character (BPC) | 1.22 | - | Unverified
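
The table above reports bits per character (BPC) rather than perplexity: the average number of base-2 bits a model needs to encode each character, so again lower is better. BPC and per-character perplexity are two views of the same cross-entropy, as the sketch below shows; the function names are assumptions for illustration.

```python
def bits_per_character(char_log2_probs):
    # Mean negative base-2 log-probability per character; lower is better.
    return -sum(char_log2_probs) / len(char_log2_probs)

def char_perplexity_from_bpc(bpc):
    # Per-character perplexity is 2 raised to the BPC.
    return 2.0 ** bpc

# The best entry above (1.22 BPC) corresponds to roughly 2.33 effective
# choices per character:
print(char_perplexity_from_bpc(1.22))  # ~2.33
```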

# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified