SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (producing human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly transformer-based models trained on very large datasets (frequently including text scraped from the public internet). They have superseded recurrent neural network-based models, which had in turn superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
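To make the n-gram models mentioned above concrete, here is a minimal word-bigram model in Python. It simply counts adjacent word pairs and normalizes the counts into conditional probabilities; the corpus and boundary tokens are illustrative, a sketch of the technique rather than any particular system.

```python
# Minimal word-bigram language model: the "purely statistical" n-gram
# approach that preceded RNN- and transformer-based models.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Estimate P(next word | previous word) by counting bigrams."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]  # sentence boundary markers
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    probs = {}
    for prev, nexts in counts.items():
        total = sum(nexts.values())
        probs[prev] = {word: c / total for word, c in nexts.items()}
    return probs

# Illustrative two-sentence corpus.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
model = train_bigram(corpus)
print(model["the"])  # {'cat': 0.5, 'dog': 0.5}
```

Real n-gram systems add smoothing (e.g. Kneser-Ney) so that unseen word pairs do not get zero probability, but the counting-and-normalizing core is the same.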

Papers

Showing 10151–10200 of 17610 papers

Title | Status | Hype
ChatGPT-4 Outperforms Experts and Crowd Workers in Annotating Political Twitter Messages with Zero-Shot Learning | — | 0
Automatic Semantic Augmentation of Language Model Prompts (for Code Summarization) | — | 0
[CLS] Token is All You Need for Zero-Shot Semantic Segmentation | — | 0
Graph2topic: an opensource topic modeling framework based on sentence embedding and community detection | — | 0
A-CAP: Anticipation Captioning with Commonsense Knowledge | — | 0
Using Large Language Models for (De-)Formalization and Natural Argumentation Exercises for Beginner's Students | — | 0
Semantic Feature Verification in FLAN-T5 | — | 0
Measuring Gender Bias in West Slavic Language Models | — | 0
Boosted Prompt Ensembles for Large Language Models | Code | 1
A Closer Look at the Explainability of Contrastive Language-Image Pre-training | Code | 1
Galactic ChitChat: Using Large Language Models to Converse with Astronomy Literature | — | 0
Training Large Language Models Efficiently with Sparsity and Dataflow | — | 0
RRHF: Rank Responses to Align Language Models with Human Feedback without tears | Code | 2
r-softmax: Generalized Softmax with Controllable Sparsity Rate | Code | 0
Prompt Learning for News Recommendation | Code | 1
Teaching Large Language Models to Self-Debug | Code | 0
Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT | Code | 2
Automated Reading Passage Generation with OpenAI's Large Language Model | — | 0
A Cheaper and Better Diffusion Language Model with Soft-Masked Noise | Code | 1
DetCLIPv2: Scalable Open-Vocabulary Object Detection Pre-training via Word-Region Alignment | — | 0
Inference with Reference: Lossless Acceleration of Large Language Models | Code | 1
Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Dataset Augmented by ChatGPT | Code | 2
Interaction-Aware Prompting for Zero-Shot Spatio-Temporal Action Detection | Code | 1
SELFormer: Molecular Representation Learning via SELFIES Language Models | Code | 1
Similarity-Aware Multimodal Prompt Learning for Fake News Detection | — | 0
CrowdCLIP: Unsupervised Crowd Counting via Vision-Language Model | Code | 1
GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation | — | 0
Decoder-Only or Encoder-Decoder? Interpreting Language Model as a Regularized Encoder-Decoder | — | 0
Why think step by step? Reasoning emerges from the locality of experience | Code | 1
TemPL: A Novel Deep Learning Model for Zero-Shot Prediction of Protein Stability and Activity Based on Temperature-Guided Language Modeling | — | 0
Language-aware Multiple Datasets Detection Pretraining for DETRs | — | 0
Generative Agents: Interactive Simulacra of Human Behavior | Code | 6
From Retrieval to Generation: Efficient and Effective Entity Set Expansion | — | 0
Making AI Less "Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models | Code | 1
Large Language Models as Master Key: Unlocking the Secrets of Materials Science with GPT | — | 0
Revolutionizing Single Cell Analysis: The Power of Large Language Models for Cell Type Annotation | — | 0
Towards Self-Explainability of Deep Neural Networks with Heatmap Captioning and Large-Language Models | — | 0
What's in a Name? Beyond Class Indices for Image Recognition | — | 0
Bengali Fake Review Detection using Semi-supervised Generative Adversarial Networks | — | 0
ChartReader: A Unified Framework for Chart Derendering and Comprehension without Heuristic Rules | Code | 1
Efficient OCR for Building a Diverse Digital History | Code | 1
Scalable and Accurate Self-supervised Multimodal Representation Learning without Aligned Video and Text Data | — | 0
Synthesize High-dimensional Longitudinal Electronic Health Records via Hierarchical Autoregressive Language Model | Code | 1
Locate Then Generate: Bridging Vision and Language with Bounding Box for Scene-Text VQA | — | 0
Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks | Code | 1
Is ChatGPT a Highly Fluent Grammatical Error Correction System? A Comprehensive Evaluation | — | 0
LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models | Code | 3
Dialogue-Contextualized Re-ranking for Medical History-Taking | — | 0
Using Language Models For Knowledge Acquisition in Natural Language Reasoning Problems | — | 0
Unsupervised Improvement of Factual Knowledge in Language Models | Code | 0
Page 204 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | — | Unverified
2 | GRU | Validation perplexity | 53.78 | — | Unverified
3 | LSTM | Validation perplexity | 52.73 | — | Unverified
4 | LSTM | Test perplexity | 48.7 | — | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | — | Unverified
6 | TCN | Test perplexity | 45.19 | — | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | — | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | — | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | — | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | — | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | — | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | — | Unverified
4 | R-Transformer | Test perplexity | 84.38 | — | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | — | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | — | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | — | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | — | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | — | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | — | Unverified
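For reference, the perplexity reported throughout these tables is the exponential of the average per-token negative log-likelihood on the held-out set, so lower is better. A minimal sketch of the computation, with illustrative inputs:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model assigning probability 0.25 to every token is exactly as
# uncertain as a uniform 4-way choice, so its perplexity is 4.
print(perplexity([math.log(0.25)] * 10))  # 4.0
```

Intuitively, a perplexity of 66 means the model is, on average, as uncertain about each next token as if it were choosing uniformly among 66 alternatives.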
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | — | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | — | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | — | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | — | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | — | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | — | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | — | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | — | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | — | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | — | Unverified
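Bits per character is the character-level analogue of perplexity: the mean negative log2-likelihood per character, i.e. how many bits the model needs on average to encode each character of the test text. A sketch with illustrative inputs:

```python
import math

def bits_per_character(char_probs):
    """BPC = mean negative log2-likelihood per character."""
    return -sum(math.log2(p) for p in char_probs) / len(char_probs)

# Assigning probability 0.5 to each character costs exactly one bit per
# character, so the 1.22-1.67 range above is a fairly tight spread.
print(bits_per_character([0.5] * 8))  # 1.0
```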
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | — | Unverified
2 | OPT 125M | Test perplexity | 32.26 | — | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | — | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | — | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | — | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | — | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | — | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | — | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | — | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | — | Unverified