SOTAVerified

Language Modelling

A language model is a probabilistic model of natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.

Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on large datasets (frequently text scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models such as word n-gram language models.

Source: Wikipedia
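
The word n-gram models mentioned above estimate the probability of each word from counts of short word sequences. As a concrete illustration, here is a minimal word-bigram model with maximum-likelihood estimates (a toy sketch only; real systems add smoothing for unseen bigrams and use longer contexts):

    # Toy word-bigram language model (illustrative sketch only).
    # P(w_i | w_{i-1}) is estimated by maximum likelihood from raw counts.
    from collections import Counter

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    bigram_counts = Counter(zip(corpus, corpus[1:]))
    unigram_counts = Counter(corpus)

    def prob(prev, word):
        # MLE estimate of P(word | prev)
        return bigram_counts[(prev, word)] / unigram_counts[prev]

    print(prob("the", "cat"))  # 0.25: "the" appears 4 times, "the cat" once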

Papers

Showing 6651–6700 of 17610 papers

Title | Status | Hype
On Languaging a Simulation Engine | - | 0
MoZIP: A Multilingual Benchmark to Evaluate Large Language Models in Intellectual Property | Code | 1
LLM Inference Unveiled: Survey and Roofline Model Insights | Code | 4
Defending LLMs against Jailbreaking Attacks via Backtranslation | Code | 2
Bootstrapping Cognitive Agents with a Large Language Model | - | 0
GraphWiz: An Instruction-Following Language Model for Graph Problems | Code | 2
Debug like a Human: A Large Language Model Debugger via Verifying Runtime Execution Step-by-step | Code | 4
ASETF: A Novel Method for Jailbreak Attack on LLMs through Translate Suffix Embeddings | - | 0
Efficient Temporal Extrapolation of Multimodal Large Language Models with Temporal Grounding Bridge | Code | 1
PIDformer: Transformer Meets Control Theory | - | 0
Training a Bilingual Language Model by Mapping Tokens onto a Shared Character Space | - | 0
HiGPT: Heterogeneous Graph Language Model | Code | 2
NeSy is alive and well: A LLM-driven symbolic approach for better code comment data generation and classification | Code | 0
Building Flexible Machine Learning Models for Scientific Computing at Scale | - | 0
HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination Tendency of LLMs | Code | 0
Text Understanding and Generation Using Transformer Models for Intelligent E-commerce Recommendations | - | 0
Say More with Less: Understanding Prompt Learning Behaviors through Gist Compression | Code | 1
Don't Forget Your Reward Values: Language Model Alignment via Value-based Calibration | - | 0
Enhancing Cloud-Based Large Language Model Processing with Elasticsearch and Transformer Models | - | 0
Enhanced User Interaction in Operating Systems through Machine Learning Language Models | - | 0
ByteComposer: a Human-like Melody Composition Method based on Language Model Agent | - | 0
Foot In The Door: Understanding Large Language Model Jailbreaking via Cognitive Psychology | - | 0
FGBERT: Function-Driven Pre-trained Gene Language Model for Metagenomics | - | 0
CLIPose: Category-Level Object Pose Estimation with Pre-trained Vision-Language Knowledge | - | 0
PRP: Propagating Universal Perturbations to Attack Large Language Model Guard-Rails | - | 0
Exploring Failure Cases in Multimodal Reasoning About Physical Dynamics | - | 0
HD-Eval: Aligning Large Language Model Evaluators Through Hierarchical Criteria Decomposition | - | 0
Empowering Large Language Model Agents through Action Learning | Code | 1
MATHWELL: Generating Educational Math Word Problems Using Teacher Annotations | Code | 1
TV-SAM: Increasing Zero-Shot Segmentation Performance on Multimodal Medical Images Using GPT-4 Generated Descriptive Prompts Without Human Annotation | Code | 1
NaVid: Video-based VLM Plans the Next Step for Vision-and-Language Navigation | - | 0
Self-Retrieval: End-to-End Information Retrieval with One Large Language Model | Code | 1
Fine-Grained Self-Endorsement Improves Factuality and Reasoning | - | 0
The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG) | Code | 2
MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs | Code | 5
Hands-Free VR | - | 0
AttributionBench: How Hard is Automatic Attribution Evaluation? | Code | 1
PREDILECT: Preferences Delineated with Zero-Shot Language-based Reasoning in Reinforcement Learning | - | 0
Substrate Prediction for RiPP Biosynthetic Enzymes via Masked Language Modeling and Transfer Learning | Code | 0
ArabianGPT: Native Arabic GPT-based Large Language Model | - | 0
Repetition Improves Language Model Embeddings | Code | 5
Item-side Fairness of Large Language Model-based Recommendation System | Code | 0
SIMPLOT: Enhancing Chart Question Answering by Distilling Essentials | Code | 1
Tokenization counts: the impact of tokenization on arithmetic in frontier LLMs | Code | 0
Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment | Code | 1
Optimizing Language Models for Human Preferences is a Causal Inference Problem | - | 0
Small Language Models as Effective Guides for Large Language Models in Chinese Relation Extraction | - | 0
LLMBind: A Unified Modality-Task Integration Framework | Code | 1
Learning to Reduce: Optimal Representations of Structured Data in Prompting Large Language Models | - | 0
INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models | Code | 1
Page 134 of 353

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Decay RNN | Validation perplexity | 76.67 | - | Unverified
2 | GRU | Validation perplexity | 53.78 | - | Unverified
3 | LSTM | Validation perplexity | 52.73 | - | Unverified
4 | LSTM | Test perplexity | 48.7 | - | Unverified
5 | Temporal CNN | Test perplexity | 45.2 | - | Unverified
6 | TCN | Test perplexity | 45.19 | - | Unverified
7 | GCNN-8 | Test perplexity | 44.9 | - | Unverified
8 | Neural cache model (size = 100) | Test perplexity | 44.8 | - | Unverified
9 | Neural cache model (size = 2,000) | Test perplexity | 40.8 | - | Unverified
10 | GPT-2 Small | Test perplexity | 37.5 | - | Unverified
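
The perplexity figures in these tables are the exponentiated average negative log-likelihood a model assigns to a held-out validation or test set; lower is better. A minimal sketch of the computation:

    import math

    # Perplexity = exp of the average negative log-likelihood per token,
    # computed over a held-out validation or test set.
    def perplexity(token_log_probs):
        # token_log_probs: natural-log probabilities the model assigned
        # to each token in the evaluation set
        avg_nll = -sum(token_log_probs) / len(token_log_probs)
        return math.exp(avg_nll)

    # Toy check: assigning probability 0.1 to every token gives perplexity 10,
    # i.e. the model is as uncertain as a uniform choice among 10 options.
    print(perplexity([math.log(0.1)] * 5))  # 10.0
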
# | Model | Metric | Claimed | Verified | Status
1 | TCN | Test perplexity | 108.47 | - | Unverified
2 | Seq-U-Net | Test perplexity | 107.95 | - | Unverified
3 | GRU (Bai et al., 2018) | Test perplexity | 92.48 | - | Unverified
4 | R-Transformer | Test perplexity | 84.38 | - | Unverified
5 | Zaremba et al. (2014) - LSTM (medium) | Test perplexity | 82.7 | - | Unverified
6 | Gal & Ghahramani (2016) - Variational LSTM (medium) | Test perplexity | 79.7 | - | Unverified
7 | LSTM (Bai et al., 2018) | Test perplexity | 78.93 | - | Unverified
8 | Zaremba et al. (2014) - LSTM (large) | Test perplexity | 78.4 | - | Unverified
9 | Gal & Ghahramani (2016) - Variational LSTM (large) | Test perplexity | 75.2 | - | Unverified
10 | Inan et al. (2016) - Variational RHN | Test perplexity | 66 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSTM (7 layers) | Bits per character (BPC) | 1.67 | - | Unverified
2 | Hypernetworks | Bits per character (BPC) | 1.34 | - | Unverified
3 | SHA-LSTM (4 layers, h=1024, no attention head) | Bits per character (BPC) | 1.33 | - | Unverified
4 | LN HM-LSTM | Bits per character (BPC) | 1.32 | - | Unverified
5 | ByteNet | Bits per character (BPC) | 1.31 | - | Unverified
6 | Recurrent Highway Networks | Bits per character (BPC) | 1.27 | - | Unverified
7 | Large FS-LSTM-4 | Bits per character (BPC) | 1.25 | - | Unverified
8 | Large mLSTM | Bits per character (BPC) | 1.24 | - | Unverified
9 | AWD-LSTM (3 layers) | Bits per character (BPC) | 1.23 | - | Unverified
10 | Cluster-Former (#C=512) | Bits per character (BPC) | 1.22 | - | Unverified
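
Bits per character (BPC) is the character-level analogue of perplexity: the average base-2 cross-entropy per character, so lower is again better, and a BPC of b corresponds to a per-character perplexity of 2^b. A short illustration:

    import math

    # BPC = -(1/N) * sum over characters of log2 p(c_i | context),
    # i.e. the average base-2 cross-entropy per character.
    def bits_per_character(char_log2_probs):
        return -sum(char_log2_probs) / len(char_log2_probs)

    # Toy check: probability 0.5 per character is exactly 1 bit per character.
    print(bits_per_character([math.log2(0.5)] * 4))  # 1.0

    # The table's best entry, 1.22 BPC, corresponds to a per-character
    # perplexity of 2 ** 1.22, roughly 2.33.
    print(2 ** 1.22)
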
# | Model | Metric | Claimed | Verified | Status
1 | Smaller Transformer 126M (pre-trained) | Test perplexity | 33 | - | Unverified
2 | OPT 125M | Test perplexity | 32.26 | - | Unverified
3 | Larger Transformer 771M (pre-trained) | Test perplexity | 28.1 | - | Unverified
4 | OPT 1.3B | Test perplexity | 19.55 | - | Unverified
5 | GPT-Neo 125M | Test perplexity | 17.83 | - | Unverified
6 | OPT 2.7B | Test perplexity | 17.81 | - | Unverified
7 | Smaller Transformer 126M (fine-tuned) | Test perplexity | 12 | - | Unverified
8 | GPT-Neo 1.3B | Test perplexity | 11.46 | - | Unverified
9 | Transformer 125M | Test perplexity | 10.7 | - | Unverified
10 | GPT-Neo 2.7B | Test perplexity | 10.44 | - | Unverified
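
For publicly released models such as GPT-Neo and OPT, a claimed test perplexity can be approximated with off-the-shelf tooling. The sketch below assumes the Hugging Face transformers library and an illustrative checkpoint name; leaderboard numbers also depend on the exact dataset, tokenization, and context-window handling, so a simple run like this will not reproduce the table values exactly:

    # Sketch: estimating the perplexity of a pretrained causal LM on a text.
    # Assumes the Hugging Face transformers library; the checkpoint name
    # below is illustrative and may differ from the leaderboard's exact model.
    import math

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "EleutherAI/gpt-neo-125M"  # assumption: one of the table's model families
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    model.eval()

    text = "Language models assign probabilities to sequences of words."
    enc = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        # With labels set, the model returns the mean token-level
        # cross-entropy loss over the sequence.
        out = model(**enc, labels=enc["input_ids"])

    print(math.exp(out.loss.item()))  # perplexity of this one snippet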