SOTAVerified

Language Model Evaluation

The task of using large language models (LLMs) as evaluators of other large language models and vision-language models.
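To make the task concrete, below is a minimal sketch of the LLM-as-judge pattern that many of the papers listed here study. It is an illustration under stated assumptions, not the method of any particular paper: `call_judge` is a hypothetical placeholder for whatever chat-completion API you use, and the prompt wording and the 1-10 scale are assumptions chosen for the example.

```python
import re

# Prompt template for the judge model. The wording and the 1-10 scale are
# illustrative assumptions, not taken from any specific paper below.
JUDGE_PROMPT = """You are an impartial evaluator. Rate the candidate response
to the question below on a 1-10 scale for helpfulness and accuracy.
Reply with the score on the first line in the form "Score: <n>",
followed by a brief justification.

[Question]
{question}

[Candidate response]
{response}"""


def call_judge(prompt: str) -> str:
    """Hypothetical placeholder: wire this to your provider's
    chat-completion API and return the judge model's reply text."""
    raise NotImplementedError


def judge_score(question: str, response: str) -> int | None:
    """Ask the judge model for a rating and parse it.

    Returns None when the reply does not contain a parseable score."""
    reply = call_judge(JUDGE_PROMPT.format(question=question, response=response))
    match = re.search(r"Score:\s*(\d+)", reply)
    return int(match.group(1)) if match else None
```

Asking the judge for a fixed "Score: <n>" first line and parsing it with a regex is one common way to keep free-form judge output machine-readable; several papers below examine how fragile such setups are to prompt templates and judge bias.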

Papers

Showing 1–50 of 69 papers

Title | Status | Hype
Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code | Code | 4
Evalverse: Unified and Accessible Library for Large Language Model Evaluation | Code | 3
FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets | Code | 2
C^2LEVA: Toward Comprehensive and Contamination-Free Language Model Evaluation | Code | 2
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation | Code | 2
BigBIO: A Framework for Data-Centric Biomedical Natural Language Processing | Code | 2
Catwalk: A Unified Language Model Evaluation Framework for Many Datasets | Code | 1
ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic | Code | 1
Salmon: A Suite for Acoustic Language Model Evaluation | Code | 1
C-STS: Conditional Semantic Textual Similarity | Code | 1
DART-Eval: A Comprehensive DNA Language Model Evaluation Benchmark on Regulatory DNA | Code | 1
Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation | Code | 1
M-ABSA: A Multilingual Dataset for Aspect-Based Sentiment Analysis | Code | 1
MR-GSM8K: A Meta-Reasoning Benchmark for Large Language Model Evaluation | Code | 1
Estimating Contamination via Perplexity: Quantifying Memorisation in Language Model Evaluation | Code | 1
Role-Playing Evaluation for Large Language Models | Code | 1
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Code | 1
LatestEval: Addressing Data Contamination in Language Model Evaluation through Dynamic and Time-Sensitive Test Construction | Code | 1
Template Matters: Understanding the Role of Instruction Templates in Multimodal Language Model Evaluation and Training | Code | 1
ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning | Code | 1
Mitigating the Bias of Large Language Model Evaluation | Code | 0
FABLE: A Novel Data-Flow Analysis Benchmark on Procedural Text for Large Language Model Evaluation | Code | 0
Inference-Time Decontamination: Reusing Leaked Benchmarks for Large Language Model Evaluation | Code | 0
Towards Personalized Evaluation of Large Language Models with An Anonymous Crowd-Sourcing Platform | Code | 0
Enterprise Benchmarks for Large Language Model Evaluation | Code | 0
Fennec: Fine-grained Language Model Evaluation and Correction Extended through Branching and Bridging | Code | 0
Paraphrase and Solve: Exploring and Exploiting the Impact of Surface Form on Mathematical Reasoning in Large Language Models | Code | 0
Mind the Gap: Assessing Temporal Generalization in Neural Language Models | Code | 0
Environmental large language model Evaluation (ELLE) dataset: A Benchmark for Evaluating Generative AI applications in Eco-environment Domain | Code | 0
PrOnto: Language Model Evaluations for 859 Languages | Code | 0
Large Language Model Evaluation via Matrix Nuclear-Norm | Code | 0
Pseudointelligence: A Unifying Framework for Language Model Evaluation | - | 0
R-Bench: Graduate-level Multi-disciplinary Benchmarks for LLM & MLLM Complex Reasoning Evaluation | - | 0
Rethinking Generative Large Language Model Evaluation for Semantic Comprehension | - | 0
Setting Standards in Turkish NLP: TR-MMLU for Large Language Model Evaluation | - | 0
Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation | - | 0
UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation | - | 0
ViDAS: Vision-based Danger Assessment and Scoring | - | 0
KMMLU: Measuring Massive Multitask Language Understanding in Korean | - | 0
Advancing Chinese biomedical text mining with community challenges | - | 0
BehaviorBox: Automated Discovery of Fine-Grained Performance Differences Between Language Models | - | 0
Benchmarking Harmonized Tariff Schedule Classification Models | - | 0
Beyond Metrics: A Critical Analysis of the Variability in Large Language Model Evaluation Frameworks | - | 0
BPoMP: The Benchmark of Poetic Minimal Pairs – Limericks, Rhyme, and Narrative Coherence | - | 0
Branch-Solve-Merge Improves Large Language Model Evaluation and Generation | - | 0
CLiMP: A Benchmark for Chinese Language Model Evaluation | - | 0
CoCo-Bench: A Comprehensive Code Benchmark For Multi-task Large Language Model Evaluation | - | 0
Confidence in Large Language Model Evaluation: A Bayesian Approach to Limited-Sample Challenges | - | 0
Contrastive Entropy: A new evaluation metric for unnormalized language models | - | 0
Controlling for Stereotypes in Multimodal Language Model Evaluation | - | 0

Leaderboard

No leaderboard results yet.