SOTAVerified

Language Model Evaluation

The task of using LLMs as evaluators of large language models and vision-language models.

Papers

Showing 26–50 of 69 papers

Each entry lists the paper title, a [Code] badge where code is available, and its hype score.

- Confidence in Large Language Model Evaluation: A Bayesian Approach to Limited-Sample Challenges (Hype: 0)
- CoCo-Bench: A Comprehensive Code Benchmark For Multi-task Large Language Model Evaluation (Hype: 0)
- UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation (Hype: 0)
- MMLU-ProX: A Multilingual Benchmark for Advanced Large Language Model Evaluation (Hype: 0)
- Predicting Liquidity-Aware Bond Yields using Causal GANs and Deep Reinforcement Learning with LLM Evaluation (Hype: 0)
- Environmental large language model Evaluation (ELLE) dataset: A Benchmark for Evaluating Generative AI applications in Eco-environment Domain [Code] (Hype: 0)
- Setting Standards in Turkish NLP: TR-MMLU for Large Language Model Evaluation (Hype: 0)
- LMUnit: Fine-grained Evaluation with Natural Language Unit Tests (Hype: 0)
- Benchmarking Harmonized Tariff Schedule Classification Models (Hype: 0)
- Large Language Model Evaluation via Matrix Nuclear-Norm [Code] (Hype: 0)
- Enterprise Benchmarks for Large Language Model Evaluation [Code] (Hype: 0)
- ViDAS: Vision-based Danger Assessment and Scoring (Hype: 0)
- Mitigating the Bias of Large Language Model Evaluation [Code] (Hype: 0)
- Beyond Metrics: A Critical Analysis of the Variability in Large Language Model Evaluation Frameworks (Hype: 0)
- On Speeding Up Language Model Evaluation (Hype: 0)
- Inference-Time Decontamination: Reusing Leaked Benchmarks for Large Language Model Evaluation [Code] (Hype: 0)
- Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation (Hype: 0)
- DnA-Eval: Enhancing Large Language Model Evaluation through Decomposition and Aggregation (Hype: 0)
- iREPO: implicit Reward Pairwise Difference based Empirical Preference Optimization (Hype: 0)
- Lessons from the Trenches on Reproducible Evaluation of Language Models (Hype: 0)
- Fennec: Fine-grained Language Model Evaluation and Correction Extended through Branching and Bridging [Code] (Hype: 0)
- Generalization Measures for Zero-Shot Cross-Lingual Transfer (Hype: 0)
- Paraphrase and Solve: Exploring and Exploiting the Impact of Surface Form on Mathematical Reasoning in Large Language Models [Code] (Hype: 0)
- Towards Personalized Evaluation of Large Language Models with An Anonymous Crowd-Sourcing Platform [Code] (Hype: 0)
- Rethinking Generative Large Language Model Evaluation for Semantic Comprehension (Hype: 0)
Page 2 of 3

No leaderboard results yet.