SOTAVerified

Benchmarking

Papers

Showing 2176–2200 of 5548 papers

Title | Status | Hype
ConvCodeWorld: Benchmarking Conversational Code Generation in Reproducible Feedback Environments |  | 0
Machine-learning for photoplethysmography analysis: Benchmarking feature, image, and signal-based approaches | Code | 0
MMSciBench: Benchmarking Language Models on Multimodal Scientific Problems |  | 0
LimeSoDa: A Dataset Collection for Benchmarking of Machine Learning Regressors in Digital Soil Mapping | Code | 0
Is Your Paper Being Reviewed by an LLM? A New Benchmark Dataset and Approach for Detecting AI Text in Peer Review |  | 0
Improved YOLOv12 with LLM-Generated Synthetic Data for Enhanced Apple Detection and Benchmarking Against YOLOv11 and YOLOv10 |  | 0
Modelling Regional Solar Photovoltaic Capacity in Great Britain |  | 0
Agentic Mixture-of-Workflows for Multi-Modal Chemical Search |  | 0
MEBench: Benchmarking Large Language Models for Cross-Document Multi-Entity Question Answering |  | 0
MathTutorBench: A Benchmark for Measuring Open-ended Pedagogical Capabilities of LLM Tutors |  | 0
Isolating Language-Coding from Problem-Solving: Benchmarking LLMs with PseudoEval |  | 0
Safe Multi-Agent Navigation guided by Goal-Conditioned Safe Reinforcement Learning | Code | 0
Science Across Languages: Assessing LLM Multilingual Translation of Scientific Papers |  | 0
CayleyPy RL: Pathfinding and Reinforcement Learning on Cayley Graphs |  | 0
A Real-time Spatio-Temporal Trajectory Planner for Autonomous Vehicles with Semantic Graph Optimization |  | 0
OpenFly: A Comprehensive Platform for Aerial Vision-Language Navigation |  | 0
MULTITAT: Benchmarking Multilingual Table-and-Text Question Answering | Code | 0
SynthRAD2025 Grand Challenge dataset: generating synthetic CTs for radiotherapy |  | 0
Enhancing Image Matting in Real-World Scenes with Mask-Guided Iterative Refinement |  | 0
Benchmarking Temporal Reasoning and Alignment Across Chinese Dynasties | Code | 0
Overconfident Oracles: Limitations of In Silico Sequence Design Benchmarking |  | 0
On Neural Inertial Classification Networks for Pedestrian Activity Recognition |  | 0
An Analyst-Inspector Framework for Evaluating Reproducibility of LLMs in Data Science | Code | 0
VidLBEval: Benchmarking and Mitigating Language Bias in Video-Involved LVLMs |  | 0
VisFactor: Benchmarking Fundamental Visual Cognition in Multimodal Large Language Models | Code | 0
Page 88 of 222

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 Turbo | ACC | 0.56 |  | Unverified