SOTAVerified

Natural Language Visual Grounding

Papers

Showing 11–20 of 32 papers

| Title | Status | Hype |
|---|---|---|
| SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents | Code | 3 |
| CogAgent: A Visual Language Model for GUI Agents | Code | 5 |
| MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning | Code | 7 |
| Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | Code | 5 |
| Localizing Moments in Long Video Via Multimodal Guidance | Code | 1 |
| Visual Writing Prompts: Character-Grounded Story Generation with Curated Image Sequences | | 0 |
| Belief Revision based Caption Re-ranker with Visual Semantic Information | Code | 1 |
| TubeDETR: Spatio-Temporal Video Grounding with Transformers | Code | 1 |
| CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks | Code | 1 |
| Panoptic Narrative Grounding | Code | 1 |
Page 2 of 4

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | UGround-V1-7B | Accuracy (%) | 86.34 | | Unverified |
| 2 | Aguvis-7B | Accuracy (%) | 83 | | Unverified |
| 3 | OS-Atlas-Base-7B | Accuracy (%) | 82.47 | | Unverified |
| 4 | Aria-UI | Accuracy (%) | 81.1 | | Unverified |
| 5 | Aguvis-G-7B | Accuracy (%) | 81 | | Unverified |
| 6 | UGround-V1-2B | Accuracy (%) | 77.67 | | Unverified |
| 7 | ShowUI | Accuracy (%) | 75.1 | | Unverified |
| 8 | ShowUI-G | Accuracy (%) | 75 | | Unverified |
| 9 | UGround | Accuracy (%) | 73.3 | | Unverified |
| 10 | OmniParser | Accuracy (%) | 73 | | Unverified |