SOTAVerified

Natural Language Visual Grounding

Papers

Showing 1–10 of 32 papers

| Title | Status | Hype |
| --- | --- | --- |
| OmniParser for Pure Vision Based GUI Agent | Code | 12 |
| Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution | Code | 11 |
| MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning | Code | 7 |
| ShowUI: One Vision-Language-Action Model for GUI Visual Agent | Code | 5 |
| CogAgent: A Visual Language Model for GUI Agents | Code | 5 |
| Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | Code | 5 |
| Groma: Localized Visual Tokenization for Grounding Multimodal Large Language Models | Code | 4 |
| Aria-UI: Visual Grounding for GUI Instructions | Code | 3 |
| Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction | Code | 3 |
| OS-ATLAS: A Foundation Action Model for Generalist GUI Agents | Code | 3 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | UGround-V1-7B | Accuracy (%) | 86.34 | | Unverified |
| 2 | Aguvis-7B | Accuracy (%) | 83 | | Unverified |
| 3 | OS-Atlas-Base-7B | Accuracy (%) | 82.47 | | Unverified |
| 4 | Aria-UI | Accuracy (%) | 81.1 | | Unverified |
| 5 | Aguvis-G-7B | Accuracy (%) | 81 | | Unverified |
| 6 | UGround-V1-2B | Accuracy (%) | 77.67 | | Unverified |
| 7 | ShowUI | Accuracy (%) | 75.1 | | Unverified |
| 8 | ShowUI-G | Accuracy (%) | 75 | | Unverified |
| 9 | UGround | Accuracy (%) | 73.3 | | Unverified |
| 10 | OmniParser | Accuracy (%) | 73 | | Unverified |