SOTAVerified

Referring expression generation

Generate referring expressions

Papers

Showing 1–25 of 84 papers

Title | Status | Hype
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models | Code | 7
MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning | Code | 7
Visual Instruction Tuning | Code | 6
Improved Baselines with Visual Instruction Tuning | Code | 6
Efficient Multimodal Learning from Data-centric Perspective | Code | 5
LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | Code | 4
MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices | Code | 3
Elysium: Exploring Object-level Perception in Videos via MLLM | Code | 2
Frontiers in Intelligent Colonoscopy | Code | 2
GLaMM: Pixel Grounding Large Multimodal Model | Code | 2
Multi-modal Instruction Tuned LLMs with Fine-grained Visual Perception | Code | 1
Modeling Context in Referring Expressions | Code | 1
Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1
Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE | Code | 1
Decoding Strategies for Neural Referring Expression Generation | | 0
Creating Training Corpora for NLG Micro-Planners | | 0
Assessing Neural Referential Form Selectors on a Realistic Multilingual Dataset | | 0
CoNAN: A Complementary Neighboring-based Attention Network for Referring Expression Generation | | 0
Comprehension-guided referring expressions | | 0
A Predictive Model for Notional Anaphora in English | | 0
Adapting Descriptions of People to the Point of View of a Moving Observer | | 0
Exploring the Behavior of Classic REG Algorithms in the Description of Characters in 3D Images | | 0
Combining Referring Expression Generation and Surface Realization: A Corpus-Based Investigation of Architectures | | 0
Fuzzy Logic for Vagueness Management in Referring Expression Generation | | 0
Generating Quantified Referring Expressions through Attention-Driven Incremental Perception | | 0
Page 1 of 4

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ColonGPT (w/ LoRA, w/o extra data) | Accuracy | 99.96 | | Unverified
2 | LLaVA-v1.5 (w/ LoRA, w/ extra data) | Accuracy | 99.32 | | Unverified
3 | LLaVA-Med-v1.5 (w/ LoRA, w/o extra data) | Accuracy | 99.3 | | Unverified
4 | MGM-2B (w/o LoRA, w/ extra data) | Accuracy | 98.75 | | Unverified
5 | LLaVA-v1.5 (w/ LoRA, w/o extra data) | Accuracy | 98.58 | | Unverified
6 | MGM-2B (w/o LoRA, w/o extra data) | Accuracy | 98.17 | | Unverified
7 | MobileVLM-1.7B (w/ LoRA, w/ extra data) | Accuracy | 97.87 | | Unverified
8 | MobileVLM-1.7B (w/o LoRA, w/ extra data) | Accuracy | 97.78 | | Unverified
9 | LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) | Accuracy | 97.74 | | Unverified
10 | LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) | Accuracy | 97.35 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) | Accuracy | 70 | | Unverified
2 | LLaVA-v1 (w/ LoRA, w/ extra data) | Accuracy | 46.85 | | Unverified