SOTAVerified

Referring expression generation

Generate referring expressions: natural-language descriptions that uniquely identify a target referent (an object in an image or an entity in discourse) among its distractors.
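
In the visually grounded setting, this typically means prompting a vision-language model with an image and a marked target region and asking for a phrase a listener could use to pick out the target (e.g. "the leftmost red mug"). Below is a minimal sketch using the Hugging Face `transformers` LLaVA-1.5 checkpoint, one of the baselines listed; the checkpoint name, prompt template, and image path are illustrative assumptions, not the evaluation setup used by any paper here.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed public checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("scene.jpg")  # placeholder: an image with a marked target
prompt = (
    "USER: <image>\nGenerate a short referring expression that uniquely "
    "identifies the object inside the red bounding box. ASSISTANT:"
)

# Encode image + prompt, then greedily decode a short expression.
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```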

Papers

Showing 1–25 of 84 papers

| Title | Status | Hype |
|---|---|---|
| MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning | Code | 7 |
| Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models | Code | 7 |
| Improved Baselines with Visual Instruction Tuning | Code | 6 |
| Visual Instruction Tuning | Code | 6 |
| Efficient Multimodal Learning from Data-centric Perspective | Code | 5 |
| LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day | Code | 4 |
| MobileVLM : A Fast, Strong and Open Vision Language Assistant for Mobile Devices | Code | 3 |
| Elysium: Exploring Object-level Perception in Videos via MLLM | Code | 2 |
| GLaMM: Pixel Grounding Large Multimodal Model | Code | 2 |
| Frontiers in Intelligent Colonoscopy | Code | 2 |
| Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE | Code | 1 |
| Multi-modal Instruction Tuned LLMs with Fine-grained Visual Perception | Code | 1 |
| Kosmos-2: Grounding Multimodal Large Language Models to the World | Code | 1 |
| Modeling Context in Referring Expressions | Code | 1 |
| Resilience through Scene Context in Visual Referring Expression Generation | Code | 0 |
| Referring Expression Generation Using Entity Profiles | Code | 0 |
| Pento-DIARef: A Diagnostic Dataset for Learning the Incremental Algorithm for Referring Expression Generation from Examples | Code | 0 |
| NeuralREG: An end-to-end approach to referring expression generation | Code | 0 |
| Referring Expression Generation in Visually Grounded Dialogue with Discourse-aware Comprehension Guiding | Code | 0 |
| Enhancing Visual Grounding and Generalization: A Multi-Task Cycle Training Approach for Vision-Language Models | Code | 0 |
| Enriching the WebNLG corpus | Code | 0 |
| Enriching the E2E dataset | Code | 0 |
| Collecting Visually-Grounded Dialogue with A Game Of Sorts | Code | 0 |
| Improving Quality and Efficiency in Plan-based Neural Data-to-Text Generation | Code | 0 |
| Grounding Language in Multi-Perspective Referential Communication | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ColonGPT (w/ LoRA, w/o extra data) | Accuracy | 99.96 | | Unverified |
| 2 | LLaVA-v1.5 (w/ LoRA, w/ extra data) | Accuracy | 99.32 | | Unverified |
| 3 | LLaVA-Med-v1.5 (w/ LoRA, w/o extra data) | Accuracy | 99.3 | | Unverified |
| 4 | MGM-2B (w/o LoRA, w/ extra data) | Accuracy | 98.75 | | Unverified |
| 5 | LLaVA-v1.5 (w/ LoRA, w/o extra data) | Accuracy | 98.58 | | Unverified |
| 6 | MGM-2B (w/o LoRA, w/o extra data) | Accuracy | 98.17 | | Unverified |
| 7 | MobileVLM-1.7B (w/ LoRA, w/ extra data) | Accuracy | 97.87 | | Unverified |
| 8 | MobileVLM-1.7B (w/o LoRA, w/ extra data) | Accuracy | 97.78 | | Unverified |
| 9 | LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) | Accuracy | 97.74 | | Unverified |
| 10 | LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) | Accuracy | 97.35 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) | Accuracy | 70 | | Unverified |
| 2 | LLaVA-v1 (w/ LoRA, w/ extra data) | Accuracy | 46.85 | | Unverified |
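
Every entry above is still Unverified: the Claimed column carries the number reported by the paper, while the Verified column stays empty until a reproduced result is recorded. For reference, here is a minimal sketch of an exact-match accuracy computation of the kind these tables report; the benchmark's actual scoring rule (normalization, matching criteria) is an assumption, not documented here.

```python
def exact_match_accuracy(predictions, references):
    """Percentage of predictions that exactly match their reference
    after light normalization (lowercasing, whitespace stripping).
    NOTE: the benchmark's real scoring rule may differ; this is a sketch."""
    assert len(predictions) == len(references)
    correct = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return 100.0 * correct / len(references)

# Example: 2 of 3 predictions match their references -> 66.67
print(round(exact_match_accuracy(
    ["the red mug", "left chair", "small dog"],
    ["the red mug", "the left chair", "small dog"],
), 2))
```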