SOTAVerified

Dense Captioning

Papers

Showing 1–10 of 69 papers

| Title | Status | Hype |
|---|---|---|
| PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning | Code | 4 |
| 3D-LLM: Injecting the 3D World into Large Language Models | Code | 3 |
| LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning | Code | 3 |
| TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding | Code | 2 |
| 3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment | Code | 2 |
| LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning | Code | 2 |
| GRiT: A Generative Region-to-text Transformer for Object Understanding | Code | 2 |
| Grounded 3D-LLM with Referent Tokens | Code | 2 |
| ControlCap: Controllable Region-level Captioning | Code | 2 |
| TOD3Cap: Towards 3D Dense Captioning in Outdoor Scenes | Code | 2 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ControlCap | mAP | 18.2 | — | Unverified |
| 2 | GRiT (ViT-B) | mAP | 15.5 | — | Unverified |
| 3 | CAG-Net | mAP | 10.5 | — | Unverified |
| 4 | FCLN | mAP | 5.4 | — | Unverified |