SOTAVerified

Caption Generation

Papers

Showing 51–100 of 310 papers

Title | Status | Hype
Improving Image Captioning by Leveraging Intra- and Inter-layer Global Representation in Transformer Network | Code | 1
TAP: Text-Aware Pre-training for Text-VQA and Text-Caption | Code | 1
Improving Image Captioning with Better Use of Captions | Code | 1
Say As You Wish: Fine-grained Control of Image Caption Generation with Abstract Scene Graphs | Code | 1
Deep Reinforcement Learning For Sequence to Sequence Models | Code | 1
Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks | Code | 1
Frame- and Segment-Level Features and Candidate Pool Evaluation for Video Caption Generation | Code | 1
Video captioning with recurrent networks based on frame- and video-level features and visual content classification | Code | 1
Microsoft COCO Captions: Data Collection and Evaluation Server | Code | 1
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention | Code | 1
GNN-ViTCap: GNN-Enhanced Multiple Instance Learning with Vision Transformers for Whole Slide Image Classification and Captioning | - | 0
EditInspector: A Benchmark for Evaluation of Text-Guided Image Edits | - | 0
Attention-based transformer models for image captioning across languages: An in-depth survey and evaluation | - | 0
NEXT: Multi-Grained Mixture of Experts via Text-Modulation for Multi-Modal Object Re-ID | - | 0
GC-KBVQA: A New Four-Stage Framework for Enhancing Knowledge Based Visual Question Answering Performance | - | 0
Temporal Object Captioning for Street Scene Videos from LiDAR Tracks | - | 0
Vision-Language Modeling Meets Remote Sensing: Models, Datasets and Perspectives | - | 0
TimeSoccer: An End-to-End Multimodal Large Language Model for Soccer Commentary Generation | - | 0
Low-hallucination Synthetic Captions for Large-Scale Vision-Language Model Pre-training | - | 0
3D CoCa: Contrastive Learners are 3D Captioners | Code | 0
Group-based Distinctive Image Captioning with Memory Difference Encoding and Attention | - | 0
Identifying Multi-modal Knowledge Neurons in Pretrained Transformers via Two-stage Filtering | - | 0
LaPIG: Cross-Modal Generation of Paired Thermal and Visible Facial Images | - | 0
IDEA: Inverted Text with Cooperative Deformable Aggregation for Multi-modal Object Re-Identification | - | 0
Integrating Frequency-Domain Representations with Low-Rank Adaptation in Vision-Language Models | - | 0
Fine-Grained Video Captioning through Scene Graph Consolidation | - | 0
LongCaptioning: Unlocking the Power of Long Caption Generation in Large Multimodal Models | - | 0
Enhancing Chest X-ray Classification through Knowledge Injection in Cross-Modality Learning | - | 0
FE-LWS: Refined Image-Text Representations via Decoder Stacking and Fused Encodings for Remote Sensing Image Captioning | - | 0
Expertized Caption Auto-Enhancement for Video-Text Retrieval | Code | 0
Do Large Multimodal Models Solve Caption Generation for Scientific Figures? Lessons Learned from SCICAP Challenge 2023 | - | 0
MAMS: Model-Agnostic Module Selection Framework for Video Captioning | - | 0
Measuring and Mitigating Hallucinations in Vision-Language Dataset Generation for Remote Sensing | - | 0
Understanding How Paper Writers Use AI-Generated Captions in Figure Caption Writing | - | 0
Multi-LLM Collaborative Caption Generation in Scientific Documents | Code | 0
Time Series Language Model for Descriptive Caption Generation | - | 0
Unleashing Text-to-Image Diffusion Prior for Zero-Shot Image Captioning | - | 0
Multimodal Preference Data Synthetic Alignment with Reward Model | Code | 0
Learning from Massive Human Videos for Universal Humanoid Pose Control | - | 0
From Simple to Professional: A Combinatorial Controllable Image Captioning Agent | Code | 0
DIR: Retrieval-Augmented Image Captioning with Comprehensive Understanding | - | 0
Benchmarking Multimodal Models for Ukrainian Language Understanding Across Academic and Cultural Domains | - | 0
Everything is a Video: Unifying Modalities through Next-Frame Prediction | - | 0
Grounded Video Caption Generation | - | 0
SLAM-AAC: Enhancing Audio Captioning with Paraphrasing Augmentation and CLAP-Refine through LLMs | Code | 0
GEM-VPC: A dual Graph-Enhanced Multimodal integration for Video Paragraph Captioning | - | 0
EzAudio: Enhancing Text-to-Audio Generation with Efficient Diffusion Transformer | - | 0
CoVLA: Comprehensive Vision-Language-Action Dataset for Autonomous Driving | - | 0
Mol2Lang-VLM: Vision- and Text-Guided Generative Pre-trained Language Models for Advancing Molecule Captioning through Multimodal Fusion | Code | 0
See It All: Contextualized Late Aggregation for 3D Dense Captioning | - | 0
Page 2 of 7

No leaderboard results yet.