SOTAVerified

Image Captioning

Image Captioning is the task of describing the content of an image in words. This task lies at the intersection of computer vision and natural language processing. Most image captioning systems use an encoder-decoder framework, where an input image is encoded into an intermediate representation of the information in the image, which is then decoded into a descriptive text sequence. The most popular benchmarks are nocaps and COCO, and models are typically evaluated using the BLEU or CIDEr metric.

(Image credit: Reflective Decoding Network for Image Captioning, ICCV'19)

Papers

Showing 1–10 of 1878 papers


Benchmark Results

| #  | Model                              | Metric | Claimed | Verified | Status     |
|----|------------------------------------|--------|---------|----------|------------|
| 1  | BLIP-2 ViT-G FlanT5 XL (zero-shot) | CIDEr  | 121.6   | —        | Unverified |
| 2  | BLIP-2 ViT-G OPT 6.7B (zero-shot)  | CIDEr  | 121     | —        | Unverified |
| 3  | BLIP-2 ViT-G OPT 2.7B (zero-shot)  | CIDEr  | 119.7   | —        | Unverified |
| 4  | LEMON_large                        | CIDEr  | 113.4   | —        | Unverified |
| 5  | BLIP_ViT-L                         | CIDEr  | 113.2   | —        | Unverified |
| 6  | SimVLM                             | CIDEr  | 112.2   | —        | Unverified |
| 7  | BLIP_CapFilt-L                     | CIDEr  | 109.6   | —        | Unverified |
| 8  | OmniVL                             | CIDEr  | 107.5   | —        | Unverified |
| 9  | VinVL                              | CIDEr  | 95.5    | —        | Unverified |
| 10 | Enc-Dec                            | CIDEr  | 90.2    | —        | Unverified |