
Towards the Human Global Context: Does the Vision-Language Model Really Judge Like a Human Being?

2022-07-18

Sangmyeong Woh, Jaemin Lee, Ho joong Kim, Jinsuk Lee


Abstract

As computer vision and NLP make progress, Vision-Language (VL) modeling is becoming an important area of research. Despite its importance, evaluation metrics for this research domain are still at a preliminary stage of development. In this paper, we propose a quantitative metric, the "Equivariance Score", and an evaluation dataset, "Human Puzzle", to assess whether a VL model understands an image like a human. We observed that VL models do not interpret the overall context of an input image but instead show biases toward a specific object or shape that forms the local context. We aim to quantitatively measure a model's performance in understanding context. To verify the capabilities of current VL models, we sliced each original input image into pieces and placed them randomly, distorting the global context of the image. Our paper discusses each VL model's level of interpretation of global context and addresses how structural characteristics influenced the results.
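The slice-and-shuffle distortion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `puzzle_shuffle`, the grid size, and the use of NumPy are all assumptions made for demonstration.

```python
import numpy as np

def puzzle_shuffle(image, grid=3, seed=None):
    """Slice an image into a grid x grid puzzle and randomly rearrange
    the pieces, distorting the global context while keeping local patches
    intact. `image` is an H x W x C array whose height and width are
    assumed divisible by `grid`."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[0] // grid, image.shape[1] // grid
    # Cut the image into grid*grid tiles, row-major.
    tiles = [image[r * h:(r + 1) * h, c * w:(c + 1) * w]
             for r in range(grid) for c in range(grid)]
    # Permute the tiles at random.
    shuffled = [tiles[i] for i in rng.permutation(len(tiles))]
    # Reassemble the permuted tiles into a single image.
    rows = [np.concatenate(shuffled[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)
```

Because the pieces are only rearranged, every local patch survives the transformation unchanged; a model that relies on local objects rather than global layout would behave similarly on the original and shuffled inputs.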
