Visual grounding for desktop graphical user interfaces

2024-05-05

Tassnim Dardouri, Laura Minkova, Jessica López Espejel, Walid Dahhane, El Hassane Ettifouri

Abstract

Most instance perception and image understanding solutions focus mainly on natural images. However, applications for synthetic images, and more specifically for images of Graphical User Interfaces (GUIs), remain limited. This hinders the development of autonomous, computer-vision-powered Artificial Intelligence (AI) agents. In this work, we present Instruction Visual Grounding (IVG), a multi-modal solution for object identification in a GUI. More precisely, given a natural-language instruction and a GUI screen, IVG locates the coordinates of the element on the screen where the instruction would be executed. To this end, we develop two methods. The first is a three-part architecture that combines a Large Language Model (LLM) with an object detection model. The second uses a multi-modal foundation model.
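The task IVG addresses can be sketched as a function from an instruction and a set of detected GUI elements to a click coordinate. The sketch below is illustrative only: the class and function names (`BoundingBox`, `ground_instruction`) are hypothetical, and the naive substring match stands in for the paper's LLM-based matching, which is not described here.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    # Pixel coordinates of a detected GUI element (left, top, right, bottom).
    left: int
    top: int
    right: int
    bottom: int

    def center(self) -> tuple[int, int]:
        # The click point returned for this element.
        return ((self.left + self.right) // 2, (self.top + self.bottom) // 2)


def ground_instruction(instruction: str,
                       detections: dict[str, BoundingBox]) -> tuple[int, int]:
    """Toy stand-in for an IVG-style pipeline: an object detector supplies
    labeled boxes, and a language model matches the instruction to one of
    them. Here the 'language model' is a naive substring match."""
    for label, box in detections.items():
        if label.lower() in instruction.lower():
            return box.center()
    raise ValueError("no GUI element matches the instruction")


# Example GUI: two detected elements on a 1920x1080 screenshot.
detections = {
    "search bar": BoundingBox(400, 20, 1200, 60),
    "submit button": BoundingBox(860, 900, 1060, 960),
}
print(ground_instruction("Click the submit button", detections))  # → (960, 930)
```

In the paper's first method, the matching step would be performed by an LLM and the boxes by a trained object detection model; the second method replaces both stages with a single multi-modal foundation model.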
