
Self-supervised context-aware COVID-19 document exploration through atlas grounding

2020-07-01 · ACL 2020 · Code Available

Dusan Grujicic, Gorjan Radevski, Tinne Tuytelaars, Matthew Blaschko



Abstract

In this paper, we develop a self-supervised grounding of COVID-related medical text based on the actual spatial relationships between the referred anatomical concepts. More specifically, we learn to project sentences into a physical space defined by a three-dimensional anatomical atlas, allowing for a visual approach to navigating COVID-related literature. We design a straightforward and empirically effective training objective that reduces the model's dependency on curated data. We use BERT as the main building block of our model, and our quantitative analysis demonstrates that it learns a context-aware mapping. We illustrate two potential use cases for our approach: one in interactive 3D data exploration, and the other in document retrieval. To accelerate research in this direction, we make public all trained models, the codebase, and the developed tools, which can be accessed at https://github.com/gorjanradevski/macchina/.
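The core idea of the abstract — mapping a sentence embedding onto 3-D atlas coordinates with a regression-style objective — can be illustrated with a minimal sketch. This is not the paper's actual model: the random stand-in embeddings below replace the BERT encoder (which would output 768-dimensional vectors), and the linear "grounding head" trained with mean-squared error is only an assumed, simplified version of the paper's training objective.

```python
import numpy as np

# Minimal sketch: project sentence embeddings onto 3-D atlas coordinates.
# The random embeddings stand in for BERT outputs; the target coordinates
# stand in for the atlas locations of the mentioned anatomical concepts.
rng = np.random.default_rng(0)

D = 16   # embedding dimension (BERT would give 768)
N = 200  # number of training sentences

X = rng.normal(size=(N, D))       # stand-in sentence embeddings
W_true = rng.normal(size=(D, 3))  # hypothetical "true" grounding
Y = X @ W_true                    # stand-in atlas coordinates (x, y, z)

# Learn the grounding head by gradient descent on the MSE objective.
W = np.zeros((D, 3))
lr = 0.05
for _ in range(500):
    pred = X @ W
    grad = 2.0 * X.T @ (pred - Y) / N  # gradient of mean squared error
    W -= lr * grad

mse = float(np.mean((X @ W - Y) ** 2))
print(f"final MSE: {mse:.2e}")
```

In the actual system, the projection head would sit on top of a fine-tuned BERT encoder, and the predicted coordinates could then drive the 3-D exploration and retrieval use cases described above.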
