SOTAVerified

Evaluating Pretrained Transformer Models for Entity Linking in Task-Oriented Dialog

2021-12-01 · ICON 2021 · Code Available

Sai Muralidhar Jayanthi, Varsha Embar, Karthik Raghunathan


Abstract

The wide applicability of pretrained transformer models (PTMs) to natural language tasks is well demonstrated, but their ability to comprehend short phrases of text is less explored. To this end, we evaluate different PTMs through the lens of unsupervised entity linking in task-oriented dialog across five characteristics: syntactic, semantic, short-forms, numeric, and phonetic. Our results demonstrate that several of the PTMs produce sub-par results when compared to traditional techniques, albeit competitive with other neural baselines. We find that some of their shortcomings can be addressed by using PTMs fine-tuned for text-similarity tasks, which show an improved ability to comprehend semantic and syntactic correspondences, as well as some improvement on short-form, numeric, and phonetic variations in entity mentions. We perform qualitative analysis to understand nuances in their predictions and discuss scope for further improvements.
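The unsupervised setup the abstract describes amounts to ranking candidate entities by embedding similarity to a dialog mention. Below is a minimal sketch of that idea, assuming the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint as a stand-in for a similarity-fine-tuned PTM; neither the paper's exact models nor its scoring function is specified here, so treat this as an illustration rather than the authors' method.

```python
# Sketch of unsupervised entity linking via embedding similarity.
# Assumptions (not from the paper): sentence-transformers library,
# "all-MiniLM-L6-v2" as a stand-in for a text-similarity fine-tuned PTM.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def link_entity(mention: str, candidates: list[str]) -> str:
    """Return the candidate entity whose embedding is closest to the mention."""
    mention_emb = model.encode(mention, convert_to_tensor=True)
    cand_embs = model.encode(candidates, convert_to_tensor=True)
    scores = util.cos_sim(mention_emb, cand_embs)[0]  # cosine similarity per candidate
    return candidates[int(scores.argmax())]

# Example: a short-form / noisy mention against a small entity catalog,
# the kind of variation the paper's five characteristics probe.
print(link_entity("santa fe", ["Santa Fe Grill", "Sante Fe Cafe", "Fresno Diner"]))
```

In this framing, the five characteristics (syntactic, semantic, short-forms, numeric, phonetic) correspond to different ways the mention string can diverge from the canonical entity name while still needing to rank highest.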
