Extracting Syntactic Trees from Transformer Encoder Self-Attentions

2018-11-01 · WS 2018

David Mareček, Rudolf Rosa


Abstract

This is a work in progress on extracting sentence tree structures from the encoder's self-attention weights when translating into another language using the Transformer neural network architecture. We visualize the extracted structures and discuss their characteristics with respect to existing syntactic theories and annotations.
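To make the idea concrete, here is a minimal sketch of one simple way a tree-like structure could be read off an encoder self-attention matrix. This is an illustration, not the authors' algorithm: it greedily picks each token's most-attended-to token as its head (a maximum-spanning-tree algorithm would be needed to guarantee a well-formed tree in general). The matrix values and the root convention below are made-up assumptions.

```python
import numpy as np

def extract_tree(attn):
    """Greedy head selection from a self-attention matrix.

    attn[i, j] is the weight with which token i attends to token j.
    Token 0 is treated as the root (head index -1). Hypothetical
    illustration only; not the method from the paper.
    """
    n = attn.shape[0]
    a = attn.astype(float).copy()
    np.fill_diagonal(a, -np.inf)  # a token cannot be its own head
    heads = [-1]                  # token 0 is the root
    for i in range(1, n):
        heads.append(int(np.argmax(a[i])))  # most-attended token = head
    return heads

# Toy 4-token attention matrix (each row sums to 1).
attn = np.array([
    [0.1, 0.3, 0.4, 0.2],
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.6, 0.1, 0.1],
    [0.1, 0.2, 0.6, 0.1],
])
print(extract_tree(attn))  # [-1, 0, 1, 2]
```

Greedy per-token argmax can produce cycles on real attention matrices, which is one reason spanning-tree decoding is the usual choice when a strict tree is required.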
