Assessing SRL Frameworks with Automatic Training Data Expansion

2017-04-01 · WS 2017

Silvana Hartmann, Éva Mújdricza-Maydt, Ilia Kuznetsov, Iryna Gurevych, Anette Frank

Abstract

We present the first experiment-based study that explicitly contrasts the three major semantic role labeling frameworks. As a prerequisite, we create a dataset labeled with parallel FrameNet-, PropBank-, and VerbNet-style labels for German. We train a state-of-the-art SRL tool for German on the different annotation styles and provide a comparative analysis across frameworks. We further explore the behavior of the frameworks with automatic training data generation. VerbNet provides greater semantic expressivity than PropBank, and we find that its generalization capacity approaches PropBank's in SRL training, but it benefits less from training data expansion than the data-sparse FrameNet.