LG-Eval: A Toolkit for Creating Online Language Evaluation Experiments

2012-05-01 · LREC 2012

Eric Kow, Anja Belz

Abstract

In this paper we describe the LG-Eval toolkit for creating online language evaluation experiments. LG-Eval is the direct result of our work setting up and carrying out the human evaluation experiments in several of the Generation Challenges shared tasks. It provides tools for creating experiments with different kinds of rating tools, allocating items to evaluators, and collecting the evaluation scores.
