VTCC-NLP at NL4Opt competition subtask 1: An Ensemble Pre-trained language models for Named Entity Recognition

2022-12-14

Xuan-Dung Doan


Abstract

We propose an ensemble of three pre-trained language models (XLM-R, BART, and DeBERTa-V3) to produce richer contextualized embeddings for named entity recognition. Our model achieves a 92.9% F1 score on the test set and ranks 5th on the leaderboard of NL4Opt competition subtask 1.
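The abstract states that three pre-trained language models are combined into a single contextual representation, but does not specify the combination mechanism. The sketch below assumes a simple concatenation of per-token embeddings followed by a linear token-classification head; the encoder functions, hidden sizes, and tag count are all stand-ins, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden sizes per model (assumed; real sizes depend on the checkpoints used).
HIDDEN = {"xlm-r": 768, "bart": 768, "deberta-v3": 768}
NUM_TAGS = 9   # assumed BIO tag count for the NL4Opt entity types
SEQ_LEN = 12   # tokens in a toy sentence

def encode(model_name: str, seq_len: int) -> np.ndarray:
    """Stand-in encoder: one contextual embedding per token (random here)."""
    return rng.normal(size=(seq_len, HIDDEN[model_name]))

# Ensemble by concatenating the three models' token embeddings along the feature axis.
embeddings = np.concatenate([encode(m, SEQ_LEN) for m in HIDDEN], axis=-1)

# A linear token-classification head over the concatenated features.
W = rng.normal(size=(embeddings.shape[-1], NUM_TAGS))
logits = embeddings @ W
pred_tags = logits.argmax(axis=-1)   # one predicted tag id per token

print(embeddings.shape)  # (12, 2304): 3 models x 768 features
print(pred_tags.shape)   # (12,)
```

In practice each encoder would be a fine-tuned transformer sharing (or aligning) a tokenization, and the head would be trained jointly; this sketch only shows the shape of the concatenation-based ensembling.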
