
OpenSLU: A Unified, Modularized, and Extensible Toolkit for Spoken Language Understanding

2023-05-17

Libo Qin, Qiguang Chen, Xiao Xu, Yunlong Feng, Wanxiang Che


Abstract

Spoken Language Understanding (SLU) is one of the core components of a task-oriented dialogue system; it aims to extract the semantic meaning of user queries (e.g., intents and slots). In this work, we introduce OpenSLU, an open-source toolkit that provides a unified, modularized, and extensible framework for spoken language understanding. Specifically, OpenSLU unifies 10 SLU models covering both single-intent and multi-intent scenarios, supporting non-pretrained and pretrained models alike. Additionally, OpenSLU is highly modularized and extensible: it decomposes the model architecture, inference, and learning process into reusable modules, which allows researchers to quickly set up SLU experiments with highly flexible configurations. OpenSLU is implemented based on PyTorch and released at https://github.com/LightChen233/OpenSLU.
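To make the two subtasks concrete, here is a toy, rule-based sketch of what an SLU model's output looks like: one intent label per query, plus one BIO slot tag per token. This is a hypothetical illustration of the task itself, not OpenSLU's actual API or models; the keyword rules and label names are invented stand-ins for a trained classifier.

```python
def toy_slu(query: str):
    """Toy SLU: return (intent, per-token BIO slot tags) for a query.

    Hypothetical keyword rules stand in for trained intent and slot
    models; real toolkits like OpenSLU learn these jointly.
    """
    tokens = query.lower().split()
    # Intent detection: a single label for the whole utterance.
    intent = "book_flight" if "flight" in tokens else "unknown"
    # Slot filling: a BIO tag per token (here, a tiny city gazetteer).
    cities = {"boston", "denver"}
    slots = []
    prev_was_city = False
    for tok in tokens:
        if tok in cities:
            slots.append("I-city" if prev_was_city else "B-city")
            prev_was_city = True
        else:
            slots.append("O")
            prev_was_city = False
    return intent, slots


intent, slots = toy_slu("show me a flight from boston to denver")
# intent is "book_flight"; "boston" and "denver" are tagged B-city.
```

A real SLU model replaces these rules with learned encoders and decoders, but the input/output contract — utterance in, intent plus aligned slot sequence out — is the same one the 10 models in OpenSLU share.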
