AILAB-Udine@SMM4H’22: Limits of Transformers and BERT Ensembles
2022-10-01 · SMM4H (COLING) 2022
Beatrice Portelli, Simone Scaboro, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra
Abstract
This paper describes the models developed by the AILAB-Udine team for the SMM4H’22 Shared Task. We explored the limits of Transformer-based models on text classification, entity extraction, and entity normalization, tackling Tasks 1, 2, 5, 6 and 10. Our main takeaways from participating in the different tasks are the strong positive effect of combining different architectures through ensemble learning, and the great potential of generative models for term normalization.
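The abstract highlights combining different architectures through ensemble learning. A minimal sketch of one common ensembling strategy, hard majority voting over per-model label predictions, is shown below; the model names and the ADE/NoADE labels are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote over per-model label predictions.

    predictions: list of lists, one inner list per model,
    each holding one predicted label per example.
    """
    n_examples = len(predictions[0])
    voted = []
    for i in range(n_examples):
        # Collect each model's label for example i and keep the most common one
        labels = [model_preds[i] for model_preds in predictions]
        voted.append(Counter(labels).most_common(1)[0][0])
    return voted

# Hypothetical predictions from three different transformer architectures
bert_preds    = ["ADE", "NoADE", "ADE"]
roberta_preds = ["ADE", "ADE",   "NoADE"]
deberta_preds = ["ADE", "NoADE", "NoADE"]

print(ensemble_vote([bert_preds, roberta_preds, deberta_preds]))
# → ['ADE', 'NoADE', 'NoADE']
```

Hard voting like this only needs the final labels from each model; when per-class probabilities are available, averaging them (soft voting) is a common alternative.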