
Layered Insights: Generalizable Analysis of Authorial Style by Leveraging All Transformer Layers

2025-03-02

Milad Alshomary, Nikhil Reddy Varimalla, Vishal Anand, Kathleen McKeown

Abstract

We propose a new approach to the authorship attribution task that leverages the different linguistic representations learned at the various layers of pre-trained transformer-based models. We evaluate our approach on three datasets, comparing it to a state-of-the-art baseline in both in-domain and out-of-domain scenarios. We find that utilizing multiple transformer layers improves the robustness of authorship attribution models when tested on out-of-domain data, yielding new state-of-the-art results. Our analysis gives further insight into how the model's different layers specialize in representing certain stylistic features that benefit the model when tested out of domain.
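The core idea in the abstract, combining representations from all transformer layers rather than relying only on the final layer, can be sketched roughly as follows. This is an illustrative assumption, not the paper's exact method: the hidden states are simulated with random arrays (in practice they would come from a pre-trained transformer, e.g. via `output_hidden_states=True` in Hugging Face Transformers), and the mean pooling and uniform layer weights are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: hidden states from L transformer layers for one document,
# each of shape (seq_len, hidden_dim). Simulated here; a real pipeline would
# obtain these from a pre-trained transformer.
num_layers, seq_len, hidden_dim = 12, 32, 64
hidden_states = [rng.normal(size=(seq_len, hidden_dim)) for _ in range(num_layers)]

def layerwise_style_embedding(hidden_states, layer_weights=None):
    """Mean-pool each layer over tokens, then combine the per-layer vectors
    with a weight per layer (uniform here; could be learned). This is one
    simple way to leverage all layers for a style representation."""
    pooled = np.stack([h.mean(axis=0) for h in hidden_states])  # (L, D)
    if layer_weights is None:
        layer_weights = np.full(len(hidden_states), 1.0 / len(hidden_states))
    return layer_weights @ pooled  # (D,)

emb = layerwise_style_embedding(hidden_states)
print(emb.shape)  # (64,)
```

The resulting document embedding could then feed a standard attribution classifier; the hypothesis is that lower layers capture surface/lexical style cues while higher layers capture more abstract ones, so mixing them transfers better across domains.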
