
Joint Patch and Multi-Label Learning for Facial Action Unit Detection

CVPR 2015 · 2015-06-01

Kaili Zhao, Wen-Sheng Chu, Fernando de la Torre, Jeffrey F. Cohn, Honggang Zhang

Abstract

The face is one of the most powerful channels of non-verbal communication. The most commonly used taxonomy for describing facial behaviour is the Facial Action Coding System (FACS). FACS segments the visible effects of facial muscle activation into 30+ action units (AUs). AUs, which may occur alone or in thousands of combinations, can describe nearly all possible facial expressions. Most existing methods for automatic AU detection treat the problem with one-vs-all classifiers and fail to exploit dependencies among AUs and among facial features. We introduce joint patch and multi-label learning (JPML) to address these issues. JPML leverages group sparsity to select a sparse subset of facial patches while learning a multi-label classifier. In four of five comparisons on three diverse datasets (CK+, GFT, and BP4D), JPML produced the highest average F1 scores relative to the state of the art.
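The core idea in the abstract, selecting whole facial patches via group sparsity while fitting all AU labels jointly, can be illustrated with a minimal sketch. This is not the paper's exact formulation: it assumes a squared loss, a plain group-lasso penalty over synthetic "patch" feature blocks, and proximal gradient descent, all chosen for brevity. The function names (`prox_group_lasso`, `fit_sparse_multilabel`) and the synthetic data are hypothetical.

```python
import numpy as np

def prox_group_lasso(W, groups, tau):
    """Group soft-thresholding: shrinks each patch's weight block jointly
    across all labels, zeroing entire groups whose norm falls below tau."""
    W = W.copy()
    for g in groups:
        norm = np.linalg.norm(W[g])
        W[g] = 0.0 if norm <= tau else W[g] * (1.0 - tau / norm)
    return W

def fit_sparse_multilabel(X, Y, groups, lam=0.5, lr=0.005, n_iter=1000):
    """Multi-label least-squares classifier with a group-lasso penalty over
    patch groups, fit by proximal gradient descent. A simplified stand-in
    for JPML's joint patch selection and multi-label learning."""
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y) / n          # gradient of the smooth loss
        W = prox_group_lasso(W - lr * grad, groups, lr * lam)
    return W

# Synthetic demo: 4 "patches" of 5 features each; only the first two
# patches carry signal for the 2 labels, so the penalty should zero
# out the last two patch groups across both labels at once.
rng = np.random.default_rng(0)
n, d_per, k = 400, 5, 2
groups = [np.arange(g * d_per, (g + 1) * d_per) for g in range(4)]
W_true = np.zeros((4 * d_per, k))
W_true[:2 * d_per] = rng.normal(size=(2 * d_per, k))
X = rng.normal(size=(n, 4 * d_per))
Y = X @ W_true + 0.1 * rng.normal(size=(n, k))
W = fit_sparse_multilabel(X, Y, groups)
```

Because the penalty is applied per patch group across all label columns, a patch is kept or discarded for every AU at once, which is the "joint" aspect the abstract highlights.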
