Multi-modal Image Processing based on Coupled Dictionary Learning

2018-06-26

Pingfan Song, Miguel R. D. Rodrigues

Abstract

In real-world scenarios, many data processing problems involve heterogeneous images associated with different imaging modalities. Since these multi-modal images originate from the same phenomenon, it is realistic to assume that they share common attributes or characteristics. In this paper, we propose a multi-modal image processing framework based on coupled dictionary learning to capture similarities and disparities between different image modalities. In particular, our framework can capture favorable structural similarities across image modalities, such as edges, corners, and other elementary primitives, in a learned sparse transform domain rather than the original pixel domain; these shared structures can then be exploited to improve a number of image processing tasks such as denoising, inpainting, and super-resolution. Practical experiments demonstrate that incorporating multi-modal information using our framework brings notable benefits.
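The core idea — pairing patches from two modalities so they share one sparse code over coupled dictionaries — can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a generic alternating scheme (greedy orthogonal matching pursuit for the codes, a least-squares update for the dictionaries) applied to the stacked modalities, with all function names and parameters being illustrative assumptions:

```python
import numpy as np

def omp(D, z, k):
    """Orthogonal matching pursuit (illustrative): greedily select k atoms
    of dictionary D to approximate signal z, refitting coefficients by
    least squares after each selection."""
    residual, idx = z.copy(), []
    a = np.zeros(D.shape[1])
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], z, rcond=None)
        residual = z - D[:, idx] @ coef
    a[idx] = coef
    return a

def coupled_dictionary_learning(X, Y, n_atoms=32, sparsity=3, n_iter=20, seed=0):
    """Toy coupled dictionary learning: stack paired patches X[:, i], Y[:, i]
    so both modalities share a single sparse code, then alternate sparse
    coding and a least-squares dictionary update on the joint matrix."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])                       # joint multi-modal signals
    D = rng.standard_normal((Z.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((n_atoms, Z.shape[1]))
    for _ in range(n_iter):
        A = np.array([omp(D, z, sparsity) for z in Z.T]).T   # shared codes
        D = Z @ np.linalg.pinv(A)               # least-squares dictionary fit
        D /= np.linalg.norm(D, axis=0) + 1e-12  # renormalize atoms
    dx = X.shape[0]
    # Split the joint dictionary into the two coupled modality dictionaries;
    # the shared codes A tie corresponding atoms of Dx and Dy together.
    return D[:dx], D[dx:], A
```

Usage amounts to feeding paired patch matrices from the two modalities; the learned `Dx`, `Dy` then allow one modality to guide the reconstruction of the other (e.g., for denoising or super-resolution) because both are expressed through the same sparse code.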
