Cross-Modal Retrieval in the Cooking Context: Learning Semantic Text-Image Embeddings

2018-04-30 · Code Available

Micael Carvalho, Rémi Cadène, David Picard, Laure Soulier, Nicolas Thome, Matthieu Cord

Abstract

Designing powerful tools that support cooking activities has rapidly gained popularity due to the massive amounts of available data, as well as recent advances in machine learning capable of analyzing them. In this paper, we propose a cross-modal retrieval model that aligns visual and textual data (such as pictures of dishes and their recipes) in a shared representation space. We describe an effective learning scheme capable of tackling large-scale problems, and validate it on the Recipe1M dataset, which contains nearly 1 million picture-recipe pairs. We show the effectiveness of our approach against previous state-of-the-art models and present qualitative results on computational cooking use cases.
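
The core idea of such a model is to embed images and recipes into a common space where matching pairs are close and non-matching pairs are far apart. The paper's AdaMine scheme uses adaptive triplet mining with an additional semantic regularization; the snippet below is not that exact scheme, but a minimal sketch of a bi-directional triplet ranking loss with in-batch hard-negative mining in PyTorch, to illustrate how a shared image-recipe embedding space can be trained. The function name and margin value are illustrative assumptions, not from the paper.

```python
import torch
import torch.nn.functional as F

def bidirectional_triplet_loss(img_emb, txt_emb, margin=0.3):
    """Simplified sketch (not the paper's exact AdaMine loss):
    pushes each image toward its paired recipe and away from the
    hardest non-matching recipe in the batch, and vice versa.
    Embeddings are L2-normalized so dot products are cosine similarities."""
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)
    sim = img_emb @ txt_emb.t()                 # pairwise cosine similarities
    pos = sim.diag()                            # matching picture-recipe pairs
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(mask, float('-inf'))  # exclude the positive pair
    hard_i2t = neg.max(dim=1).values            # hardest recipe per image
    hard_t2i = neg.max(dim=0).values            # hardest image per recipe
    loss_i2t = F.relu(margin - pos + hard_i2t)  # image-to-text direction
    loss_t2i = F.relu(margin - pos + hard_t2i)  # text-to-image direction
    return (loss_i2t + loss_t2i).mean()

# Example usage with random batch embeddings (dimensions are arbitrary):
img = torch.randn(32, 1024)  # image encoder outputs
txt = torch.randn(32, 1024)  # recipe encoder outputs
loss = bidirectional_triplet_loss(img, txt)
```

At retrieval time, ranking reduces to a nearest-neighbor search in the shared space: encode the query in one modality and sort the other modality's embeddings by cosine similarity.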

Tasks

Cross-Modal Retrieval

Benchmark Results

Dataset    Model    Metric             Claimed  Verified  Status
Recipe1M+  AdaMine  Image-to-text R@1  39.8     —         Unverified

Reproductions

None yet — be the first to reproduce this paper.