[CCP18] Cross-modal Retrieval in the Cooking Context: Learning Semantic Text-Image Embeddings
Peer-reviewed international conference:
41st International ACM SIGIR Conference on Research and Development in Information Retrieval,
July 2018,
Ann Arbor,
USA,
Keywords: Deep learning, Text-image retrieval, Multi-Modal Embeddings
Abstract:
Designing powerful tools that support cooking activities has rapidly
gained popularity due to the massive amounts of available data,
as well as recent advances in machine learning capable of
analyzing them. In this paper, we propose a cross-modal retrieval
model that aligns visual and textual data (such as pictures of dishes and
their recipes) in a shared representation space. We describe an
effective learning scheme, capable of tackling large-scale problems,
and validate it on the Recipe1M dataset, which contains nearly 1 million
picture-recipe pairs. We demonstrate the effectiveness of our approach
against previous state-of-the-art models and present qualitative
results on computational cooking use cases.
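To illustrate the retrieval setting the abstract describes, here is a minimal NumPy sketch of cross-modal retrieval in a shared embedding space. The dimensions, random projections, and feature vectors are all hypothetical stand-ins: in the paper, the projections are learned from picture-recipe pairs, whereas here they are untrained and serve only to show the mechanics (project both modalities into a common space, L2-normalize, rank by cosine similarity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for pre-extracted features (hypothetical dimensions;
# the real model learns these from picture-recipe pairs).
n_pairs, d_img, d_txt, d_emb = 5, 16, 12, 8
img_feats = rng.normal(size=(n_pairs, d_img))  # image features
txt_feats = rng.normal(size=(n_pairs, d_txt))  # recipe-text features

# Hypothetical projections into the shared space (random here,
# trained end to end in the actual model).
W_img = rng.normal(size=(d_img, d_emb))
W_txt = rng.normal(size=(d_txt, d_emb))

def embed(x, W):
    """Project into the shared space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

img_emb = embed(img_feats, W_img)
txt_emb = embed(txt_feats, W_txt)

# Cosine-similarity matrix: entry (i, j) scores picture i against recipe j.
sim = img_emb @ txt_emb.T

# Retrieval: for each picture, rank all recipes by decreasing similarity.
ranking = np.argsort(-sim, axis=1)
```

With trained projections, matching picture-recipe pairs would score highest, so the correct recipe would appear near the top of each picture's ranking.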