arXiv Open Access 2023

UMDFood: Vision-language models boost food composition compilation

Peihua Ma, Yixin Wu, Ning Yu, Yang Zhang, Michael Backes, Qin Wang, Cheng-I Wei

Abstract

Nutrition information is crucial in precision nutrition and the food industry. The current food composition compilation paradigm relies on laborious and experience-dependent methods, which struggle to keep up with the dynamic consumer market, resulting in delayed and incomplete nutrition data. In addition, earlier machine learning methods either overlook the information in food ingredient statements or ignore the features of food images. To this end, we propose a novel vision-language model, UMDFood-VL, which uses front-of-package labeling and product images to accurately estimate food composition profiles. To empower model training, we established UMDFood-90k, the most comprehensive multimodal food database to date, containing 89,533 samples, each labeled with an image, a text-based ingredient description, and 11 nutrient annotations. UMDFood-VL achieves a macro-AUC-ROC of up to 0.921 for fat content estimation, significantly higher than existing baseline methods and sufficient for the practical requirements of food composition compilation. Meanwhile, for up to 82.2% of selected products, the error between chemical analysis results and model estimates is less than 10%. This performance points toward generalization to other food- and nutrition-related data compilation tasks and catalyzes the evolution of generative AI-based technology in other food applications that require personalization.
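The macro-AUC-ROC reported above treats nutrient estimation as a multi-class problem over discretized content levels and averages per-class one-vs-rest AUC scores. A minimal sketch of that metric in plain Python, assuming binned nutrient labels and per-class model scores (the bin names and data here are illustrative, not from the paper):

```python
def roc_auc(y_true, y_score):
    """AUC for one binary task via the rank-sum (Mann-Whitney U) formulation."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative sample")
    # Fraction of (positive, negative) pairs where the positive outranks
    # the negative; ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


def macro_auc(labels, scores, classes):
    """Macro-average of one-vs-rest AUCs over the given classes.

    labels: list of class names, one per sample (e.g. binned fat levels).
    scores: list of dicts mapping each class name to the model's score.
    """
    aucs = []
    for c in classes:
        binary = [1 if y == c else 0 for y in labels]
        per_class = [s[c] for s in scores]
        aucs.append(roc_auc(binary, per_class))
    return sum(aucs) / len(aucs)
```

Macro averaging weights every class equally, so rare nutrient-level bins count as much as common ones; a micro average would instead be dominated by the majority bin.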


Authors (7)

Peihua Ma
Yixin Wu
Ning Yu
Yang Zhang
Michael Backes
Qin Wang
Cheng-I Wei

Citation Format

Ma, P., Wu, Y., Yu, N., Zhang, Y., Backes, M., Wang, Q., & Wei, C.-I. (2023). UMDFood: Vision-language models boost food composition compilation. arXiv. https://arxiv.org/abs/2306.01747

Journal Information
Year Published: 2023
Language: en
Source Database: arXiv
Access: Open Access