Food Image Segmentation for Dietary Assessment

Author(s):  
Joachim Dehais ◽  
Marios Anthimopoulos ◽  
Stavroula Mougiakakou
2021 ◽  
Vol 2 (3) ◽  
pp. 1-17
Author(s):  
Sri Kalyan Yarlagadda ◽  
Daniel Mas Montserrat ◽  
David Güera ◽  
Carol J. Boushey ◽  
Deborah A. Kerr ◽  
...  

Advances in image-based dietary assessment have allowed nutrition professionals and researchers to improve the accuracy of dietary assessment: images of consumed food are captured with smartphones or wearable devices and then analyzed with computer vision methods to estimate the energy and nutrient content of the foods. Food image segmentation, which determines the regions of an image where foods are located, plays an important role in this process. Current methods are data-dependent and therefore do not generalize well across food types. To address this problem, we propose a class-agnostic food image segmentation method. Our method uses a pair of eating-scene images, one captured before eating begins and one after eating is completed. Using information from both images, we segment the foods by finding the salient missing objects, without any prior information about the food class. We model a paradigm of top-down saliency, in which a task guides the attention of the human visual system, to find the salient missing objects in a pair of images. The method is validated on food images collected in a dietary study and shows promising results.
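The abstract describes the before/after pairing only at a high level. The sketch below is a loose illustration of the underlying idea, substituting a plain pixel difference plus thresholding for the authors' top-down saliency model; the OpenCV pipeline and file names are assumptions made for illustration, not the published method.

```python
# Illustrative sketch only: a simplified "salient missing object" segmentation
# based on differencing a before-eating and an after-eating image.
# The published method models top-down saliency; this toy version just
# thresholds the per-pixel difference. File names are hypothetical.
import cv2
import numpy as np

def segment_missing_food(before_path: str, after_path: str) -> np.ndarray:
    """Return a binary mask of regions present before eating but changed/absent after."""
    before = cv2.imread(before_path)
    after = cv2.imread(after_path)
    # Match image sizes (the real method would properly register the two views).
    after = cv2.resize(after, (before.shape[1], before.shape[0]))

    # Per-pixel absolute difference, collapsed to a single-channel magnitude.
    diff = cv2.absdiff(before, after)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

    # Otsu thresholding keeps the most strongly changed regions.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological opening removes small specks caused by lighting noise.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    food_mask = segment_missing_food("before_eating.jpg", "after_eating.jpg")
    cv2.imwrite("food_mask.png", food_mask)
```

In practice the two photographs would first need to be registered and the saliency model replaces the simple threshold; the sketch only conveys the "find what disappeared" intuition.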


2015 ◽  
Vol 26 (2) ◽  
pp. 025702 ◽  
Author(s):  
Hsin-Chen Chen ◽  
Wenyan Jia ◽  
Xin Sun ◽  
Zhaoxin Li ◽  
Yuecheng Li ◽  
...  

2020 ◽  
Vol 23 (15) ◽  
pp. 2700-2710
Author(s):  
Tsz-Kiu Chui ◽  
Jindong Tan ◽  
Yan Li ◽  
Hollie A. Raynor

Abstract
Objective: To validate DietCam, an automated food image identification system that has not previously been validated, in identifying foods of different shapes and complexities from passively captured digital images.
Design: Participants wore Sony SmartEyeglass, which automatically took three images per second, while they consumed two meals, each containing four foods that represented regular-shaped (cookies), irregular-shaped (chips), single (grapes), and complex (chicken and rice) foods. Non-blurry images from each meal's first 5 min were coded by human raters and compared with DietCam output. Comparisons produced four outcomes: true positive (rater and DietCam both report food), false positive (rater reports no food, DietCam reports food), true negative (rater and DietCam both report no food), or false negative (rater reports food, DietCam reports no food).
Setting: Laboratory meal.
Participants: Thirty men and women (25.1 ± 6.6 years, 22.7 ± 1.6 kg/m², 46.7% White).
Results: Identification accuracy was 81.2% and 79.7% in meals A and B, respectively, for food and non-food images, and 78.7% and 77.5% in meals A and B, respectively, for food images only. For food images only, no effect of food shape or complexity was found. When image types were analysed separately (100% of the food in the image and on the plate, less than 100% of the food in the image and on the plate, and food not on the plate), images with food on the plate had slightly higher accuracy.
Conclusions: DietCam shows promise for automated food image identification and is most accurate when images show food on the plate.
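The four comparison outcomes above combine into an identification-accuracy figure in the standard way: the share of images on which rater and DietCam agree. A minimal sketch, using made-up counts rather than the study's data:

```python
# Hedged sketch: standard accuracy from the four rater-vs-DietCam outcomes.
# The counts below are placeholders, not values from the study.
def identification_accuracy(tp: int, fp: int, tn: int, fn: int) -> float:
    """Fraction of images on which rater and DietCam agree (food or no food)."""
    return (tp + tn) / (tp + fp + tn + fn)

# Example with hypothetical counts for a 1000-image meal sequence.
print(f"{identification_accuracy(tp=610, fp=95, tn=200, fn=95):.1%}")  # 81.0%
```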


2019 ◽  
Author(s):  
Stephanie Van Asbroeck ◽  
Christophe Matthys

Background: In the domain of dietary assessment, memory-based techniques such as food frequency questionnaires and 24-hour recalls have drawn increasing criticism. One alternative is logging pictures of consumed food and running an automatic image recognition analysis that reports the type and amount of food in each picture. However, it is currently unknown how well commercial image recognition platforms perform and whether they could indeed be used for dietary assessment.
Objective: This is a comparative performance study of commercial image recognition platforms.
Methods: A variety of foods and beverages were photographed in a range of standardized settings. All pictures (n=185) were uploaded to the selected recognition platforms (n=7), and the estimates were saved. Accuracy was determined, along with the completeness of the estimate for multiple-component dishes.
Results: Top-1 accuracies ranged from 63% for the application programming interface (API) of the Calorie Mama app to 9% for the Google Vision API. None of the platforms was capable of estimating the amount of food. These results demonstrate that some platforms perform poorly while others perform decently.
Conclusions: Important obstacles to the accurate estimation of food quantity must be overcome before these commercial platforms can serve as a real alternative to traditional dietary assessment methods.
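The top-1 figures quoted above are the standard metric: the fraction of photographs for which the first label returned by a platform matches the ground-truth food. A minimal sketch of that comparison, with hypothetical predictions and labels (the actual platform APIs and their response formats are not shown here):

```python
# Illustrative sketch of the top-1 accuracy metric used to compare platforms.
# Predictions and labels are hypothetical placeholders.
from typing import Dict, List

def top1_accuracy(predictions: Dict[str, List[str]], truth: Dict[str, str]) -> float:
    """Share of pictures whose first returned label matches the ground-truth food."""
    hits = sum(
        1 for pic, labels in predictions.items()
        if labels and labels[0].lower() == truth[pic].lower()
    )
    return hits / len(truth)

# Toy example for one platform.
preds = {"img_001.jpg": ["banana", "fruit"], "img_002.jpg": ["coffee cup"]}
gold = {"img_001.jpg": "banana", "img_002.jpg": "espresso"}
print(f"{top1_accuracy(preds, gold):.0%}")  # 50%
```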


2020 ◽  
Vol 78 (11) ◽  
pp. 885-900 ◽  
Author(s):  
Birdem Amoutzopoulos ◽  
Polly Page ◽  
Caireen Roberts ◽  
Mark Roe ◽  
Janet Cade ◽  
...  

Abstract
Context: Overestimation or underestimation of portion size leads to measurement error during dietary assessment.
Objective: To identify portion size estimation elements (PSEEs), evaluate their relative efficacy in relation to dietary assessment, and assess the quality of studies validating PSEEs.
Data Selection and Extraction: Electronic databases, internet sites, and cross-references of published records were searched, generating 16 801 initial records, from which 334 records were reviewed and 542 PSEEs were identified, comprising 5% 1-dimensional tools (e.g., food guides), 46% 2-dimensional tools (e.g., photographic atlases), and 49% 3-dimensional tools (e.g., household utensils). Out of 334 studies, 21 validated a PSEE (compared the PSEE to actual food amounts) and 13 compared PSEEs with other PSEEs.
Conclusion: Quality assessment showed that only a few validation studies were of high quality. According to the findings of the validation and comparison studies, food image–based PSEEs were more accurate than food models and household utensils. Key factors to consider when selecting a PSEE include the efficiency of the PSEE and its applicability to the targeted settings and populations.

