Use of Different Food Image Recognition Platforms in Dietary Assessment: Comparison Study

10.2196/15602 ◽  
2020 ◽  
Vol 4 (12) ◽  
pp. e15602
Author(s):  
Stephanie Van Asbroeck ◽  
Christophe Matthys

Background In the domain of dietary assessment, there has been increasing criticism of memory-based techniques such as food frequency questionnaires or 24-hour recalls. One alternative is logging pictures of consumed food followed by an automatic image recognition analysis that provides information on the type and amount of food in the picture. However, it is currently unknown how well commercial image recognition platforms perform and whether they could indeed be used for dietary assessment. Objective This is a comparative performance study of commercial image recognition platforms. Methods A variety of foods and beverages were photographed in a range of standardized settings. All pictures (n=185) were uploaded to the selected recognition platforms (n=7), and the estimates were saved. Accuracy was determined along with totality of the estimate in the case of multiple-component dishes. Results Top-1 accuracies ranged from 63% for the application programming interface (API) of the Calorie Mama app to 9% for the Google Vision API. None of the platforms were capable of estimating the amount of food. These results demonstrate that certain platforms perform poorly while others perform decently. Conclusions Important obstacles to the accurate estimation of food quantity need to be overcome before these commercial platforms can be used as a real alternative to traditional dietary assessment methods.
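The top-1 accuracy measure reported above, the fraction of pictures whose highest-ranked label matches the true food, can be sketched as follows; the function, image identifiers, and labels are illustrative, not taken from the study:

```python
def top1_accuracy(predictions, ground_truth):
    """Fraction of images whose top-ranked label matches the true label.

    predictions: dict mapping image id -> ranked list of candidate labels
    ground_truth: dict mapping image id -> true label
    """
    correct = sum(
        1 for img, labels in predictions.items()
        if labels and labels[0] == ground_truth[img]
    )
    return correct / len(predictions)

# Hypothetical platform output for three photographed items
preds = {
    "img1": ["apple", "tomato"],
    "img2": ["coffee", "tea"],
    "img3": ["rice", "pasta"],
}
truth = {"img1": "apple", "img2": "tea", "img3": "rice"}
print(top1_accuracy(preds, truth))  # 2 of 3 top-ranked labels correct
```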


2021 ◽  
Vol 2 (3) ◽  
pp. 1-17
Author(s):  
Sri Kalyan Yarlagadda ◽  
Daniel Mas Montserrat ◽  
David Güera ◽  
Carol J. Boushey ◽  
Deborah A. Kerr ◽  
...  

Advances in image-based dietary assessment methods have allowed nutrition professionals and researchers to improve the accuracy of dietary assessment, where images of consumed food are captured using smartphones or wearable devices. These images are then analyzed using computer vision methods to estimate the energy and nutrient content of the foods. Food image segmentation, which determines the regions in an image where foods are located, plays an important role in this process. Current methods are data-dependent and thus cannot generalize well across different food types. To address this problem, we propose a class-agnostic food image segmentation method. Our method uses a pair of eating scene images, one captured before eating starts and one after eating is completed. Using information from both the before and after images, we can segment food items by finding the salient missing objects, without any prior information about the food class. We model a paradigm of top-down saliency that guides the attention of the human visual system based on the task of finding the salient missing objects in a pair of images. Our method was validated on food images collected from a dietary study and showed promising results.
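The paper's saliency model is considerably more involved, but the core idea of locating regions that change between the before-eating and after-eating images can be illustrated with a crude pixel-difference sketch; the array shapes, threshold, and function name are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def missing_object_mask(before, after, threshold=30):
    """Crude stand-in for salient-missing-object detection: flag
    pixels whose intensity changed strongly between the
    before-eating and after-eating images."""
    diff = np.abs(before.astype(np.int32) - after.astype(np.int32))
    return diff > threshold  # boolean mask of candidate food regions

# Toy 4x4 grayscale "scenes": a bright 2x2 food item disappears after eating
before = np.zeros((4, 4), dtype=np.uint8)
before[1:3, 1:3] = 200                     # food present
after = np.zeros((4, 4), dtype=np.uint8)   # food gone

mask = missing_object_mask(before, after)
print(mask.sum())  # 4 pixels flagged as the missing food region
```

In practice the images would first need registration and illumination correction, which is part of what makes the saliency formulation in the paper necessary.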



2021 ◽  
Author(s):  
Sapna Yadav ◽  
Satish Chand

The rapid growth of deep learning has made convolutional neural networks deeper and more complex in pursuit of higher accuracy. However, many day-to-day recognition tasks must be performed on platforms with limited computational resources. One such application is food image recognition, which is very helpful for an individual's health monitoring, dietary assessment, nutrition analysis, etc. This task needs a small convolutional neural network-based engine that can compute quickly and accurately. MobileNetV2, being simple and small in size, can be incorporated easily into small end devices. In this paper, MobileNetV2 and a support vector machine are used to classify food images. Simulation results show that features extracted from the Conv_1, out_relu, and Conv_1_bn layers of MobileNetV2, classified using a support vector machine, achieved classification accuracies of 84.0%, 87.27%, and 83.60%, respectively. Because of its fewer parameters, smaller size, and shorter training time, MobileNetV2 is an excellent choice for real-life recognition tasks.
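The pipeline described, deep features fed into an SVM classifier, can be sketched as below. Since the paper's extraction code is not given, the MobileNetV2 layer activations are stood in by synthetic feature vectors; the class names and feature dimensions are illustrative only:

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in for features extracted from a MobileNetV2 layer
# (e.g. out_relu); in practice these would be pooled activations
# computed for each food photograph.
rng = np.random.default_rng(0)
features_a = rng.normal(loc=0.0, scale=0.5, size=(20, 8))  # class "rice"
features_b = rng.normal(loc=3.0, scale=0.5, size=(20, 8))  # class "noodles"
X = np.vstack([features_a, features_b])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear")  # linear SVM on the deep features
clf.fit(X, y)
print(clf.score(X, y))  # well-separated toy classes -> near-perfect accuracy
```

The design choice of freezing the convolutional network and training only a light SVM on top is what keeps training time short on constrained devices.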



2018 ◽  
pp. 1-10 ◽  
Author(s):  
Simon Mezgec ◽  
Tome Eftimov ◽  
Tamara Bucher ◽  
Barbara Koroušić Seljak

Objective The present study tested the combination of an established and validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. Design The methodology combines fake-food image recognition using deep learning with food matching and standardization based on natural language processing. The former is specific in that it uses a single deep learning network to perform both segmentation and classification at the pixel level of the image. To assess its performance, measures based on standard pixel accuracy and Intersection over Union were applied. Food matching first describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. Results The final accuracy of the deep learning model, trained on fake-food images acquired from 124 study participants and covering fifty-five food classes, was 92.18%, while the food matching was performed with a classification accuracy of 93%. Conclusions The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
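The two segmentation measures named above, pixel accuracy and Intersection over Union, are standard and can be computed from a predicted and a ground-truth label map; a minimal sketch with illustrative toy masks:

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == gt).mean())

def intersection_over_union(pred, gt, cls):
    """IoU for one class: overlap of the two class masks over their union."""
    p, g = (pred == cls), (gt == cls)
    union = np.logical_or(p, g).sum()
    if union == 0:
        return 1.0  # class absent from both maps
    return float(np.logical_and(p, g).sum() / union)

# Toy 2x4 segmentation maps with classes 0 (background) and 1 (food)
gt = np.array([[0, 1, 1, 0],
               [0, 1, 1, 0]])
pred = np.array([[0, 1, 1, 1],
                 [0, 1, 0, 0]])
print(pixel_accuracy(pred, gt))              # 6 of 8 pixels agree: 0.75
print(intersection_over_union(pred, gt, 1))  # 3 overlap / 5 union: 0.6
```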



2018 ◽  
Vol 9 (1) ◽  
pp. 24-31
Author(s):  
Rudianto Rudianto ◽  
Eko Budi Setiawan

The availability of application programming interfaces (APIs) for third-party applications on Android devices makes it possible for Android devices to monitor one another. This capability was used to create an application that helps parents supervise their children through the Android devices they own. In this study, a feature was added to classify image content on the child's Android device with respect to negative content; for this, the researchers used the Clarifai API. The result of this research is a system that can report the image files stored on the target smartphone and delete them, receive browser history reports and open the visited pages directly from the application, and receive reports of the child's location and contact the child directly through the application. The application works well on Android Lollipop (API Level 22). Index Terms—Application Programming Interface (API), Monitoring, Negative Content, Children, Parent.


