Food Recognition
Recently Published Documents

Total documents: 179 (five years: 92)
H-index: 20 (five years: 5)

2022, Vol 8
Author(s): Zhongkui Wang, Shinichi Hirai, Sadao Kawamura

Despite developments in robotics and automation technologies, several challenges must be addressed to meet the high demand for automating manufacturing processes in the food industry. In our opinion, these challenges can be classified as: the development of practical, low-cost robotic end-effectors that can cope with the large variation among food products; recognition of food products and materials in 3D scenarios; and a better understanding of fundamental information about food products, including food categorization and physical properties, from the viewpoint of robotic handling. In this review, we first introduce the challenges in robotic food handling and then highlight advances in robotic end-effectors, food recognition, and fundamental information about food products related to robotic food handling. Finally, future research directions and opportunities are discussed based on an analysis of the challenges and state-of-the-art developments.


2021, Vol 2021, pp. 1-13
Author(s): Ying Wang, Jianbo Wu, Hui Deng, Xianghui Zeng

With the development of machine learning, deep learning, as one of its branches, has been applied in many fields such as image recognition, image segmentation, and video segmentation. In recent years, deep learning has also gradually been applied to food recognition. However, food recognition scenes are highly complex and varied, and both the accuracy and the speed of recognition remain unsatisfactory. This paper addresses these problems and proposes a food image recognition method based on neural networks. Combining Tiny-YOLO with a Siamese (twin) network, the method introduces a two-stage learning scheme called YOLO-SIMM and designs two versions, YOLO-SiamV1 and YOLO-SiamV2. Experiments show that the method achieves moderate recognition accuracy; however, it requires no manual labeling, which gives it good prospects for practical adoption. In addition, a method for detecting and recognizing foreign bodies in food is proposed, which separates foreign bodies from food using threshold segmentation. Experimental results show that this method can effectively identify desiccant as foreign matter and achieves the desired effect.
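The foreign-body step above rests on plain threshold segmentation. Below is a minimal Python sketch of that idea using OpenCV; the threshold value, kernel size, and minimum-area cutoff are illustrative assumptions rather than parameters reported by the paper.

```python
import cv2
import numpy as np

def segment_foreign_bodies(image_path, thresh_value=200, min_area=50):
    """Flag bright foreign-body candidates (e.g. a desiccant packet) against the
    food background using global thresholding; all parameters are illustrative."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)

    # Global threshold: pixels brighter than thresh_value become foreground.
    _, mask = cv2.threshold(gray, thresh_value, 255, cv2.THRESH_BINARY)

    # Morphological opening removes small speckle noise from the mask.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Keep only connected regions large enough to be real objects.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return mask, boxes
```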


Healthcare, 2021, Vol 9 (12), pp. 1676
Author(s): Ghalib Ahmed Tahir, Chu Kiong Loo

Dietary studies have shown that dietary problems such as obesity are associated with other chronic diseases, including hypertension, irregular blood sugar levels, and an increased risk of heart attack. The primary causes of these problems are poor lifestyle choices and unhealthy dietary habits, which can be managed using interactive mHealth apps. However, traditional dietary monitoring systems based on manual food logging suffer from imprecision, underreporting, time consumption, and low adherence. Recent dietary monitoring systems tackle these challenges by assessing dietary intake automatically with machine learning methods. This survey discusses the best-performing methodologies developed so far for automatic food recognition and volume estimation. First, the paper presents the rationale for visual-based food recognition methods. The core of the study is then the presentation, discussion, and evaluation of these methods on popular food image databases. In this context, the study also discusses the mobile applications that implement these methods for automatic food logging. Our findings indicate that around 66.7% of the surveyed studies use visual features from deep neural networks for food recognition. Similarly, all surveyed studies employ a variant of convolutional neural networks (CNNs) for ingredient recognition, reflecting recent research interest. Finally, the survey ends with a discussion of potential applications of food image analysis, existing research gaps, and open issues in this research area. Learning from unlabeled image datasets in an unsupervised manner, catastrophic forgetting during continual learning, and improving model transparency using explainable AI are potential areas of interest for future studies.
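Since most of the surveyed systems build on transfer learning with a pretrained CNN backbone, the sketch below illustrates that general pattern in Keras; the MobileNetV2 backbone, the 101-class head, and the training settings are assumptions chosen for illustration, not details of any particular surveyed study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 101  # e.g. the Food-101 benchmark; purely illustrative

# ImageNet-pretrained backbone used as a frozen feature extractor.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
backbone.trainable = False

# Lightweight classification head fine-tuned on the food dataset.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed to exist
```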


2021
Author(s): Analyn N. Yumang, Dave Emilson S. Banguilan, Clark Kent S. Veneracion

2021
Author(s): Yuita Arum Sari, Sigit Adinugroho, Jaya Mahar Maligan, Ersya Nadia Candra, Fitri Utaminingrum, ...

2021
Author(s): Thuan Trong Nguyen, Thuan Q. Nguyen, Dung Vo, Vi Nguyen, Ngoc Ho, ...

Author(s): Abdulnaser Fakhrou, Jayakanth Kunhoth, Somaya Al Maadeed

People with blindness or low vision use mobile assistive tools for various applications such as object recognition and text recognition. Most of the available applications focus on recognizing generic objects and have not addressed the recognition of food dishes and fruit varieties. In this paper, we propose a smartphone-based system for recognizing food dishes as well as fruits for children with visual impairments. The smartphone application utilizes a trained deep CNN model to recognize the food item from real-time images. Furthermore, we develop a new deep convolutional neural network (CNN) model for food recognition based on the fusion of two CNN architectures. The new deep CNN model is built using an ensemble learning approach and is trained on a customized food recognition dataset consisting of 29 varieties of food dishes and fruits. Moreover, we analyze the performance of multiple state-of-the-art deep CNN models for food recognition using the transfer learning approach. The ensemble model performed better than the state-of-the-art CNN models and achieved a food recognition accuracy of 95.55% on the customized food dataset. In addition, the proposed deep CNN model is evaluated on two publicly available food datasets to demonstrate its efficacy for food recognition tasks.
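A minimal sketch of the kind of two-backbone feature fusion described above, assuming Keras with ImageNet-pretrained models; the specific backbones (MobileNetV2 and DenseNet121) and the single dense head are hypothetical stand-ins, since the abstract does not name the fused architectures, and only the 29-class output follows the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 29  # food dishes and fruit varieties, as in the customized dataset

def build_fused_model(input_shape=(224, 224, 3)):
    """Concatenate pooled features from two pretrained backbones into one classifier.
    Backbone choice is an assumption; per-backbone preprocessing is omitted for brevity."""
    inputs = layers.Input(shape=input_shape)

    # Two ImageNet-pretrained feature extractors, each returning a pooled embedding.
    backbone_a = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg")
    backbone_b = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg")

    # Fuse the two embeddings by concatenation, then classify.
    fused = layers.Concatenate()([backbone_a(inputs), backbone_b(inputs)])
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(fused)
    return Model(inputs, outputs)

model = build_fused_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```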

