Food Image Recognition and Food Safety Detection Method Based on Deep Learning

2021, Vol. 2021, pp. 1-13
Author(s): Ying Wang, Jianbo Wu, Hui Deng, Xianghui Zeng

With the development of machine learning, deep learning, as one of its branches, has been applied in many fields such as image recognition, image segmentation, and video segmentation. In recent years, deep learning has also been gradually applied to food recognition. Food recognition, however, is a highly complex task, and the accuracy and speed of existing methods remain unsatisfactory. This paper addresses these problems and proposes a food image recognition method based on neural networks. Combining Tiny-YOLO with a Siamese (twin) network, the method introduces a two-stage learning scheme, YOLO-SIMM, and designs two versions, YOLO-SiamV1 and YOLO-SiamV2. Experiments show that the method achieves moderate recognition accuracy; however, it requires no manual annotation and therefore has good prospects for practical popularization and application. In addition, a method for detecting and recognizing foreign bodies in food is proposed, which effectively separates foreign bodies from food using threshold segmentation. Experimental results show that the method can effectively distinguish desiccant from foreign matter and achieves the desired effect.
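As an illustration of the foreign-body step, the sketch below shows a generic threshold-segmentation pipeline in OpenCV; the file name, the use of Otsu thresholding, and the minimum-area filter are assumptions for illustration, not the parameters published in the paper.

```python
# Minimal sketch of a generic threshold-segmentation pipeline (OpenCV).
# The file name, Otsu thresholding, and the minimum-area filter are
# illustrative assumptions, not the parameters published in the paper.
import cv2
import numpy as np

def segment_foreign_bodies(image_path: str, min_area: int = 50):
    """Return bounding boxes of candidate foreign bodies in a food image."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu's method chooses a global threshold separating objects from background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Morphological opening suppresses small speckle noise before contour extraction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

print(segment_foreign_bodies("food_sample.jpg"))  # e.g. [(x, y, w, h), ...]
```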

2021, Vol. 2083 (4), pp. 042007
Author(s): Xiaowen Liu, Juncheng Lei

Abstract Image recognition technology mainly comprises image feature extraction and classification. Feature extraction is the key step, as it largely determines recognition performance. Deep learning builds a hierarchical model structure, loosely analogous to the human brain, that extracts features from the data layer by layer; applying deep learning to image recognition can further improve its accuracy. Based on the idea of clustering, this article establishes a multi-component Gaussian mixture model for engineering image information in the RGB color space through offline learning with the expectation-maximization algorithm, obtaining a multi-cluster mixture representation of the engineering image information. A sparse Gaussian machine-learning model in the YCrCb color space is then used to quickly learn the distribution of engineering images online, and an engineering image recognizer based on multi-color-space information is designed.
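The offline clustering step can be pictured with the following minimal sketch, which fits a Gaussian mixture model to RGB pixel values with the EM algorithm via scikit-learn; the component count and the image file are assumptions, and the paper's sparse online YCrCb model is not reproduced here.

```python
# Minimal sketch of the offline clustering step, assuming scikit-learn:
# a Gaussian mixture fitted with expectation-maximization over RGB pixels.
# The component count and the image file name are illustrative assumptions.
import numpy as np
from PIL import Image
from sklearn.mixture import GaussianMixture

def fit_rgb_mixture(image_path: str, n_components: int = 5):
    """Fit a GMM (via EM) to the RGB pixels of one engineering image."""
    pixels = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float64).reshape(-1, 3)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(pixels)  # EM runs inside fit/fit_predict
    return gmm, labels

gmm, labels = fit_rgb_mixture("engineering_image.jpg")
print(gmm.means_)           # one RGB centre per mixture component
print(np.bincount(labels))  # pixel count per cluster
```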


2020, Vol. 37 (9), pp. 1661-1668
Author(s): Min Wang, Shudao Zhou, Zhong Yang, Zhanhua Liu

Abstract Conventional classification methods extract features based on human experience, and each stage is handled independently, a form of "shallow learning." As a result, the range of cloud categories to which such methods can be applied is limited. In this paper, we propose a new convolutional neural network (CNN) with deep learning ability, called CloudA, for ground-based cloud image recognition. We use the Singapore Whole-Sky Imaging Categories (SWIMCAT) sample library and a total-sky sample library to train and test CloudA. In particular, we visualize the cloud features captured by CloudA using TensorBoard, and these features help us understand the process of ground-based cloud classification. We compare this method with other commonly used methods to explore the feasibility of using CloudA to classify ground-based cloud images, and evaluation over a large number of experiments shows that the average accuracy of the method is approximately 98.63% for ground-based cloud classification.
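For orientation, the sketch below shows a small Keras CNN of this kind trained with a TensorBoard callback so that metrics and learned features can be inspected; the layer sizes, the 125x125 input, and the five-class output are assumptions, not the published CloudA architecture.

```python
# Minimal sketch of a small cloud-classification CNN with TensorBoard logging.
# Layer sizes, the 125x125 input, and the five-class output are assumptions,
# not the published CloudA architecture.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(125, 125, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),  # one unit per cloud category
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# TensorBoard records metrics and weight histograms for visual inspection.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs/clouda", histogram_freq=1)
# model.fit(train_ds, validation_data=val_ds, epochs=20, callbacks=[tensorboard_cb])
```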


2020
Author(s): Dongshen Ji, Yanzhong Zhao, Zhujun Zhang, Qianchuan Zhao

In view of the large demand for COVID-19 (novel coronavirus pneumonia) image recognition samples and the unsatisfactory recognition accuracy of existing approaches, this paper proposes a COVID-19-positive image recognition method based on small-sample learning. First, the CT images are preprocessed and converted into the formats required for transfer learning. Second, small-sample image enhancement and expansion are applied to the converted images, such as shear transformation, random rotation, and translation. Then, multiple transfer-learning models are used to extract features, which are subsequently fused. Finally, the model is adjusted by fine-tuning and trained to obtain the experimental results. The results show that our method achieves excellent recognition performance on COVID-19 images, even with only a small number of CT samples.
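A minimal sketch of such a pipeline is given below: augment a small CT set with shear, rotation, and translation, extract features with two pretrained backbones, fuse them, and train a small classifier head before fine-tuning. The choice of VGG16 and ResNet50 and all hyperparameters are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of the described pipeline: augment a small CT set, extract
# features with two pretrained backbones, fuse them, and train a classifier
# head. The backbones (VGG16, ResNet50) and all hyperparameters are
# illustrative assumptions, not the authors' exact configuration.
import tensorflow as tf

# Small-sample augmentation: shear, random rotation, and translation.
augment = tf.keras.preprocessing.image.ImageDataGenerator(
    shear_range=0.2, rotation_range=15, width_shift_range=0.1, height_shift_range=0.1)

inputs = tf.keras.Input(shape=(224, 224, 3))
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet", pooling="avg")
res = tf.keras.applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")
vgg.trainable = False  # freeze backbones first; selectively unfreeze later for fine-tuning
res.trainable = False

fused = tf.keras.layers.Concatenate()([vgg(inputs), res(inputs)])  # feature fusion
x = tf.keras.layers.Dense(256, activation="relu")(fused)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)        # COVID-positive vs. negative
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(augment.flow_from_directory("ct_images/", target_size=(224, 224),
#                                       class_mode="binary", batch_size=8), epochs=30)
```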


Author(s): Meng Xiao, Haibo Yi

According to surveys, offline examinations are still the main examination method in universities and in primary and secondary schools. Grading offline examinations is time-consuming and, because manual grading is subjective, error-prone. To address these challenges and improve the efficiency of offline grading, we apply deep learning techniques to the grading task. First, we propose an image processing method for English letters. Second, we propose an image recognition method for English letters based on deep learning. Third, we propose a lightweight grading framework. Based on these designs, we build an intelligent grading system based on deep learning. We implement the system, and the results show that it can perform batch grading efficiently. Moreover, compared with related designs, the proposed system is more flexible and intelligent.
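As a rough sketch of the recognition component, the code below trains a small CNN over 28x28 grayscale letter images (EMNIST-style) and compares its predictions with an answer key for batch grading; the input format, 26-class output, and the grade_answers helper are assumptions, not the authors' published design.

```python
# Rough sketch of the recognition component: a CNN over 28x28 grayscale
# letter images (EMNIST-style) whose predictions are compared with an answer
# key. The input format, 26-class output, and grade_answers helper are
# assumptions, not the authors' published design.
import numpy as np
import tensorflow as tf

letter_model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(26, activation="softmax"),  # one class per English letter
])
letter_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

def grade_answers(answer_images: np.ndarray, answer_key: list) -> list:
    """Batch grading: predict one letter per answer image and compare with the key."""
    preds = letter_model.predict(answer_images).argmax(axis=1)
    return [chr(ord("A") + int(p)) == expected for p, expected in zip(preds, answer_key)]
```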


2017, Vol. 116, pp. 612-620
Author(s): Stanley Giovany, Andre Putra, Agus S Hariawan, Lili A Wulandhari

Diversity, 2020, Vol. 12 (1), pp. 29
Author(s): Alina Raphael, Zvy Dubinsky, David Iluz, Nathan S. Netanyahu

In this review we present the developments in the field, point out their current limitations, and outline their timelines and unique potential. To do so, we introduce the methods used in each of the advances in the application of deep learning (DL) to coral research between 2016 and 2018. DL has the unique capability of streamlining the description, analysis, and monitoring of coral reefs, saving time and achieving higher reliability and accuracy than error-prone human performance. Coral reefs are the most diverse and complex of marine ecosystems, undergoing a severe decline worldwide resulting from the adverse synergistic influences of global climate change, ocean acidification, and seawater warming, exacerbated by anthropogenic eutrophication and pollution. DL extends concepts originating from machine learning by joining several multilayered neural networks. Machine learning refers to algorithms that automatically detect patterns in data; in the case of corals, these data are underwater photographic images. Based on "learned" patterns, such programs can recognize new images. The novelty of DL lies in its use of state-of-the-art computerized image analysis technologies and its fully automated methodology for dealing with large image data sets. Automated image recognition refers to technologies that automatically identify and detect objects or attributes in a digital video or image; image recognition classifies data into selected categories out of many. We show that neural network methods are already reliable in distinguishing corals from other benthos and non-coral organisms. Automated recognition of live coral cover is a powerful indicator of reef response to slow and transient changes in the environment. By improving automated recognition of coral species, DL methods can already detect declines in coral diversity due to natural and anthropogenic stressors, and diversity indicators can document the effectiveness of reef bioremediation initiatives. We explore the current applications of deep learning to coral and benthic image classification by discussing the most recent studies conducted by researchers. We review the developments in the field, point out their current limitations, and outline their timelines and unique potential. We also discuss a few future research directions in the field of deep learning. Future needs include the age detection of single species, in order to track trends in their population recruitment, decline, and recovery. Fine resolution, at the polyp level, is still to be developed in order to allow separation of species with similar macroscopic features; that refinement of DL will enable such comparisons and their analysis. We conclude that future, more refined automatic identification will allow comparison between reefs and tracking of long-term changes in species diversity. The hitherto unused addition of intraspecific coral color parameters will allow inclusion of physiological coral responses to environmental conditions and changes thereof. The core aim of this review was to underscore the strength and reliability of the DL approach for documenting coral reef features, based on an evaluation of the currently available published uses of this method. We expect that this review will encourage researchers from the computer vision and marine science communities to collaborate on similar long-term joint ventures.
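As a concrete example of the kind of classifier the review surveys, the sketch below fine-tunes a pretrained backbone to label benthic image patches as coral versus non-coral; the MobileNetV2 backbone, 224x224 patch size, and binary class set are assumptions for illustration, not a method from any specific reviewed study.

```python
# Illustrative sketch of the kind of classifier the review surveys: a
# pretrained backbone fine-tuned to label benthic image patches as coral
# vs. non-coral. The MobileNetV2 backbone, 224x224 patch size, and binary
# class set are assumptions for illustration.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # keep ImageNet features; train only the new head

coral_model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # coral (1) vs. other benthos (0)
])
coral_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# coral_model.fit(patch_dataset, epochs=10)  # patch_dataset: labelled underwater image patches
```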

