Pretrained Convolutional Neural Networks Perform Well in a Challenging Test Case: Identification of Plant Bugs (Hemiptera: Miridae) Using a Small Number of Training Images

2021 ◽  
Vol 5 (2) ◽  
Author(s):  
Alexander Knyshov ◽  
Samantha Hoang ◽  
Christiane Weirauch

Abstract

Automated insect identification systems have been explored for more than two decades but have only recently started to take advantage of powerful and versatile convolutional neural networks (CNNs). While typical CNN applications still require large training image datasets with hundreds of images per taxon, pretrained CNNs recently have been shown to be highly accurate, while being trained on much smaller datasets. We here evaluate the performance of CNN-based machine learning approaches in identifying three curated species-level dorsal habitus datasets for Miridae, the plant bugs. Miridae are of economic importance, but species-level identifications are challenging and typically rely on information other than dorsal habitus (e.g., host plants, locality, genitalic structures). Each dataset contained 2–6 species and 126–246 images in total, with a mean of only 32 images per species for the most difficult dataset. We find that closely related species of plant bugs can be identified with 80–90% accuracy based on their dorsal habitus alone. The pretrained CNN performed 10–20% better than a taxon expert who had access to the same dorsal habitus images. We find that feature extraction protocols (selection and combination of blocks of CNN layers) impact identification accuracy much more than the classifying mechanism (support vector machine and deep neural network classifiers). While our network has much lower accuracy on photographs of live insects (62%), overall results confirm that a pretrained CNN can be straightforwardly adapted to collection-based images for a new taxonomic group and successfully extract relevant features to classify insect species.
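The pipeline described above pools features from blocks of a pretrained CNN and feeds them to a classifier. As a minimal sketch of that second stage, the plain-Python example below assumes each image has already been reduced to a feature vector by the pretrained network, and swaps the paper's SVM for a nearest-centroid classifier for brevity; the species labels and feature values are hypothetical.

```python
import math

def centroid(vectors):
    """Mean feature vector for one species (stand-in for fitting a classifier)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(feature, centroids):
    """Assign a feature vector to the species with the closest centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(feature, centroids[label]))

# Hypothetical pooled CNN features for two plant bug species.
train = {
    "species_A": [[0.9, 0.1, 0.2], [1.0, 0.0, 0.3]],
    "species_B": [[0.1, 0.8, 0.7], [0.2, 0.9, 0.6]],
}
centroids = {label: vecs for label, vecs in
             ((label, centroid(vecs)) for label, vecs in train.items())}
pred = nearest_centroid([0.95, 0.05, 0.25], centroids)
```

With a real dataset, the feature vectors would come from pooled activations of selected CNN blocks, which the abstract identifies as the step that matters most for accuracy.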

2019 ◽  
Vol 277 ◽  
pp. 02024 ◽  
Author(s):  
Lincan Li ◽  
Tong Jia ◽  
Tianqi Meng ◽  
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in intravascular optical coherence tomography (IVOCT) images of cardiovascular vessels. Firstly, a Fully Convolutional Network (FCN) named U-Net is used to segment the original IVOCT cardiovascular images. We experimented with different threshold values to find the best threshold for removing noise and background from the original images. Secondly, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six-scale anchors (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in cardiovascular vulnerable plaque diagnosis and then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the Recall rate, Precision rate, IoU (Intersection-over-Union) rate, and Total score are 0.94, 0.885, 0.913, and 0.913, respectively, higher than those of the first-place team in the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches that use one-scale or three-scale anchors. These results demonstrate the superior performance of our proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
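The six-scale anchors and the IoU metric mentioned above can be illustrated with a short plain-Python sketch. This is not the authors' implementation: the centre coordinates are hypothetical, and real Faster R-CNN anchor generation also varies aspect ratios; only square boxes at the six stated scales are shown here.

```python
def make_anchors(cx, cy, scales):
    """Generate square anchor boxes (x1, y1, x2, y2) centred on (cx, cy)."""
    return [(cx - s / 2, cy - s / 2, cx + s / 2, cy + s / 2) for s in scales]

def iou(a, b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# The six anchor scales from the abstract, at a hypothetical image centre.
anchors = make_anchors(128, 128, scales=[12, 16, 32, 64, 128, 256])
```

Anchors at several scales let the region proposal stage match plaques of very different apparent sizes, which is the motivation for using six scales rather than one or three.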


Information ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 361
Author(s):  
Handan Hou ◽  
Wei Shi ◽  
Jinyan Guo ◽  
Zhe Zhang ◽  
Weizheng Shen ◽  
...  

Individual identification of dairy cows based on computer vision technology shows strong performance and practicality. Accurate identification of each dairy cow is a prerequisite for applying artificial intelligence technology in smart animal husbandry. Like the back and the head, the rump of each dairy cow carries many features that are important for individual recognition. In this paper, we propose a non-contact cow rump identification method based on convolutional neural networks. First, rump image sequences of the cows were collected while they were feeding. Then, an object detection model was applied to locate the cow rump in each frame. Finally, a fine-tuned convolutional neural network model was trained to identify cow rumps. An image dataset containing 195 different cows was created to validate the proposed method. The method achieved an identification accuracy of 99.76%, outperforming other related methods, and the model is light enough to be deployed on an edge-computing device, indicating good potential for actual cow husbandry production environments.
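Because the method classifies the detected rump in every frame of a sequence, the per-frame predictions must be combined into one identity per cow. The abstract does not state how this aggregation is done; a majority vote, shown below with hypothetical cow IDs, is one plausible scheme.

```python
from collections import Counter

def identify_cow(frame_predictions):
    """Aggregate per-frame rump classifications into one cow ID by majority vote."""
    votes = Counter(frame_predictions)
    cow_id, _ = votes.most_common(1)[0]
    return cow_id

# Hypothetical per-frame classifier outputs for one feeding sequence.
frames = ["cow_042", "cow_042", "cow_017", "cow_042"]
```

Voting over a sequence makes the final identity robust to occasional misclassified frames, which matters in an uncontrolled barn environment.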


2020 ◽  
Vol 10 (14) ◽  
pp. 4916
Author(s):  
Syna Sreng ◽  
Noppadol Maneerat ◽  
Kazuhiko Hamamoto ◽  
Khin Yadanar Win

Glaucoma is a major global cause of blindness. Because its symptoms appear only once the disease reaches an advanced stage, screening for glaucoma in its early stages is challenging. Therefore, regular glaucoma screening is essential and recommended. However, eye screening is currently subjective, time-consuming, and labor-intensive, and there are insufficient eye specialists available. We present an automatic two-stage glaucoma screening system to reduce the workload of ophthalmologists. The system first segments the optic disc region using a DeepLabv3+ architecture in which the encoder module is substituted with multiple deep convolutional neural networks. For the classification stage, we used pretrained deep convolutional neural networks in three configurations: (1) transfer learning, (2) learning feature descriptors with a support vector machine, and (3) an ensemble of the methods in (1) and (2). We evaluated our methods on five available datasets containing 2787 retinal images and found that the best option for optic disc segmentation is a combination of DeepLabv3+ and MobileNet. For glaucoma classification, the ensemble performed better than the conventional methods on the RIM-ONE, ORIGA, DRISHTI-GS1, and ACRIMA datasets, with accuracies of 97.37%, 90.00%, 86.84%, and 99.53% and Area Under Curve (AUC) values of 100%, 92.06%, 91.67%, and 99.98%, respectively, and performed comparably with CUHKMED, the top team in the REFUGE challenge, on the REFUGE dataset with an accuracy of 95.59% and an AUC of 95.10%.
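Configuration (3) above combines the transfer-learned CNN with the CNN-features-plus-SVM classifier. The abstract does not specify the combination rule; the sketch below assumes a simple soft-voting ensemble that averages per-class probabilities, with the model outputs and class labels being hypothetical.

```python
def ensemble_average(prob_lists):
    """Soft-voting ensemble: average per-class probabilities across models."""
    n = len(prob_lists)
    k = len(prob_lists[0])
    return [sum(p[i] for p in prob_lists) / n for i in range(k)]

def predict(probs, labels=("normal", "glaucoma")):
    """Pick the class with the highest averaged probability."""
    return labels[max(range(len(probs)), key=probs.__getitem__)]

# Hypothetical outputs: transfer-learned CNN vs. CNN-features + SVM.
cnn_probs = [0.30, 0.70]
svm_probs = [0.45, 0.55]
avg = ensemble_average([cnn_probs, svm_probs])
```

Averaging lets the two classifiers compensate for each other's errors, which is consistent with the ensemble outperforming either method alone on most of the datasets.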


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Kai Kiwitz ◽  
Christian Schiffer ◽  
Hannah Spitzer ◽  
Timo Dickscheid ◽  
Katrin Amunts

Abstract

The distribution of neurons in the cortex (cytoarchitecture) differs between cortical areas and constitutes the basis for structural maps of the human brain. Deep learning approaches provide a promising alternative for overcoming the throughput limitations of currently used cytoarchitectonic mapping methods, but typically offer little insight into the extent to which they follow cytoarchitectonic principles. We therefore investigated to what extent the internal structure of deep convolutional neural networks trained for cytoarchitectonic brain mapping reflects traditional cytoarchitectonic features, and compared them to features of the current grey level index (GLI) profile approach. The networks consisted of a 10-block deep convolutional architecture trained to segment the primary and secondary visual cortex. Filter activations of the networks served to analyse resemblances to traditional cytoarchitectonic features and to enable comparisons with the GLI profile approach. Our analysis revealed resemblances to cellular, laminar, and cortical area-related cytoarchitectonic features. The networks learned filter activations that reflect the distinct cytoarchitecture of the segmented cortical areas, with special regard to their laminar organization, and compared well against statistical criteria of the GLI profile approach. These results confirm an incorporation of relevant cytoarchitectonic features in the deep convolutional neural networks and mark them as valid support for high-throughput cytoarchitectonic mapping workflows.
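The GLI profile approach used as the comparison baseline measures, at each cortical depth, the areal fraction of a section occupied by cell bodies. A heavily simplified sketch is shown below on a hypothetical binarised patch (1 = cell body pixel, 0 = neuropil), with rows standing in for depths from the pial surface to the white matter; real GLI profiles are sampled along curved traverses, not straight rows.

```python
def gli_profile(patch):
    """Grey level index profile: cell-pixel fraction per cortical depth row."""
    return [sum(row) / len(row) for row in patch]

# Hypothetical binarised cortical patch, pial surface (top) to white matter.
patch = [
    [0, 0, 1, 0],  # sparse, layer I-like row
    [1, 1, 1, 0],  # dense, layer II-like row
    [1, 0, 1, 1],
    [0, 1, 0, 0],
]
profile = gli_profile(patch)
```

Laminar structure shows up as peaks and troughs in such profiles, which is the kind of feature the filter activations of the trained networks were compared against.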


2020 ◽  
Vol 143 ◽  
pp. 02015
Author(s):  
Li Zherui ◽  
Cai Huiwen

Sea ice classification is one of the important tasks of sea ice monitoring. Accurate extraction of sea ice types is of great significance for assessing sea ice conditions and ensuring smooth navigation and safe marine operations. Sentinel-2 is an optical satellite launched by the European Space Agency; its high spatial resolution and wide imaging swath provide powerful support for sea ice monitoring. However, traditional supervised classification methods struggle to achieve fine-grained results for classes with small training samples. To solve this problem, this paper proposed a sea ice extraction method based on deep learning and applied it to Liaodong Bay in the Bohai Sea, China. A convolutional neural network was used to extract and classify features of the Sentinel-2 imagery. The results showed that the overall accuracy of the algorithm was 85.79%, a significant improvement over traditional algorithms such as the minimum distance, maximum likelihood, Mahalanobis distance, and support vector machine methods. The method proposed in this paper, which combines convolutional neural networks and high-resolution multispectral data, provides a new idea for remote sensing monitoring of sea ice.
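The overall accuracy figure reported above is, in the usual remote-sensing sense, the fraction of classified pixels that agree with a reference map. A minimal sketch on hypothetical flattened class maps (0 = water, 1 = grey ice, 2 = white ice; the class scheme is illustrative, not taken from the paper):

```python
def overall_accuracy(predicted, reference):
    """Fraction of pixels whose predicted sea ice class matches the reference."""
    correct = sum(p == r for p, r in zip(predicted, reference))
    return correct / len(reference)

# Hypothetical flattened class maps for a small image tile.
pred = [0, 1, 1, 2, 2, 0, 1]
ref  = [0, 1, 2, 2, 2, 0, 0]
acc = overall_accuracy(pred, ref)
```

The same computation, applied to the full classified scene against ground-truth labels, yields the 85.79% figure the paper compares against the traditional classifiers.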

