Web-SpikeSegNet: deep learning framework for recognition and counting of spikes from visual images of wheat plants

IEEE Access
2021
pp. 1-1
Author(s):
Tanuj Misra
Alka Arora
Sudeep Marwaha
Ranjeet Ranjan Jha
Mrinmoy Ray
...

Abstract Background: Computer vision with deep learning is emerging as a major approach for non-invasive and non-destructive plant phenotyping. Spikes are the reproductive organs of wheat plants. Detecting spikes helps in identifying heading, while the count and area of spikes are useful for estimating the yield of the wheat plant. Hence, detection and counting of spikes, the grain-bearing organs, are of great importance in phenomics studies of large sets of germplasms. Results: In the present study, we developed an online platform, "Web-SpikeSegNet", based on a deep-learning framework for spike detection and counting from visual images of wheat plants. The platform implements the "SpikeSegNet" approach developed by Misra et al. (2020), which has proved to be an effective and robust approach for spike detection and counting. The architecture of Web-SpikeSegNet consists of two layers. The first layer, the client-side interface layer, handles end users' requests and the corresponding responses, while the second layer, the server-side application layer, consists of the spike detection and counting modules. The backbone of the spike detection module is a deep encoder-decoder network with an hourglass module for spike segmentation. The spike counting module applies the "Analyze Particles" function of ImageJ to count the number of spikes. To evaluate the performance of Web-SpikeSegNet, we acquired visual images of wheat plants using the LemnaTec imaging platform installed at the Nanaji Deshmukh Plant Phenomics Centre, ICAR-Indian Agricultural Research Institute, New Delhi, India. Satisfactory segmentation performance was obtained: Type I error 0.00159, Type II error 0.0586, accuracy 99.65%, precision 99.59%, and F1 score 99.65%. Conclusions: In this study, freely available web-based software was developed by combining digital image analysis and deep learning techniques. As spike detection and counting in wheat phenotyping are closely related to yield, Web-SpikeSegNet is a significant step forward in wheat phenotyping and will be useful to researchers and students working in the domain.
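
The counting step described above maps naturally onto connected-component analysis. The following is a minimal Python sketch of an "Analyze Particles"-style count on a binary spike mask; it is an illustration, not the authors' implementation, and the minimum-area filter (min_area) is a hypothetical noise threshold.

import numpy as np
from scipy import ndimage

def count_spikes(mask, min_area=50):
    """Count spike regions in a binary mask (1 = spike pixel, 0 = background)."""
    labeled, n_regions = ndimage.label(mask)        # connected-component labeling
    areas = np.bincount(labeled.ravel())[1:]        # pixel area of each labeled region (skip background)
    keep = areas >= min_area                        # discard tiny speckles left by segmentation noise
    return int(keep.sum()), areas[keep]

# Toy example with two rectangular "spikes"
mask = np.zeros((100, 100), dtype=np.uint8)
mask[10:30, 10:20] = 1
mask[50:90, 60:75] = 1
n_spikes, spike_areas = count_spikes(mask, min_area=20)
print(n_spikes, spike_areas)                        # -> 2 [200 600]

The per-region areas returned by such a routine correspond to the spike-area quantity the abstract links to yield estimation.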



2020
Author(s):
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID-19 pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and testing sets at a ratio of 8:2. On the test dataset, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an external testing dataset extracted from low-quality chest CT images of COVID-19 pneumonia embedded in recently published papers. RESULTS Of the four pre-trained FCONet models, the one based on ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
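
For readers unfamiliar with the transfer-learning setup outlined above, the sketch below builds a ResNet50-backboned three-class classifier (COVID-19 pneumonia, other pneumonia, non-pneumonia) in Keras. The input size, classification head, and training settings are illustrative assumptions, not the published FCONet configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_fconet_like(n_classes=3, input_shape=(224, 224, 3)):
    # ImageNet-pretrained backbone; VGG16, InceptionV3, or Xception could be swapped in the same way
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False                          # freeze pretrained weights for transfer learning
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dropout(0.5)(x)                          # assumed regularization, not taken from the paper
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(backbone.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fconet_like()
# The abstract's 8:2 train/test split would be applied to the CT images before training, e.g.:
# model.fit(train_images, train_labels, validation_split=0.1, epochs=10)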


2021
Vol 12 (1)
Author(s):
Xiaodong Wang
Ying Chen
Yunshu Gao
Huiqing Zhang
Zehui Guan
...

Abstract N-staging is a determining factor for prognostic assessment and decision-making in stage-based cancer therapeutic strategies. Visual inspection of whole slides of intact lymph nodes is currently the main method used by pathologists to calculate the number of metastatic lymph nodes (MLNs). Moreover, even at the same N stage, patient outcomes vary dramatically. Here, we propose a deep-learning framework for analyzing lymph node whole-slide images (WSIs) to identify lymph nodes and tumor regions, and then to uncover the tumor-area-to-MLN-area ratio (T/MLN). After training, our model's tumor detection performance was comparable to that of experienced pathologists, and it achieved similar performance on two independent gastric cancer validation cohorts. Further, we demonstrate that T/MLN is an interpretable independent prognostic factor. These findings indicate that deep-learning models could assist not only pathologists in detecting lymph nodes with metastases but also oncologists in exploring new prognostic factors, especially those that are difficult to calculate manually.
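
As an illustration of the T/MLN statistic, the sketch below computes a tumor-area-to-MLN-area ratio from binary masks produced by a segmentation model. Restricting tumor pixels to MLN regions, and the mask inputs themselves, are assumptions for illustration; the authors' WSI tiling pipeline and exact definition are not reproduced here.

import numpy as np

def tumor_to_mln_ratio(tumor_mask, mln_mask):
    """Both arguments are binary arrays of the same shape (1 = positive pixel)."""
    mln_area = int(mln_mask.sum())
    if mln_area == 0:
        return 0.0                                   # no metastatic lymph node detected
    tumor_area = int((tumor_mask & mln_mask).sum())  # tumor pixels lying inside MLN regions
    return tumor_area / mln_area

# Toy example: tumor occupies a quarter of the MLN region
mln = np.zeros((100, 100), dtype=np.uint8)
mln[20:60, 20:60] = 1
tumor = np.zeros_like(mln)
tumor[20:40, 20:40] = 1
print(tumor_to_mln_ratio(tumor, mln))                # -> 0.25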

