Web-SpikeSegNet: Deep Learning Framework for Recognition and Counting of Spikes from Visual Images of Wheat Plants

Author(s):  
Tanuj Misra ◽  
Alka Arora ◽  
Sudeep Marwaha ◽  
Ranjeet Ranjan Jha ◽  
Mrinmoy Ray ◽  
...  

Abstract Background: Computer vision with deep learning is emerging as a major approach to non-invasive, non-destructive plant phenotyping. Spikes are the reproductive organs of wheat plants. Detecting spikes helps identify heading, and counting spikes and measuring their area are useful for estimating the yield of a wheat plant. Hence, the detection and counting of spikes, the grain-bearing organs, are of great importance in phenomics studies of large sets of germplasm. Results: In the present study, we developed an online platform, "Web-SpikeSegNet", based on a deep-learning framework for spike detection and counting from visual images of wheat plants. The platform implements the "SpikeSegNet" approach of Misra et al. (2020), which has proven to be an effective and robust approach for spike detection and counting. The architecture of Web-SpikeSegNet consists of two layers. The first layer, the Client-Side Interface Layer, manages end users' requests and the corresponding responses, while the second layer, the Server-Side Application Layer, comprises the spike detection and counting modules. The backbone of the spike detection module is a deep encoder-decoder network with an hourglass network for spike segmentation. The spike counting module applies the "Analyze Particles" function of ImageJ to count the number of spikes. To evaluate the performance of Web-SpikeSegNet, we acquired visual images of wheat plants using the LemnaTec imaging platform installed at the Nanaji Deshmukh Plant Phenomics Centre, ICAR-Indian Agricultural Research Institute, New Delhi, India. Satisfactory segmentation performance was obtained: Type I error 0.00159, Type II error 0.0586, accuracy 99.65%, precision 99.59%, and F1 score 99.65%. Conclusions: In this study, freely available web-based software was developed combining digital image analysis and deep learning techniques.
As spike detection and counting in wheat phenotyping are closely related to yield, Web-SpikeSegNet is a significant step forward in the field of wheat phenotyping and will be useful to researchers and students working in the domain.
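The counting step the abstract describes — binary spike segmentation followed by ImageJ's particle analysis — amounts to counting connected foreground regions in the mask. A minimal pure-Python sketch of that idea (the toy mask below is illustrative and is not output of the actual SpikeSegNet network):

```python
# Counting "spikes" in a binary segmentation mask by 4-connected
# component labelling, analogous to the "Analyze Particles" step.
from collections import deque

def count_components(mask):
    """Count 4-connected foreground components in a 2D binary mask."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new component found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the component
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

toy_mask = [
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1],
]
print(count_components(toy_mask))  # three separate "spikes" -> 3
```

ImageJ's particle analyzer additionally filters by particle size and reports per-particle area; this sketch keeps only the counting step.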

2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Sambuddha Ghosal ◽  
Bangyou Zheng ◽  
Scott C. Chapman ◽  
Andries B. Potgieter ◽  
David R. Jordan ◽  
...  

The yield of cereal crops such as sorghum (Sorghum bicolor L. Moench) depends on the distribution of crop heads in varying branching arrangements. Therefore, counting the number of heads per unit area is critical for plant breeders to correlate with genotypic variation in a specific breeding field. However, measuring such phenotypic traits manually is an extremely labor-intensive process that suffers from low efficiency and human error. Moreover, the process is almost infeasible for large-scale breeding plantations or experiments. Machine-learning approaches such as deep convolutional neural network (CNN) based object detectors are promising tools for efficient object detection and counting. However, a significant limitation of such deep-learning approaches is that they typically require a massive amount of hand-labeled images for training, which is itself a tedious process. Here, we propose an active-learning-inspired, weakly supervised deep learning framework for sorghum head detection and counting from UAV-based images. We demonstrate that it is possible to significantly reduce human labeling effort without compromising final model performance (R² between human and machine counts is 0.88) by using a semitrained CNN model (i.e., one trained with limited labeled data) to perform synthetic annotation. In addition, we visualize key features that the network learns. This improves trustworthiness by enabling users to better understand and trust the decisions that the trained deep learning model makes.
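The headline agreement figure here is a coefficient of determination between human and machine head counts. A small sketch of that computation (the count values below are made up for illustration, not the study's data):

```python
# R^2 (coefficient of determination) between reference counts and
# model counts, as used to report human-vs-machine agreement.
def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    return 1.0 - ss_res / ss_tot

human = [12, 18, 25, 30, 41, 55]    # hypothetical per-plot human counts
machine = [13, 17, 24, 33, 40, 53]  # hypothetical per-plot machine counts
print(round(r_squared(human, machine), 3))
```

An R² of 1.0 means the machine counts reproduce the human counts exactly; the reported 0.88 indicates strong but imperfect agreement.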


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Tanuj Misra ◽  
Alka Arora ◽  
Sudeep Marwaha ◽  
Ranjeet Ranjan Jha ◽  
Mrinmoy Ray ◽  
...  

Materials ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 1957
Author(s):  
Yang Jin

Accurate and automatic railhead inspection is crucial for the operational safety of railway systems. Deep learning on visual images is effective in the automatic detection of railhead defects, but either intensive data requirements or ignoring defect sizes reduce its applicability. This paper developed a machine learning framework based on wavelet scattering networks (WSNs) and neural networks (NNs) for identifying railhead defects. WSNs are functionally equivalent to deep convolutional neural networks while containing no parameters, thus suitable for non-intensive datasets. NNs can restore location and size information. The publicly available rail surface discrete defects (RSDD) datasets were analyzed, including 67 Type-I railhead images acquired from express tracks and 128 Type-II images captured from ordinary/heavy haul tracks. The ultimate validation accuracy reached 99.80% and 99.44%, respectively. WSNs can extract implicit signal features, and the support vector machine classifier can improve the learning accuracy of NNs by over 6%. Three criteria, namely the precision, recall, and F-measure, were calculated for comparison with the literature. At the pixel level, the developed approach achieved three criteria of around 90%, outperforming former methods. At the defect level, the recall rates reached 100%, indicating all labeled defects were identified. The precision rates were around 75%, affected by the insignificant misidentified speckles (smaller than 20 pixels). Nonetheless, the developed learning framework was effective in identifying railhead defects.
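The pixel-level comparison above rests on three standard quantities: precision, recall, and F-measure over defect pixels. A minimal sketch of those definitions (the flattened toy masks below are illustrative, not RSDD data):

```python
# Pixel-level precision, recall and F-measure for binary defect masks.
def precision_recall_f(pred, truth):
    tp = sum(1 for p, t in zip(pred, truth) if p and t)        # defect pixels found
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)    # false alarms
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)    # missed defect pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

pred  = [1, 1, 0, 1, 0, 0, 1, 0]  # predicted defect pixels (flattened mask)
truth = [1, 1, 0, 0, 1, 0, 1, 0]  # labelled defect pixels
p, r, f = precision_recall_f(pred, truth)
```

At the defect level the same formulas apply with whole defects (connected regions) in place of pixels, which is why small misidentified speckles depress precision without affecting recall.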


2005 ◽  
Vol 52 (4) ◽  
pp. 369-379
Author(s):  
B. G. Shivakumar ◽  
B. N. Mishra ◽  
R. C. Gautam

A field experiment on a greengram-wheat cropping sequence was carried out under limited water supply conditions in 1997-98 and 1998-99 at the farm of the Indian Agricultural Research Institute, New Delhi. The greengram was sown either on flat beds or on broad beds 2 m in width, divided by furrows, with 0, 30 and 60 kg P2O5/ha. After the harvest of greengram pods, wheat was grown in the same plots, either with the greengram stover removed or with the stover incorporated along with 0, 40, 80 and 120 kg N/ha applied to wheat. The grain yield of greengram was higher when sown on broad beds with furrows compared to flat bed sowing, and the application of 30 or 60 kg P2O5/ha resulted in significantly higher grain yields compared to no phosphorus application. The combination of broad bed and furrows with phosphorus fertilization was found to be ideal for achieving higher productivity in greengram. The land configuration treatments had no impact on the productivity of wheat. The application of phosphorus to the preceding crop had a significant residual effect on the grain yield of wheat. The incorporation of greengram stover also significantly increased the grain yield of wheat. The increasing levels of N increased the grain yield of wheat significantly up to 80 kg/ha. The combination of greengram stover incorporation and 80 kg N/ha applied to wheat significantly increased the grain yield. Further, there was a significant interaction effect between the phosphorus applied to the preceding crop and N levels given to wheat on the grain yield of wheat.


2020 ◽  
Author(s):  
Raniyaharini R ◽  
Madhumitha K ◽  
Mishaa S ◽  
Virajaravi R

2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and testing sets at a ratio of 8:2. On the test dataset, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an external testing dataset extracted from low-quality chest CT images of COVID-19 pneumonia embedded in recently published papers.
RESULTS Of the four pre-trained FCONet models, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet may be the best model, as it outperformed the FCONet models based on VGG16, Xception, and InceptionV3.
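The three headline metrics reported for ResNet50 (sensitivity, specificity, accuracy) all derive from a 2x2 confusion matrix. A minimal sketch of those definitions; the counts below are hypothetical, chosen only so the resulting percentages resemble the reported ones, and are not the study's actual confusion matrix:

```python
# Sensitivity, specificity and accuracy from binary classification counts.
def binary_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)                  # true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    accuracy = (tp + tn) / (tp + fn + fp + tn)    # overall fraction correct
    return sensitivity, specificity, accuracy

# Hypothetical counts: 239 COVID-19 cases (1 missed), 560 non-COVID cases.
sens, spec, acc = binary_metrics(tp=238, fn=1, fp=0, tn=560)
```

High specificity with slightly lower sensitivity, as reported, corresponds to a model that produces essentially no false alarms while missing a very small fraction of true cases.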

