Leaf to panicle ratio (LPR): a new physiological trait indicative of source and sink relation in japonica rice based on deep learning

2020
Author(s): Zongfeng Yang, Shang Gao, Feng Xiao, Ganghua Li, Yangfeng Ding, ...

Abstract
Background: Identification and characterization of new traits with a sound physiological foundation is essential for crop breeding and management. Deep learning has been widely used in image data analysis to explore spatial and temporal information on crop growth and development, thus strengthening the identification of physiological traits. This study aims to develop a novel trait indicative of the source and sink relation in japonica rice based on deep learning.
Results: We applied a deep learning approach to accurately segment leaves and panicles and subsequently developed the GvCrop procedure to calculate the leaf to panicle ratio (LPR) of rice populations during grain filling. Images of the training dataset were captured in field experiments, with large variations in camera shooting angle, the elevation and azimuth angles of the sun, rice genotype, and plant phenological stage. Accurately labeled by manually annotating all panicle and leaf regions, the resulting dataset was used to train FPN-Mask (Feature Pyramid Network Mask) models, each consisting of a backbone network and a task-specific sub-network. The model with the highest accuracy was then selected to study variations in LPR among 192 rice germplasms and among agronomic practices. Despite the challenging field conditions, the FPN-Mask models achieved high detection accuracy, with a pixel accuracy of 0.99 for panicles and 0.98 for leaves. The calculated LPRs showed large spatial and temporal variations as well as genotypic differences.
Conclusion: Deep learning techniques can achieve high accuracy in simultaneously detecting panicle and leaf data in complex rice field images. The proposed FPN-Mask model is applicable for detecting and quantifying crop performance under field conditions. The newly identified LPR trait should provide a high-throughput protocol for breeders to select superior rice cultivars, as well as for agronomists to precisely manage field crops toward a good balance of source and sink.
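The abstract does not spell out how LPR is computed from the segmentation output; a minimal sketch, assuming LPR is simply the ratio of segmented leaf pixel area to panicle pixel area in each image:

```python
def leaf_to_panicle_ratio(leaf_mask, panicle_mask):
    """Leaf to panicle ratio (LPR) from binary segmentation masks.

    Each mask is a 2D grid of 0/1 values, e.g. a per-class output of
    a segmentation model thresholded to a binary map.
    """
    leaf_area = sum(sum(row) for row in leaf_mask)
    panicle_area = sum(sum(row) for row in panicle_mask)
    if panicle_area == 0:
        raise ValueError("no panicle pixels detected in this image")
    return leaf_area / panicle_area
```

Tracking this ratio per plot and per date would yield the spatial and temporal LPR variation the study reports.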





2021
Author(s): Sung Hyun Noh, Chansik An, Dain Kim, Seung Hyun Lee, Min-Yung Chang, ...

Abstract
Background: A computer algorithm that automatically detects sacroiliac joint abnormalities on plain radiographs would help radiologists avoid missing sacroiliitis. This study aimed to develop and validate a deep learning model to detect and diagnose sacroiliitis on plain radiographs in young patients with low back pain.
Methods: This Institutional Review Board-approved retrospective study included 478 and 468 plain radiographs from 241 and 433 young (< 40 years) patients who complained of low back pain with and without ankylosing spondylitis, respectively. They were randomly split into training and test datasets at a ratio of 8:2. Radiologists reviewed the images, labeled the coordinates of a bounding box, and determined the presence or absence of sacroiliitis for each sacroiliac joint. We fine-tuned and optimized the EfficientDet-D4 object detection model, pre-trained on the COCO 2017 dataset, on the training dataset and validated the final model on the test dataset.
Results: The mean average precision, an evaluation metric for object detection accuracy, was 0.918 at 0.5 intersection over union. In the diagnosis of sacroiliitis, the area under the curve, sensitivity, specificity, accuracy, and F1-score were 0.932 (95% confidence interval, 0.903–0.961), 96.9% (92.9–99.0), 86.8% (81.5–90.9), 91.1% (87.7–93.7), and 90.2% (85.0–93.9), respectively.
Conclusions: EfficientDet, a deep learning-based object detection algorithm, could be used to automatically diagnose sacroiliitis on plain radiographs.
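The mAP figure above is evaluated at an intersection-over-union (IoU) threshold of 0.5. A minimal sketch of the IoU test that decides whether a predicted bounding box matches a ground-truth box (the `(x1, y1, x2, y2)` box format is an assumption, not stated in the abstract):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred_box, gt_box, threshold=0.5):
    """A detection counts as correct when its IoU meets the threshold."""
    return iou(pred_box, gt_box) >= threshold
```

Average precision is then accumulated over the ranked detections that pass this test.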


Author(s): Dima M. Alalharith, Hajar M. Alharthi, Wejdan M. Alghamdi, Yasmine M. Alsenbel, Nida Aslam, ...

Computer-based technologies play a central role in dentistry, as they offer many methods for diagnosing and detecting diseases such as periodontitis. The current study aimed to apply and evaluate state-of-the-art object detection and recognition techniques and deep learning algorithms for the automatic detection of periodontal disease in orthodontic patients using intraoral images. In this study, a total of 134 intraoral images were divided into a training dataset (n = 107 [80%]) and a test dataset (n = 27 [20%]). Two Faster Region-based Convolutional Neural Network (Faster R-CNN) models using the ResNet-50 Convolutional Neural Network (CNN) were developed. The first model detects the teeth to locate the region of interest (ROI), while the second model detects gingival inflammation. The detection accuracy, precision, recall, and mean average precision (mAP) were calculated to verify the significance of the proposed models. The teeth detection model achieved an accuracy, precision, recall, and mAP of 100%, 100%, 51.85%, and 100%, respectively. The inflammation detection model achieved an accuracy, precision, recall, and mAP of 77.12%, 88.02%, 41.75%, and 68.19%, respectively. This study demonstrated the viability of deep learning models for the detection and diagnosis of gingivitis in intraoral images, highlighting their potential usability in dentistry to aid in reducing the severity of periodontal disease globally through preemptive, non-invasive diagnosis.
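The 100% precision alongside 51.85% recall for the teeth model means every detection was correct while roughly half of the true teeth were missed. A minimal sketch of the two metrics from confusion counts (the counts in the test are illustrative, not from the paper):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from a detector's true-positive,
    false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

For example, 14 correct detections with 0 false alarms and 13 missed objects reproduce exactly that precision/recall pattern.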


Author(s): Mohd Najib Ahmad, Abdul Rashid Mohamed Shariff, Ishak Aris, Izhal Abdul Halin, Ramle Moslim

The bagworm Metisa plana is one of the major leaf-eating insect pests that attack oil palm in Peninsular Malaysia. Without treatment, a moderate attack may cause a 43% yield loss. In 2020, the economic loss due to bagworm attacks was recorded at around RM 180 million. Given this scenario, it is necessary to closely monitor bagworm outbreaks in infested areas. Accurate and precise manual data collection is debatable because of human error. Hence, the objective of this study is to design and develop a machine vision system that incorporates an image processing algorithm according to its functional modes. In this regard, a device, the Automated Bagworm Counter or Oto-BaC™, is the first in the world to be developed with embedded software based on graphics processing unit computation and a TensorFlow/Theano library setup for the trained dataset. The technology is based on deep learning with the Faster Region-based Convolutional Neural Network (Faster R-CNN) technique for real-time object detection. The Oto-BaC™ uses an ordinary camera. Using self-developed deep learning algorithms, motion tracking and false-colour analysis were applied to detect and count the living and dead larvae and pupae per frond, corresponding to three major size groups. In the first trial, the Oto-BaC™ yielded low detection accuracies for the living and dead G1 larvae (47.0% & 71.7%), G2 larvae (39.1% & 50.0%), and G3 pupae (30.1% & 20.9%). After improvements to the training dataset, the next field trial yielded 40.5% and 7.0% for the living and dead G1 larvae, 40.1% and 29.2% for the living and dead G2 larvae, and 47.7% and 54.6% for the living and dead pupae. 
This ground-based device is a pioneer in the oil palm industry: it reduces human error during censuses while promoting precision agriculture practice.
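The counting step can be sketched as a tally of detections by growth stage and living/dead status; the detection record format below (`stage` and `status` keys) is hypothetical, standing in for the device's real output:

```python
from collections import Counter

def bagworm_census(detections):
    """Tally detected bagworms per (stage, status) class,
    e.g. ('G1', 'living'), as would be reported per frond."""
    return Counter((d["stage"], d["status"]) for d in detections)
```

The per-class tallies could then be compared against a manual census to compute the per-group detection accuracies reported above.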


2021
Vol 8
Author(s): Mohamed Elgendi, Muhammad Umer Nasir, Qunfeng Tang, David Smith, John-Paul Grenier, ...

Chest X-ray imaging used for early detection and screening of COVID-19 pneumonia is both accessible worldwide and affordable compared to other non-invasive technologies. Additionally, deep learning methods have recently shown remarkable results in detecting COVID-19 on chest X-rays, making it a promising screening technology for COVID-19. Deep learning relies on a large amount of data to avoid overfitting: an overfitted model may fit the original training dataset perfectly yet fail to achieve high accuracy on a new testing dataset. In the image processing field, an image augmentation step (i.e., adding more training data) is often used to reduce overfitting on the training dataset and improve prediction accuracy on the testing dataset. In this paper, we examined the impact of geometric augmentations as implemented in several recent publications for detecting COVID-19. We compared the performance of 17 deep learning algorithms with and without different geometric augmentations, and empirically examined the influence of augmentation with respect to detection accuracy, dataset diversity, augmentation methodology, and network size. Contrary to expectation, our results show that removing the recently used geometric augmentation steps actually improved the Matthews correlation coefficient (MCC) of the 17 models. The MCC without augmentation (MCC = 0.51) outperformed four recent geometric augmentations (MCC = 0.47 for Data Augmentation 1, MCC = 0.44 for Data Augmentation 2, MCC = 0.48 for Data Augmentation 3, and MCC = 0.49 for Data Augmentation 4). When we retrained a recently published deep learning model without augmentation on the same dataset, the detection accuracy significantly increased, with a McNemar's χ² statistic of 163.2 and a p-value of 2.23 × 10⁻³⁷. This finding may improve current deep learning algorithms that use geometric augmentations for detecting COVID-19. 
We also provide clinical perspectives on geometric augmentation to consider in the development of a robust COVID-19 X-ray-based detector.
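Both reported statistics are simple functions of confusion-matrix counts; a minimal sketch (the McNemar statistic here is the uncorrected chi-squared form, an assumption since the paper's exact variant is not stated in the abstract):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def mcnemar_statistic(b, c):
    """McNemar's chi-squared statistic from the two discordant cells:
    b = cases model A classified correctly and model B incorrectly,
    c = the reverse."""
    return (b - c) ** 2 / (b + c)
```

MCC ranges from -1 to 1 and, unlike plain accuracy, stays informative on the imbalanced test sets common in COVID-19 screening.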


2021
Author(s): Amran Hossain, Mohammad Tariqul Islam, Ali F. Almutairi

Abstract
Automated classification and detection of brain abnormalities such as tumors in microwave head images is essential for investigating and monitoring disease progression. This paper presents the automatic classification and detection of human brain abnormalities in microwave head images using the deep learning-based YOLOv5 model. YOLOv5 is a fast object detection model with a computationally light architecture and high accuracy. First, backscattered signals are collected from the implemented 3D wideband nine-antenna-array microwave head imaging (MWHI) system, where one antenna operates as a transmitter and the remaining eight antennas operate as receivers. A fabricated tissue-mimicking head phantom with a benign and a malignant tumor as brain abnormalities is used in the MWHI system. Afterwards, the modified delay-multiply-and-sum (M-DMAS) imaging algorithm is applied to the post-processed scattering parameters to reconstruct images of the head region at 640×640 pixels. Three hundred sample images, including benign and malignant tumors at various locations in the head region, are collected with the MWHI system. The images are then preprocessed and augmented to create a final dataset of 3600 images used for training, validation, and testing of the YOLOv5 model: 80% of the images are used for training and 20% for testing, and 20% of the training dataset is in turn held out for validation to avoid overfitting. The classification and detection performance on various datasets is investigated with the YOLOv5s, YOLOv5m, and YOLOv5l variants of YOLOv5, of which the YOLOv5l model showed the best results. 
For the YOLOv5l model, the achieved training accuracy, validation loss, precision, recall, F1-score, training and validation classification losses, and mean average precision (mAP) are 99.84%, 9.38%, 93.20%, 94.80%, 94.01%, 0.004, 0.0133, and 96.20%, respectively, confirming the model's classification and detection accuracy. Finally, a testing dataset with different scenarios is evaluated with the three versions of the YOLOv5 model, and the brain abnormalities are successfully classified and detected together with their locations. Thus, the deep learning model is applicable to the portable MWHI system.
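The nested split described above (80/20 train/test, then 20% of the training portion held out for validation) can be sketched as simple arithmetic, assuming sizes are rounded down:

```python
def split_counts(n_total, test_frac=0.2, val_frac=0.2):
    """Image counts for the nested split: the test set is carved off
    first, then a validation share is taken from the remaining
    training pool."""
    n_test = int(n_total * test_frac)
    n_train_pool = n_total - n_test
    n_val = int(n_train_pool * val_frac)
    n_train = n_train_pool - n_val
    return n_train, n_val, n_test
```

For the 3600-image dataset this works out to 2304 training, 576 validation, and 720 test images.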


Energies
2021
Vol 14 (12), pp. 3650
Author(s): Zhe Yan, Zheng Zhang, Shaoyong Liu

Fault interpretation is an important part of seismic structural interpretation and reservoir characterization. In the conventional approach, faults are detected as reflection discontinuities or abruptions and are manually tracked in post-stack seismic data, which is time-consuming. To improve efficiency, a variety of automatic fault detection methods have been proposed, among which deep learning-based methods have received widespread attention. However, deep learning techniques require a large number of labeled seismic samples as a training dataset. Although the amount of synthetic seismic data can be guaranteed and its labels are accurate, a gap between synthetic and real data remains. To overcome this drawback, we apply a transfer learning strategy to improve the performance of deep learning-based automatic fault detection. We first pre-train a deep neural network with synthetic seismic data and then retrain the network with real seismic samples. We use a random sample consensus (RANSAC) method to obtain real seismic samples and generate the corresponding labels automatically. Three real 3D examples demonstrate that the fault detection accuracy of the pre-trained network models can be greatly improved by retraining the network with a small number of real seismic samples.
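The abstract does not detail its RANSAC variant; as a generic illustration of the idea it relies on — repeatedly fitting a minimal model to random samples and keeping the model with the most inliers — here is a toy 2D line-fitting sketch:

```python
import random

def ransac_line(points, n_iters=200, tol=0.5, seed=0):
    """Minimal RANSAC: fit y = a*x + b to 2D points with outliers.

    Repeatedly fits a line through two randomly chosen points and
    keeps the model with the most inliers (|residual| <= tol).
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical candidate lines
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) <= tol)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

In the paper's setting the consensus set plays the role of reliable real-data samples whose labels can be generated automatically.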


2021
Vol 64 (3), pp. 919-927
Author(s): Dujin Wang, Yizhong Wang, Ming Li, Xinting Yang, Jianwei Wu, ...

Highlights:
- The proposed method detected thrips and whitefly more accurately than previous methods.
- The proposed method demonstrated good robustness to illumination reflections and different pest densities.
- Small pest detection was improved by adding large-scale feature maps and more residual units to a shallow network.
- Machine vision and deep learning created an end-to-end model to detect small pests on sticky traps in field conditions.

Abstract. Pest detection is the basis of precise control in vegetable greenhouses. To improve the detection accuracy and robustness for two common small greenhouse pests (whitefly and thrips), this study proposes a novel small object detection approach based on the YOLOv4 model. Yellow sticky trap (YST) images at the original resolution (2560 × 1920 pixels) were collected using pest monitoring equipment in a greenhouse. The images were then cropped and labeled to create sub-images (416 × 416 pixels) for an experimental dataset. The labeled images used in this study (900 training, 100 validation, and 200 test) are available for comparative studies. To enhance the model's ability to detect small pests, the feature map at the 8-fold downsampling layer in the backbone network was merged with the feature map at the 4-fold downsampling layer to generate a new layer and output a feature map with a size of 104 × 104 pixels. Furthermore, the residual units in the first two residual blocks were enlarged four times to extract more shallow image features and the location information of target pests, to withstand image degradation in the field. The experimental results showed that the mean average precision (mAP) for detection of whitefly and thrips using the proposed approach improved by 8.2% and 3.4% compared with the YOLOv3 and YOLOv4 models, respectively. 
The detection performance decreased slightly as pest density increased in the YST images, but the mAP was still 92.7% on the high-density dataset, indicating that the proposed model is robust over a range of pest densities. Compared with previous similar studies, the proposed method has better potential for monitoring whitefly and thrips using YSTs in field conditions.

Keywords: Deep learning, Greenhouse pest management, Image processing, Pest detection, Small object, YOLOv4.
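Cropping the 2560 × 1920 trap images into 416 × 416 sub-images can be sketched as a grid computation; the abstract does not say how edge remainders or overlap are handled, so this assumes non-overlapping tiles with partial edge tiles dropped:

```python
def tile_coords(width, height, tile=416):
    """Top-left corners of non-overlapping tile-sized crops;
    partial tiles at the right/bottom edges are discarded."""
    return [(x, y)
            for y in range(0, height - tile + 1, tile)
            for x in range(0, width - tile + 1, tile)]
```

Under this assumption a 2560 × 1920 image yields a 6 × 4 grid of 24 sub-images.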

