L2MXception: An Improved Xception Network for Classification of Peach Diseases

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Na Yao ◽  
Fuchuan Ni ◽  
Ziyan Wang ◽  
Jun Luo ◽  
Wing-Kin Sung ◽  
...  

Abstract Background Peach diseases can cause severe yield reduction and quality loss in peach production, so rapid and accurate detection and identification of peach diseases is of great importance. Deep learning has been applied to detect peach diseases from imaging data. However, peach disease image data are difficult to collect and the samples are imbalanced, and popular deep networks perform poorly on this problem. Results This paper proposes an improved Xception network, named L2MXception, which integrates a regularization term combining the L2-norm and the mean. On the collected peach disease image dataset, seven mainstream deep learning models were compared in detail, and an improved loss function with an L2-norm and mean regularization term (L2M Loss) was integrated. Experiments showed that the Xception model with L2M Loss outperformed the current best method for peach disease prediction. Compared to the original Xception model, the validation accuracy of L2MXception reached 93.85%, an increase of 28.48%. Conclusions The proposed L2MXception network may have great potential for early identification of peach diseases.
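Below is a minimal sketch of one plausible reading of the L2M idea: a cross-entropy loss augmented with an L2-norm term and a mean term over the model's weights. The coefficients lambda_l2 and lambda_mean, and the choice to regularize weights rather than features, are assumptions for illustration, not the paper's published formulation.

```python
import torch
import torch.nn as nn

class L2MLoss(nn.Module):
    """Cross-entropy plus an L2-norm term and a mean term over model weights.

    Hypothetical reading of the 'L2M Loss' described in the abstract; the
    coefficients and the decision to regularize weights are assumptions.
    """

    def __init__(self, model, lambda_l2=1e-4, lambda_mean=1e-4):
        super().__init__()
        self.model = model
        self.ce = nn.CrossEntropyLoss()
        self.lambda_l2 = lambda_l2
        self.lambda_mean = lambda_mean

    def forward(self, logits, targets):
        loss = self.ce(logits, targets)
        for p in self.model.parameters():
            loss = loss + self.lambda_l2 * p.pow(2).sum()    # L2-norm term
            loss = loss + self.lambda_mean * p.mean().abs()  # mean term
        return loss
```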


2021 ◽  
Vol 12 ◽  
Author(s):  
Xu Xiao ◽  
Ying Qiao ◽  
Yudi Jiao ◽  
Na Fu ◽  
Wenxian Yang ◽  
...  

Highly multiplexed imaging technology is a powerful tool for understanding the composition and interactions of cells in tumor microenvironments at subcellular resolution, which is crucial for both basic research and clinical applications. Imaging mass cytometry (IMC), a recently introduced multiplex imaging method, can measure up to 100 markers simultaneously in one tissue section by using a high-resolution laser with a mass cytometer. However, due to its high resolution and large number of channels, how to process and interpret IMC image data remains a key challenge to its further application. Accurate and reliable single-cell segmentation is the first and a critical step in processing IMC image data. Unfortunately, existing segmentation pipelines either produce inaccurate cell segmentation results or require manual annotation, which is very time-consuming. Here, we developed Dice-XMBD, a Deep learnIng-based Cell sEgmentation algorithm for tissue multiplexed imaging data. In comparison with other state-of-the-art cell segmentation methods currently used for IMC images, Dice-XMBD efficiently generates more accurate single-cell masks on IMC images produced with different nuclear, membrane, and cytoplasm markers. All code and datasets are available at https://github.com/xmuyulab/Dice-XMBD.
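As an illustration of the kind of preprocessing IMC segmentation pipelines start from, the sketch below collapses a multi-channel IMC stack into composite nuclear and membrane/cytoplasm input channels. The arcsinh transform, the normalization, and the channel index lists are common IMC choices used here as assumptions, not Dice-XMBD's actual preprocessing.

```python
import numpy as np

def prepare_imc_input(stack, nuclear_idx, membrane_idx, cofactor=5.0):
    """Collapse a (channels, H, W) IMC stack into a 2-channel network input."""
    stack = np.arcsinh(stack / cofactor)                  # compress heavy-tailed counts
    nuclear = stack[nuclear_idx].sum(axis=0)              # composite nuclear channel
    membrane = stack[membrane_idx].sum(axis=0)            # composite membrane/cytoplasm channel
    out = np.stack([nuclear, membrane])
    out /= out.max(axis=(1, 2), keepdims=True) + 1e-8     # scale each channel to [0, 1]
    return out

# Example with random data standing in for a real acquisition; the index
# lists are placeholders for a real antibody panel.
fake_stack = np.random.rand(40, 256, 256)
x = prepare_imc_input(fake_stack, nuclear_idx=[0, 1], membrane_idx=[2, 3, 4])
```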


2021 ◽  
Author(s):  
Roberto Augusto Philippi Martins ◽  
Danilo Silva

The lack of labeled data is one of the main obstacles to the development of deep learning models, as they rely on large labeled datasets to achieve high accuracy on complex tasks. Our objective is to evaluate the performance gain from additional unlabeled data when training a deep learning model on medical imaging data. We present a semi-supervised learning algorithm that uses a teacher-student paradigm to leverage unlabeled data in the classification of chest X-ray images. Using our algorithm on the ChestX-ray14 dataset, we achieve a substantial increase in performance with small labeled datasets. With our method, a model achieves an AUROC of 0.822 with only 2% labeled data and 0.865 with 5% labeled data, while a fully supervised method achieves an AUROC of 0.807 with 5% labeled data and only 0.845 with 10%.
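A minimal sketch of one round of a teacher-student scheme of the kind the abstract describes: a teacher trained on the labeled set pseudo-labels confident examples from the unlabeled pool, and a student is trained on labeled plus pseudo-labeled data. The confidence threshold, the multi-label sigmoid setup, and all names are placeholders, not the authors' implementation.

```python
import torch

@torch.no_grad()
def pseudo_label(teacher, unlabeled_loader, threshold=0.9, device="cpu"):
    """Keep only unlabeled images the teacher is confident about."""
    teacher.eval()
    images, labels = [], []
    for x in unlabeled_loader:
        probs = torch.sigmoid(teacher(x.to(device)))        # multi-label chest X-ray setting
        confident = (probs > threshold) | (probs < 1 - threshold)
        keep = confident.all(dim=1)
        images.append(x[keep].cpu())
        labels.append((probs[keep] > 0.5).float().cpu())
    return torch.utils.data.TensorDataset(torch.cat(images), torch.cat(labels))

def train_student(student, labeled_set, pseudo_set, epochs=1, device="cpu"):
    """Train the student on labeled data plus teacher pseudo-labels."""
    data = torch.utils.data.ConcatDataset([labeled_set, pseudo_set])
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    student.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(student(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
```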


2021 ◽  
Vol 11 (16) ◽  
pp. 7412
Author(s):  
Grigorios-Aris Cheimariotis ◽  
Maria Riga ◽  
Kostas Haris ◽  
Konstantinos Toutouzas ◽  
Aggelos K. Katsaggelos ◽  
...  

Intravascular Optical Coherence Tomography (IVOCT) images provide important insight into every aspect of atherosclerosis. Specifically, the extent of plaque and its type, which are indicative of the patient's condition, are better assessed with OCT images than with other in vivo modalities. The large amount of imaging data per patient requires automatic methods for rapid results. An effective step towards automatic plaque detection and characterization is the classification of axial lines (A-lines) into normal and various plaque types. In this work, a novel automatic method for A-line classification is proposed. The method employs convolutional neural networks (CNNs) for classification at its core, preceded by two pre-processing steps, arterial wall segmentation and an OCT-specific (depth-resolved) transformation, and followed by a post-processing step based on a majority vote over classifications. The key step is the OCT-specific transformation, which is based on estimating the attenuation coefficient at every pixel of the OCT image. The dataset used for training and testing consisted of 183 images from 33 patients, in which four different plaque types were delineated. The method was evaluated by cross-validation. The mean values of accuracy, sensitivity and specificity were 74.73%, 87.78%, and 61.45%, respectively, when classifying A-lines into plaque and normal. When plaque A-lines were classified into fibrolipidic and fibrocalcific, the overall accuracy was 83.47% for A-lines of OCT-specific transformed images and 74.94% for A-lines of original images. This large improvement in accuracy indicates the advantage of using attenuation coefficients when characterizing plaque types. The proposed automatic deep-learning pipeline constitutes a positive contribution to the accurate classification of A-lines in intravascular OCT images.
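The abstract does not spell out its attenuation estimator; a common depth-resolved choice in the OCT literature computes, for each pixel, the intensity divided by twice the pixel size times the summed intensity below that pixel. The sketch below implements that assumed form for a single A-line.

```python
import numpy as np

def depth_resolved_attenuation(a_line, pixel_size_mm):
    """Per-pixel attenuation coefficient along one A-line.

    Uses the depth-resolved estimator mu[i] ~ I[i] / (2 * dz * sum_{j>i} I[j]);
    whether this matches the exact transformation used in the paper is an
    assumption.
    """
    a_line = np.asarray(a_line, dtype=float)
    tail = np.cumsum(a_line[::-1])[::-1] - a_line          # summed intensity below each pixel
    return a_line / (2.0 * pixel_size_mm * tail + 1e-12)   # epsilon avoids division by zero

# Example on a synthetic exponentially decaying A-line:
z = np.arange(512)
mu = depth_resolved_attenuation(np.exp(-0.01 * z), pixel_size_mm=0.005)
```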


2019 ◽  
Vol 8 (4) ◽  
pp. 11416-11421

Batik is one of the Indonesian cultural heritages recognized by the global community. Indonesian batik has a vast diversity of motifs that illustrate the philosophy of life and the ancestral heritage, and also reflect the origin of the batik itself. Because there are so many batik motifs, problems arise in determining the type of a given batik. Therefore, we need a classification method that can classify various batik motifs automatically based on batik images. The image classification technique most widely used today is deep learning, which has proven its capability to identify images with high accuracy. A widely used architecture for image data analysis is the Convolutional Neural Network (CNN), because it is able to detect and recognize objects in an image. This work proposes to use a CNN with a modified VGG architecture to tackle the classification of batik motifs. Experiments using 2,448 batik images from 5 classes of batik motifs showed that the proposed model achieved an accuracy of 96.30%.
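A minimal sketch, assuming a VGG16 backbone with a reduced classification head for the five motif classes; the paper's exact modifications (head size, dropout, whether weights are pre-trained) are not given here, so these choices are illustrative.

```python
import torch.nn as nn
from torchvision.models import vgg16

def build_batik_classifier(num_classes=5):
    """VGG16 backbone with a smaller classification head for batik motif classes."""
    model = vgg16()                        # randomly initialized backbone
    model.classifier = nn.Sequential(      # replace the 1000-class ImageNet head
        nn.Linear(512 * 7 * 7, 512),
        nn.ReLU(inplace=True),
        nn.Dropout(0.5),
        nn.Linear(512, num_classes),
    )
    return model

model = build_batik_classifier()
```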


2021 ◽  
Vol 15 ◽  
Author(s):  
Laura Tomaz Da Silva ◽  
Nathalia Bianchini Esper ◽  
Duncan D. Ruiz ◽  
Felipe Meneguzzi ◽  
Augusto Buchweitz

Problem: Brain imaging studies of mental health and neurodevelopmental disorders have recently included machine learning approaches to identify patients based solely on their brain activation. The goal is to identify brain-related features that generalize from smaller samples of data to larger ones; in the case of neurodevelopmental disorders, finding these patterns can help understand differences in brain function and development that underpin early signs of risk for developmental dyslexia. The success of machine learning classification algorithms on neurofunctional data has been limited to typically homogeneous data sets of a few dozen participants. More recently, larger brain imaging data sets have allowed deep learning techniques to classify brain states and clinical groups solely from neurofunctional features. Indeed, deep learning techniques can provide helpful tools for classification in healthcare applications, including the classification of structural 3D brain images. The adoption of deep learning approaches allows for incremental improvements in classification performance on larger functional brain imaging data sets, but still lacks diagnostic insight into the underlying brain mechanisms associated with disorders; moreover, a related challenge is providing more clinically relevant explanations from the neural features that inform classification. Methods: We target this challenge by leveraging two network visualization techniques in the convolutional neural network layers responsible for learning high-level features. Using such techniques, we are able to provide meaningful images for expert-backed insights into the condition being classified. We address this challenge using a dataset that includes children diagnosed with developmental dyslexia and children who are typical readers. Results: Our results show accurate classification of developmental dyslexia (94.8%) from brain imaging alone, while providing automatic visualizations of the features involved that match contemporary neuroscientific knowledge (brain regions involved in the reading process for the dyslexic reader group, and brain regions associated with strategic control and attention processes for the typical reader group). Conclusions: Our visual explanations of deep learning models turn accurate yet opaque conclusions from the models into evidence about the condition being studied.
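The two visualization techniques are not named in the abstract; as a generic illustration of how the feature maps of high-level convolutional layers can be exposed for inspection, the sketch below uses a forward hook on a toy 3D CNN. The architecture and the layer choice are assumptions, not the authors' pipeline.

```python
import torch
import torch.nn as nn

def capture_feature_maps(model, layer, x):
    """Run a forward pass and return the activations of one convolutional layer."""
    maps = {}

    def hook(_module, _inputs, output):
        maps["activations"] = output.detach()

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(x)
    handle.remove()
    return maps["activations"]

# Toy 3D CNN standing in for a brain-image classifier:
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 2),
)
acts = capture_feature_maps(model, model[2], torch.randn(1, 1, 32, 32, 32))
print(acts.shape)  # feature maps of the second conv layer: (1, 16, 32, 32, 32)
```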


2020 ◽  
Vol 10 (5) ◽  
pp. 1234-1241
Author(s):  
Yongliang Zhang ◽  
Ling Li ◽  
Jia Gu ◽  
Tiexiang Wen ◽  
Qiang Xu

With the rapid development of deep learning, automatic lesion detection is widely used in clinical screening. In this paper, we use a convolutional neural network (CNN) algorithm to help medical experts detect cervical precancerous lesions during colposcopic screening, especially in the classification of cervical intraepithelial neoplasia (CIN). Firstly, the original image data are classified into six categories: normal, cervical cancer, mild (CIN1), moderate (CIN2), severe (CIN3) and cervicitis, which are further augmented to address the small number of endoscopic image samples and the non-uniform distribution across categories. Then, a CNN-based model is built and trained for multi-classification of the six categories, with several optimization algorithms added to make the training of parameters more effective. On the test dataset, the accuracy of the proposed CNN model is 89.36%, and the area under the receiver operating characteristic (ROC) curve is 0.954. The accuracy is 18%–32% higher than that of traditional learning methods and 9%–20% higher than that of several commonly used deep learning models. At the same number of iterations, the time consumption of the proposed algorithm is only one quarter that of other deep learning models. Our study demonstrates that cervical colposcopic image classification based on artificial intelligence has high clinical applicability and can facilitate the early diagnosis of cervical cancer.
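As an example of the kind of augmentation commonly used to expand small endoscopic image sets, the sketch below builds a torchvision transform pipeline; the specific transforms and parameters are assumptions, not the paper's recipe.

```python
from torchvision import transforms

# Augmentation pipeline for colposcopic images (illustrative parameters only).
train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```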


2019 ◽  
Vol 2019 ◽  
pp. 1-13 ◽  
Author(s):  
Yunong Tian ◽  
Guodong Yang ◽  
Zhe Wang ◽  
En Li ◽  
Zize Liang

Plant disease is one of the primary causes of crop yield reduction. With the development of computer vision and deep learning technology, autonomous detection of plant surface lesions in images collected by optical sensors has become an important research direction for timely crop disease diagnosis. In this paper, an anthracnose lesion detection method based on deep learning is proposed. Firstly, to address the problem of insufficient image data caused by the random occurrence of apple diseases, the Cycle-Consistent Adversarial Network (CycleGAN) deep learning model is used for data augmentation in addition to traditional image augmentation techniques. These methods effectively enrich the diversity of the training data and provide a solid foundation for training the detection model. On the basis of this image data augmentation, the densely connected neural network (DenseNet) is utilized to optimize the lower-resolution feature layers of the YOLO-V3 model. DenseNet greatly improves the utilization of features in the neural network and enhances the detection results of the YOLO-V3 model. Experiments verify that the improved model exceeds Faster R-CNN with VGG16 Net, the original YOLO-V3 model, and three other state-of-the-art networks in detection performance, and that it can realize real-time detection. The proposed method can be well applied to the detection of anthracnose lesions on apple surfaces in orchards.
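To illustrate the dense-connectivity idea behind the DenseNet modification (each layer reusing the concatenation of all earlier feature maps), here is a generic dense block in PyTorch; it is a sketch of the concept, not the paper's exact YOLO-V3 architecture.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense connectivity: each conv sees the concatenation of all earlier outputs."""

    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse all earlier feature maps
            features.append(out)
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=64)
y = block(torch.randn(1, 64, 13, 13))   # e.g. a low-resolution detection feature map
print(y.shape)                          # (1, 64 + 4*32, 13, 13)
```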

