A Novel Method of Maize Leaf Disease Image Identification Based on a Multichannel Convolutional Neural Network

2018, Vol. 61(5), pp. 1461-1474
Author(s): Zhongqi Lin, Shaomin Mu, Aiju Shi, Chao Pang, Xiaoxiao Sun

Abstract. Traditional methods for detecting maize leaf diseases (such as leaf blight, sooty blotch, brown spot, rust, and purple leaf sheath) are typically labor-intensive and highly subjective. With the aim of identifying maize leaf diseases from digital imagery with high accuracy and efficiency, this article proposes a novel multichannel convolutional neural network (MCNN). The MCNN is composed of an input layer, five convolutional layers, three subsampling layers, three fully connected layers, and an output layer. Using a method that imitates human visual behavior in video saliency detection, the first and second subsampling layers are connected directly to the first fully connected layer. In addition, mixed pooling and normalization methods, rectified linear units (ReLU), and dropout are introduced to prevent overfitting and gradient diffusion. The learning process corresponding to the network structure is also illustrated. At present, no large-scale image dataset of maize leaf diseases is available for use as experimental samples. To test the proposed MCNN, 10,820 RGB images covering the five disease types were collected from maize planting areas in Shandong Province, China. The original images could not be used directly in identification experiments because of noise and irrelevant regions, so they were denoised and segmented by homomorphic filtering and region of interest (ROI) segmentation to construct a standard database. A series of experiments on 8 GB graphics processing units (GPUs) showed that the MCNN achieved an average accuracy of 92.31% and high efficiency in identifying maize leaf diseases. The multichannel design and the integration of the different innovations proved helpful in boosting performance.
Keywords: Artificial intelligence, Convolutional neural network, Deep learning, Image classification, Machine learning algorithms, Maize leaf disease.
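The multichannel connection pattern described above can be sketched in a few lines of PyTorch: features from the first and second subsampling layers are flattened and concatenated with the deepest convolutional features before the first fully connected layer. This is only a minimal illustration of the idea; the filter counts, kernel sizes, 224x224 RGB input, and 0.5 dropout rate are assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the multichannel CNN idea (assumed sizes, not the paper's exact ones).
import torch
import torch.nn as nn

class MCNNSketch(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.block1 = nn.Sequential(                      # conv1 + subsampling 1
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4))                              # 224 -> 56
        self.block2 = nn.Sequential(                      # conv2 + subsampling 2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4))                              # 56 -> 14
        self.block3 = nn.Sequential(                      # conv3-conv5 + subsampling 3
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2))                              # 14 -> 7
        feat_dim = 16 * 56 * 56 + 32 * 14 * 14 + 64 * 7 * 7
        self.classifier = nn.Sequential(                  # three fully connected layers
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes))

    def forward(self, x):                                 # x: (N, 3, 224, 224)
        s1 = self.block1(x)
        s2 = self.block2(s1)
        s3 = self.block3(s2)
        # multichannel fusion: the first two subsampling outputs feed the first FC layer
        fused = torch.cat([s1.flatten(1), s2.flatten(1), s3.flatten(1)], dim=1)
        return self.classifier(fused)
```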

2021
Author(s): S. Malliga, P. S. Nandhini, S. V. Kogilavani, R. Jaya Harini, S. Jaya Shree, ...

2019, Vol. 24(3), pp. 220-228
Author(s): Gusti Alfahmi Anwar, Desti Riminarsih

Panthera is a genus of the cat family with four well-known species: the tiger, the jaguar, the leopard, and the lion. The lion has a golden coat with no pattern; the tiger has long stripes; the jaguar has a larger body than the leopard and wider rosettes; while the leopard has a slightly slimmer body than the jaguar and smaller rosettes. In this study, the Panthera genus (tiger, jaguar, leopard, and lion) was classified using a Convolutional Neural Network. The Convolutional Neural Network model used has 1 input layer, 5 convolution layers, and 2 fully connected layers. The dataset consists of images of tigers, jaguars, leopards, and lions. The training data comprise 3,840 images, the validation data 960 images, and the testing data 800 images. The model achieved an accuracy of 92.31% on the training set and 81.88% on the validation set, while evaluation on the testing set yielded 68%. Per-class prediction accuracy, measured by the F1-score on the test set, was 78% for the tiger, 70% for the jaguar, 37% for the leopard, and 74% for the lion. The leopard obtained the lowest accuracy of the four animals, but the result is still better than in previous studies.
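A minimal tf.keras sketch of the architecture described in the abstract (1 input layer, 5 convolution layers, 2 fully connected layers, 4 output classes) might look like the following; the filter counts, kernel sizes, 150x150 input resolution, and optimizer/loss choices are assumptions rather than the study's exact settings.

```python
# Sketch of a 5-conv / 2-FC image classifier for the four Panthera classes (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_panthera_cnn(input_shape=(150, 150, 3), num_classes=4):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),                 # fully connected layer 1
        layers.Dense(num_classes, activation="softmax"),      # fully connected layer 2
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```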


Entropy, 2021, Vol. 23(1), pp. 119
Author(s): Tao Wang, Changhua Lu, Yining Sun, Mei Yang, Chun Liu, ...

Early detection of arrhythmia and effective treatment can prevent deaths caused by cardiovascular disease (CVD). In clinical practice, the diagnosis is made by checking the electrocardiogram (ECG) beat by beat, which is usually time-consuming and laborious. In this paper, we propose an automatic ECG classification method based on the continuous wavelet transform (CWT) and a convolutional neural network (CNN). The CWT is used to decompose ECG signals into their time-frequency components, and the CNN is used to extract features from the 2D scalogram composed of those components. Because the interval between surrounding R peaks (the RR interval) is also useful for diagnosing arrhythmia, four RR-interval features are extracted and combined with the CNN features as input to a fully connected layer for ECG classification. Tested on the MIT-BIH arrhythmia database, our method achieves an overall positive predictive value, sensitivity, F1-score, and accuracy of 70.75%, 67.47%, 68.76%, and 98.74%, respectively. Compared with existing methods, the overall F1-score of our method improves by 4.75~16.85%. Because our method is simple and highly accurate, it can potentially be used as a clinical auxiliary diagnostic tool.
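The feature-fusion step described above can be illustrated with a short sketch: a scalogram is computed from a single ECG beat with the continuous wavelet transform (here via PyWavelets), passed through a small CNN, and the four RR-interval features are concatenated with the CNN features before the fully connected classifier. The scale range, the 'mexh' wavelet, and the layer sizes are assumptions for illustration only.

```python
# Sketch of CWT scalogram extraction plus CNN/RR-interval feature fusion (assumed sizes).
import numpy as np
import pywt
import torch
import torch.nn as nn

def beat_to_scalogram(beat, scales=np.arange(1, 65), wavelet="mexh"):
    """Turn a 1-D ECG beat into a 2-D time-frequency scalogram."""
    coeffs, _ = pywt.cwt(beat, scales, wavelet)
    return np.abs(coeffs).astype(np.float32)           # shape: (64, len(beat))

class CWTCNNWithRR(nn.Module):
    def __init__(self, num_classes=4, rr_features=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten())                               # -> 32 * 4 * 4 = 512 features
        self.fc = nn.Sequential(
            nn.Linear(512 + rr_features, 64), nn.ReLU(),
            nn.Linear(64, num_classes))

    def forward(self, scalogram, rr):                   # scalogram: (N,1,64,T), rr: (N,4)
        feats = self.cnn(scalogram)
        # fuse CNN features with the four RR-interval features before classification
        return self.fc(torch.cat([feats, rr], dim=1))
```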


2021, Vol. 12(1)
Author(s): Changming Wu, Heshan Yu, Seokhyeong Lee, Ruoming Peng, Ichiro Takeuchi, ...

Abstract. Neuromorphic photonics has recently emerged as a promising hardware accelerator, with significant potential speed and energy advantages over digital electronics for machine learning algorithms such as neural networks of various types. Integrated photonic networks are particularly powerful in performing analog computing of matrix-vector multiplication (MVM), as they afford unparalleled speed and bandwidth density for data transmission. Incorporating nonvolatile phase-change materials in integrated photonic devices enables indispensable programming and in-memory computing capabilities for on-chip optical computing. Here, we demonstrate a multimode photonic computing core consisting of an array of programmable mode converters based on on-waveguide metasurfaces made of phase-change materials. The programmable converters utilize the refractive index change of the phase-change material Ge2Sb2Te5 during phase transition to control the waveguide spatial modes with a very high precision of up to 64 levels in modal contrast. This contrast is used to represent the matrix elements, with 6-bit resolution and both positive and negative values, to perform MVM computation in neural network algorithms. We demonstrate a prototypical optical convolutional neural network that can perform image processing and recognition tasks with high accuracy. With a broad operation bandwidth and a compact device footprint, the demonstrated multimode photonic core is promising for large-scale photonic neural networks with ultrahigh computation throughput.
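As a purely numerical illustration of the 6-bit weight encoding described above, the sketch below rounds signed matrix elements onto 64 evenly spaced levels before performing the matrix-vector multiplication and reports the resulting error. It is a back-of-the-envelope model of the quantization only, not of the photonic hardware, and the matrix size and error metric are arbitrary choices.

```python
# Toy model of 6-bit (64-level), signed weight quantization applied before an MVM.
import numpy as np

def quantize_6bit(w):
    """Map signed weights onto 64 evenly spaced levels spanning [-max|w|, +max|w|]."""
    scale = np.abs(w).max()
    levels = np.round((w / scale) * 31.5 + 31.5)        # integers 0 .. 63
    return (levels - 31.5) / 31.5 * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))                       # "programmed" weight matrix
x = rng.standard_normal(16)                             # input vector

y_exact = W @ x
y_quant = quantize_6bit(W) @ x
print("relative MVM error:",
      np.linalg.norm(y_quant - y_exact) / np.linalg.norm(y_exact))
```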


Author(s): E. Yu. Shchetinin

The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and the recognition of emotions in speech (RER) is its most in-demand part. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computational experiments are carried out on the RAVDESS database of emotional human speech, a dataset containing 7,356 files. The recordings cover the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions, split by male and female speakers) for a total of 1,440 speech-only samples. To train machine learning algorithms and deep neural networks to recognize emotions, the audio recordings must first be pre-processed to extract the main features characteristic of particular emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the recordings. Various neural network models for emotion recognition were then studied on these data, and machine learning algorithms were used for comparative analysis. The following models were trained during the experiments: logistic regression (LR), a support vector machine classifier (SVM), a decision tree (DT), a random forest (RF), gradient boosting over trees (XGBoost), the convolutional neural network CNN, the recurrent neural network RNN (ResNet18), and an ensemble of convolutional and recurrent networks (Stacked CNN-RNN). The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
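The pre-processing step described above can be sketched with librosa: each audio file is reduced to a fixed-length vector of MFCC, chroma, and mel-spectrogram statistics suitable for the classical classifiers (the CNN/RNN models would typically consume the full time-frequency matrices instead). The feature counts and the time-averaging are assumptions, not the paper's exact recipe.

```python
# Sketch of per-file feature extraction: MFCC + chroma + mel-spectrum statistics.
import numpy as np
import librosa

def extract_features(path, sr=22050):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)      # cepstral coefficients
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)        # chroma coefficients
    mel = librosa.feature.melspectrogram(y=y, sr=sr)        # frequency-spectrum features
    # average over time so every file yields a vector of the same length
    return np.concatenate([mfcc.mean(axis=1),
                           chroma.mean(axis=1),
                           librosa.power_to_db(mel).mean(axis=1)])
```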


Author(s): Xu Chen, Shibo Wang, Houguang Liu, Jianhua Yang, Songyong Liu, ...

Abstract. Many data-driven coal gangue recognition (CGR) methods based on the vibration or sound of collapsing coal and gangue have been proposed to achieve automatic CGR, which is important for realizing intelligent top-coal caving. However, the strong background noise and complex environment in underground coal mines make this task challenging in practical applications. Inspired by the fact that workers distinguish coal and gangue from underground noise by listening to the hydraulic support sound, we propose an auditory-model-based CGR method that simulates human auditory recognition by combining an auditory spectrogram with a convolutional neural network (CNN). First, we adjust the characteristic frequency (CF) distribution of the auditory peripheral model (APM) based on the spectral characteristics of the collapse sound signals of coal and gangue, and then process the sound signals with the adjusted APM to obtain inferior colliculus auditory signals at multiple CFs. Subsequently, the auditory signals of all CFs are converted into grayscale images separately and concatenated into a multichannel auditory spectrum along the channel dimension. Finally, we input the multichannel auditory spectrum as a feature map to a two-dimensional CNN, whose convolutional layers automatically extract features while the fully connected layer and softmax layer flatten the features and predict the recognition result, respectively. The CNN is optimized for CGR based on a comparative study of four typical CNN structures with different network training hyperparameters. The experimental results show that this method achieves accurate CGR with a recognition accuracy of 99.5%. Moreover, it offers excellent noise immunity compared with typically used CGR methods under various noisy conditions.
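A minimal PyTorch sketch of the pipeline described above: the auditory signal for each characteristic frequency is rendered as a grayscale image, the per-CF images are stacked along the channel dimension into a multichannel auditory spectrum, and a small 2D CNN classifies the result. The number of CF channels, the image size, and the layer sizes are assumptions, and the auditory peripheral model itself is not reproduced here.

```python
# Sketch of multichannel auditory-spectrum stacking and a small 2-D CNN classifier.
import torch
import torch.nn as nn

NUM_CF_CHANNELS = 8                                     # one grayscale image per CF (assumed)

def stack_cf_images(cf_images):
    """cf_images: list of (H, W) float tensors, one per CF -> (1, C, H, W) network input."""
    return torch.stack(cf_images, dim=0).unsqueeze(0)

class CGRNet(nn.Module):
    def __init__(self, num_classes=2):                  # coal vs. gangue
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(NUM_CF_CHANNELS, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())       # -> 64 * 4 * 4 features
        # the softmax of the abstract is applied inside the cross-entropy loss in PyTorch
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):                                # x: (N, 8, H, W)
        return self.classifier(self.features(x))
```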

