Discrimination of Pepper Seed Varieties by Multispectral Imaging Combined with Machine Learning

2020 ◽  
Vol 36 (5) ◽  
pp. 743-749
Author(s):  
Xingwang Li ◽  
Xiaofei Fan ◽  
Lili Zhao ◽  
Sheng Huang ◽  
Yi He ◽  
...  

Highlights:
- This study revealed the feasibility of classifying pepper seed varieties using multispectral imaging combined with a one-dimensional convolutional neural network (1D-CNN).
- Convolutional neural networks were adopted to develop models for prediction of seed varieties, and their performance was compared with KNN and SVM.
- In this experiment, the SVM classification model performed best, but the 1D-CNN classification model is relatively easy to implement.

Abstract. When non-seed materials are mixed into seed lots, or seed varieties of low value are mixed into high-value varieties, growers or businesses suffer losses. Thus, successful discrimination of seed varieties is critical for improvement of seed value. In recent years, convolutional neural networks (CNNs) have been used for classification of seed varieties. The feasibility of using multispectral imaging combined with a one-dimensional convolutional neural network (1D-CNN) to classify pepper seed varieties was studied. The total number of samples across three varieties was 1472, and the average spectral curve between 365 nm and 970 nm of the three varieties was studied. The data were analyzed using the full bands of the spectrum or the feature bands selected by the successive projections algorithm (SPA). SPA extracted 9 feature bands from 19 bands (430, 450, 470, 490, 515, 570, 660, 780, and 880 nm). The classification accuracies of the three models developed with the full bands using K-nearest neighbors (KNN), support vector machine (SVM), and 1D-CNN were 85.81%, 97.70%, and 90.50%, respectively. With the full bands, SVM and 1D-CNN performed significantly better than KNN, and SVM performed slightly better than 1D-CNN. With the feature bands, the testing accuracies of SVM and 1D-CNN were 97.30% and 92.6%, respectively. Although the classification accuracy of 1D-CNN was not the highest, its ease of operation made it the most feasible method for pepper seed variety prediction. Keywords: Multispectral imaging, One-dimensional convolutional neural network, Pepper seed, Variety classification.
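A minimal sketch of the kind of 1D-CNN described above, assuming each seed is represented by its 19-band mean spectrum and one of three variety labels; the layer sizes, optimizer, and training settings are illustrative assumptions, not the authors' configuration.

```python
# Illustrative 1D-CNN for 19-band spectral vectors (3 pepper seed varieties).
# Data here are random placeholders standing in for the measured spectra.
import numpy as np
from tensorflow.keras import layers, models

n_bands, n_classes = 19, 3
X = np.random.rand(1472, n_bands, 1).astype("float32")  # placeholder spectra
y = np.random.randint(0, n_classes, size=1472)           # placeholder variety labels

model = models.Sequential([
    layers.Input(shape=(n_bands, 1)),
    layers.Conv1D(32, kernel_size=3, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=3, activation="relu", padding="same"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```

Restricting the input to the 9 SPA-selected bands would only require changing `n_bands` and slicing the spectra accordingly.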

2019 ◽  
Vol 8 (4) ◽  
pp. 160 ◽  
Author(s):  
Bingxin Liu ◽  
Ying Li ◽  
Guannan Li ◽  
Anling Liu

Spectral characteristics play an important role in the classification of oil films, but the presence of too many bands can lead to information redundancy and reduced classification accuracy. In this study, a classification model combining spectral-index-based band selection (SIs) and a one-dimensional convolutional neural network was proposed to realize automatic oil film classification from hyperspectral remote sensing images. For comparison, minimum Redundancy Maximum Relevance (mRMR) was also tested for reducing the number of bands. Support vector machine (SVM), random forest (RF), and Hu's convolutional neural network (CNN) classifiers were trained and tested. The results show that the accuracy of the one-dimensional convolutional neural network (1D CNN) models surpassed that of the other machine learning algorithms such as SVM and RF. The SIs+1D CNN model produced a relatively more accurate oil film distribution map in less time than the other models.
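A hedged sketch of the baseline comparison mentioned above, using scikit-learn's SVM and random forest on pixels restricted to a band subset; the band indices, class count, and data are placeholders, not the paper's spectral-index-based selection.

```python
# Baseline SVM / random-forest comparison on a band-selected pixel matrix.
# Band subset and data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

n_pixels, n_bands = 5000, 224
X_full = np.random.rand(n_pixels, n_bands)        # placeholder hyperspectral pixels
y = np.random.randint(0, 4, size=n_pixels)        # placeholder oil-film classes

selected_bands = [10, 35, 60, 90, 120, 150, 180]  # hypothetical selected bands
X = X_full[:, selected_bands]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=10, gamma="scale")),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "test accuracy:", clf.score(X_te, y_te))
```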


2021 ◽  
Vol 5 (3) ◽  
pp. 584-593
Author(s):  
Naufal Hilmiaji ◽  
Kemas Muslim Lhaksmana ◽  
Mahendra Dwifebri Purbolaksono

Emotion classification of tweets has become increasingly practical, especially with the advancement of deep learning methods for text classification. Despite some effort to identify emotions in Indonesian tweets, the reported performance has not reached acceptable numbers. To address this problem, this paper implements a classification model using a convolutional neural network (CNN), which has demonstrated strong performance in text classification. For easy comparison with previous research, the classification is performed on the same dataset, which consists of 4,403 Indonesian tweets labeled with five emotion classes: anger, fear, joy, love, and sadness. The evaluation achieves precision, recall, and F1-score of 90.1%, 90.3%, and 90.2%, respectively, while the highest accuracy reaches 89.8%. These results outperform previous research performing the same classification on the same dataset.
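A generic text-CNN sketch for the five-class emotion task described above, assuming tweets have already been tokenized and padded; vocabulary size, sequence length, and layer sizes are assumptions, not the authors' settings.

```python
# Generic text-CNN for 5-class emotion classification of tokenized tweets.
# Token ids and labels below are random placeholders.
import numpy as np
from tensorflow.keras import layers, models

vocab_size, max_len, n_classes = 20000, 50, 5
X = np.random.randint(1, vocab_size, size=(4403, max_len))  # placeholder token ids
y = np.random.randint(0, n_classes, size=4403)               # placeholder labels

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, 128),
    layers.Conv1D(128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)
```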


2020 ◽  
Author(s):  
Yakoop Razzaz Hamoud Qasim ◽  
Habeb Abdulkhaleq Mohammed Hassan ◽  
Abdulelah Abdulkhaleq Mohammed Hassan

In this paper we present a convolutional neural network consisting of NASNet and MobileNet in parallel (concatenation) to classify three classes, COVID-19, normal, and pneumonia, using a dataset of 1083 X-ray images divided into 361 images per class. VGG16 and ResNet152-v2 models were also prepared and trained on the same dataset to compare their performance with that of the proposed model. After training the networks and evaluating their performance, overall accuracies of 96.91% for the proposed model, 92.59% for the VGG16 model, and 94.14% for ResNet152-v2 were obtained. For the COVID-19 class, the proposed model achieved accuracy, sensitivity, specificity, and precision of 99.69%, 99.07%, 100%, and 100%, respectively. These results were better than those of the other models. In conclusion, neural networks built from models in parallel are most effective when the training data are limited and the features of the different classes are similar.
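A sketch of a two-backbone parallel (concatenated) classifier of the kind described above, using the Keras NASNetMobile and MobileNetV2 backbones; the ImageNet pretraining, input size, and classification head are assumptions rather than the paper's exact setup.

```python
# Two feature extractors run in parallel on the same X-ray input; their pooled
# features are concatenated before a 3-class softmax head (COVID-19 / normal / pneumonia).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2, NASNetMobile

inp = layers.Input(shape=(224, 224, 3))

# Assumed ImageNet-pretrained backbones, used as feature extractors.
nasnet = NASNetMobile(include_top=False, weights="imagenet",
                      input_shape=(224, 224, 3), pooling="avg")
mobilenet = MobileNetV2(include_top=False, weights="imagenet",
                        input_shape=(224, 224, 3), pooling="avg")

features = layers.Concatenate()([nasnet(inp), mobilenet(inp)])
x = layers.Dense(256, activation="relu")(features)
x = layers.Dropout(0.5)(x)
out = layers.Dense(3, activation="softmax")(x)

model = models.Model(inputs=inp, outputs=out)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```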


2021 ◽  
Vol 65 (1) ◽  
pp. 11-22
Author(s):  
Mengyao Lu ◽  
Shuwen Jiang ◽  
Cong Wang ◽  
Dong Chen ◽  
Tian’en Chen

Highlights:
- A classification model for the front and back sides of tobacco leaves was developed for application in industry.
- A tobacco leaf grading method that combines a CNN with double-branch integration was proposed.
- The A-ResNet network was proposed and compared with other classic CNN networks.
- The grading accuracy for eight different grades was 91.30% and the testing time was 82.180 ms, showing relatively high classification accuracy and efficiency.

Abstract. Flue-cured tobacco leaf grading is a key step in the production and processing of Chinese-style cigarette raw materials, directly affecting cigarette blend and quality stability. At present, manual grading of tobacco leaves is dominant in China, resulting in unsatisfactory grading quality and consuming considerable material and financial resources. In this study, for fast, accurate, and non-destructive tobacco leaf grading, 2,791 flue-cured tobacco leaves of eight different grades in south Anhui Province, China, were chosen as the study sample, and a tobacco leaf grading method that combines convolutional neural networks and double-branch integration was proposed. First, a classification model for the front and back sides of tobacco leaves was trained by transfer learning. Second, two processing methods (equal-scaled resizing and cropping) were used to obtain global images and local patches from the front sides of tobacco leaves. A global image-based tobacco leaf grading model was then developed using the proposed A-ResNet-65 network, and a local patch-based tobacco leaf grading model was developed using the ResNet-34 network. These two networks were compared with classic deep learning networks, such as VGGNet, GoogLeNet-V3, and ResNet. Finally, the grading results of the two grading models were integrated to realize tobacco leaf grading. The tobacco leaf classification accuracy of the final model, for eight different grades, was 91.30%, and grading of a single tobacco leaf required 82.180 ms. The proposed method achieved a relatively high grading accuracy and efficiency. It provides a method for industrial implementation of tobacco leaf grading and offers a new approach for the quality grading of other agricultural products. Keywords: Convolutional neural network, Deep learning, Image classification, Transfer learning, Tobacco leaf grading
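A hedged sketch of the double-branch integration step described above: a global-image model and a local-patch model each produce class probabilities, which are then combined. The two backbones here are small placeholder CNNs standing in for the paper's A-ResNet-65 and ResNet-34 networks, and the averaging rule is an assumption.

```python
# Double-branch integration sketch: average the softmax outputs of a global-image
# branch and a local-patch branch to grade a tobacco leaf into one of 8 grades.
import numpy as np
from tensorflow.keras import layers, models

n_grades = 8

def small_cnn(input_shape):
    """Placeholder backbone standing in for the branch networks."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(n_grades, activation="softmax"),
    ])

global_model = small_cnn((224, 224, 3))  # equal-scaled resized whole-leaf image
patch_model = small_cnn((64, 64, 3))     # cropped local patch

leaf_image = np.random.rand(1, 224, 224, 3).astype("float32")  # placeholder data
leaf_patch = np.random.rand(1, 64, 64, 3).astype("float32")

# Integrate the two branches by averaging their class probabilities.
probs = (global_model.predict(leaf_image) + patch_model.predict(leaf_patch)) / 2.0
print("Predicted grade index:", int(np.argmax(probs, axis=1)[0]))
```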


Author(s):  
Sachin B. Jadhav

Plant pathologists desire soft computing technology for accurate and reliable diagnosis of plant diseases. In this study, we propose an efficient soybean disease identification method based on a transfer learning approach using pre-trained convolutional neural networks (CNNs) such as AlexNet, GoogleNet, VGG16, ResNet101, and DenseNet201. The proposed convolutional neural networks were trained using a 1200-image PlantVillage dataset of diseased and healthy soybean leaves to identify three soybean diseases and distinguish them from healthy leaves. Pre-trained CNNs were used to enable fast and easy system implementation in practice. We used a five-fold cross-validation strategy to analyze the performance of the networks. In this study, the pre-trained convolutional neural networks were used as feature extractors and classifiers. The experimental results based on the proposed approach using the pre-trained AlexNet, GoogleNet, VGG16, ResNet101, and DenseNet201 networks achieve accuracies of 95%, 96.4%, 96.4%, 92.1%, and 93.6%, respectively. The experimental results for the identification of soybean diseases indicate that the proposed network models achieve the highest accuracy.
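A hedged sketch of the transfer-learning idea described above: a pre-trained CNN is used as a fixed feature extractor and a simple classifier is evaluated with five-fold cross-validation. MobileNetV2 stands in here for the backbones named in the abstract, and the leaf images and labels are placeholders.

```python
# Pre-trained CNN as feature extractor + 5-fold cross-validated classifier.
# Backbone choice, classifier, and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

images = np.random.rand(40, 224, 224, 3).astype("float32") * 255  # placeholder leaf images
labels = np.tile(np.arange(4), 10)                                 # 3 diseases + healthy

backbone = MobileNetV2(include_top=False, weights="imagenet", pooling="avg")
features = backbone.predict(preprocess_input(images))              # one feature vector per image

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=5)
print("5-fold accuracies:", scores, "mean:", scores.mean())
```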


2021 ◽  
Author(s):  
Shima Baniadamdizaj ◽  
Mohammadreza Soheili ◽  
Azadeh Mansouri

Abstract Today, the integration of information from digital and paper documents is very important for efficient knowledge management. This requires the document to be localized within the image. Several methods have been proposed to solve this problem; however, they are based on traditional image processing techniques that are not robust to extreme viewpoints and backgrounds. Deep convolutional neural networks (CNNs), on the other hand, have proven to be extremely robust to variations in background and viewing angle for object detection and classification tasks. We propose a new use of neural networks (NNs) for this task, treating it directly as a localization problem. The proposed method can even localize documents that do not have a perfectly rectangular shape. We also used a newly collected dataset that contains more challenging cases and is closer to what a casual user would capture. Results are reported for three different categories of images, and our proposed method achieves 83% on average. The results are compared with the most popular document localization methods and mobile applications.


2018 ◽  
Vol 7 (3.1) ◽  
pp. 13
Author(s):  
Raveendra K ◽  
R Vinoth Kanna

Automatic logo-based document image retrieval is an essential and widely used method in feature extraction applications. In this paper, the architecture of the convolutional neural network (CNN) is explained in detail with pictorial representations in order to present the complex CNN process in a simplified way. The main objective of this paper is to effectively utilize CNNs in automatic logo-based document image retrieval methods.


2021 ◽  
Vol 2089 (1) ◽  
pp. 012013
Author(s):  
Priyadarshini Chatterjee ◽  
Dutta Sushama Rani

Abstract Automated diagnosis of diseases has gained considerable attention and shown great potential in recent years. In particular, automated screening of cancers has helped clinicians over time. The diagnosis of clinicians is sometimes biased, and automated detection can help them come to a proper conclusion. Automated screening is implemented using either artificial neural networks or convolutional neural networks. As artificial neural networks are slow in computation, convolutional neural networks have gained a lot of importance in recent years. It is also observed that convolutional neural network architectures require smaller datasets, which gives them an edge over artificial neural networks. Convolutional neural networks are used for both segmentation and classification. Image segmentation is one of the important steps in any model used for image analysis. This paper surveys various convolutional neural networks that are used for medical image analysis.


Author(s):  
Victoria Wu

Introduction: Scoliosis, an excessive curvature of the spine, affects approximately 1 in 1,000 individuals. As a result, mandatory scoliosis screening procedures were formerly implemented. Screening programs are no longer widely used as the harms often outweigh the benefits: they cause many adolescents to undergo frequent diagnostic X-ray procedures and the associated radiation exposure. This makes spinal ultrasound an ideal substitute for scoliosis screening, as it does not expose patients to those levels of radiation. Spinal curvature can be accurately computed from the locations of the spinal transverse processes, by measuring the vertebral angle from a reference line [1]. However, ultrasound images are less clear than X-ray images, making it difficult to identify the transverse processes. To overcome this, we employ deep learning using a convolutional neural network, a powerful tool for computer vision and image classification [2].
Method: A total of 2,752 ultrasound images were recorded from a spine phantom to train a convolutional neural network. Subsequently, another recording of 747 images was taken for testing. All ultrasound images from the scans were segmented manually using the 3D Slicer (www.slicer.org) software. Next, the dataset was fed through a convolutional neural network. The network used was a modified version of GoogLeNet (Inception v1), with 2 linearly stacked inception modules. This network was chosen because it provided a balance between accurate performance and time-efficient computation.
Results: Deep learning classification using the Inception model achieved an accuracy of 84% for the phantom scan.
Conclusion: The classification model performs with considerable accuracy. Better accuracy needs to be achieved, possibly with more available data and improvements in the classification model.
Acknowledgements: G. Fichtinger is supported as a Canada Research Chair in Computer-Integrated Surgery. This work was funded, in part, by NIH/NIBIB and NIH/NIGMS (via grant 1R01EB021396-01A1 - Slicer+PLUS: Point-of-Care Ultrasound) and by CANARIE's Research Software Program.
Figure 1: Ultrasound scan containing a transverse process (left), and ultrasound scan containing no transverse process (right).
Figure 2: Accuracy of classification for training (red) and validation (blue).
References:
[1] Ungi T, King F, Kempston M, Keri Z, Lasso A, Mousavi P, Rudan J, Borschneck DP, Fichtinger G. Spinal Curvature Measurement by Tracked Ultrasound Snapshots. Ultrasound in Medicine and Biology, 40(2):447-54, Feb 2014.
[2] Krizhevsky A, Sutskever I, Hinton GE. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems 25:1097-1105.
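A hedged sketch of a small Inception-style classifier with two stacked inception blocks, as in the method described above, for binary "transverse process present / absent" ultrasound frames; the filter counts, stem layers, and input size are assumptions, not the study's exact network.

```python
# Small Inception-v1-style network with two stacked inception blocks
# for binary classification of grayscale ultrasound frames.
from tensorflow.keras import layers, models

def inception_block(x, filters):
    """Parallel 1x1, 3x3, 5x5 convolutions and pooling, concatenated (Inception v1 style)."""
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(filters, 3, padding="same", activation="relu")(b2)
    b3 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, 5, padding="same", activation="relu")(b3)
    b4 = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    b4 = layers.Conv2D(filters, 1, padding="same", activation="relu")(b4)
    return layers.Concatenate()([b1, b2, b3, b4])

inp = layers.Input(shape=(128, 128, 1))             # assumed grayscale ultrasound frame
x = layers.Conv2D(32, 5, strides=2, activation="relu")(inp)
x = layers.MaxPooling2D(2)(x)
x = inception_block(x, 32)
x = inception_block(x, 64)
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(1, activation="sigmoid")(x)      # transverse process: yes / no

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```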


2021 ◽  
Vol 5 (2) ◽  
pp. 312-318
Author(s):  
Rima Dias Ramadhani ◽  
Afandi Nur Aziz Thohari ◽  
Condro Kartiko ◽  
Apri Junaidi ◽  
Tri Ginanjar Laksana ◽  
...  

Waste consists of goods or materials that have no value in the scope of production; in some cases it is disposed of carelessly and can damage the environment. The Indonesian government in 2019 recorded waste reaching 66-67 million tons, higher than the previous year's 64 million tons. Waste is differentiated by type, namely organic and inorganic waste. In the field of computer science, the type of waste can be sensed using a camera and the convolutional neural network (CNN) method, a type of neural network that receives input in the form of images. The input is trained using a CNN architecture so that the model produces output that can recognize the inputted object. This study optimizes the use of the CNN method to obtain accurate results in identifying types of waste. Optimization is done by tuning several hyperparameters of the CNN architecture. With these hyperparameters, the accuracy is 91.2%, whereas without them the accuracy is only 67.6%. Three hyperparameters are used to increase the accuracy of the model: dropout, padding, and stride. A dropout rate of 20% is used to reduce overfitting during training, while padding and stride are used to speed up the model training process.
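A hedged sketch of a small image CNN illustrating the three hyperparameters discussed above (dropout, padding, and stride); the architecture, input size, and values are illustrative assumptions, not the paper's model.

```python
# Minimal CNN showing where dropout, padding, and stride enter the architecture
# for a two-class (organic vs. inorganic) waste image classifier.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    # "same" padding preserves spatial size; stride 2 halves it and speeds up training.
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.2),                    # 20% dropout to reduce overfitting
    layers.Dense(2, activation="softmax"),  # organic vs. inorganic waste
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```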

