Computational Prediction of Disease Detection and Insect Identification using Xception model

2021 ◽  
Author(s):  
Lidia Cleetus ◽  
Raji Sukumar ◽  
Hemalatha N

In this paper, a detection tool has been built for the early detection and identification of diseases and pests found in crops. For this, various deep learning architectures were evaluated to determine which would yield the most accurate and efficient detection model. The architectures used in this study were a plain Convolutional Neural Network (CNN), VGG16, InceptionV3, and Xception. VGG16, InceptionV3, and Xception are pre-trained models based on the CNN architecture and follow the concept of transfer learning. Transfer learning is a technique that reuses what a model has learned on a base dataset and applies it to the present dataset, which yields rapid results and improved performance. Two plant datasets were used here, one for diseases and one for insects, and the results of the algorithms were compared. The most successful was the Xception model, which obtained scores of 82.89 for disease and 77.9 for pests.
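A minimal sketch of the transfer-learning setup described above, using the Keras Xception backbone, is shown below. The image size, number of classes, and classifier head are illustrative assumptions rather than values reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

NUM_CLASSES = 10          # assumed number of disease/pest classes
IMG_SIZE = (299, 299)     # Xception's default input resolution

# Base model pretrained on ImageNet; its convolutional features are reused.
base = Xception(weights="imagenet", include_top=False,
                input_shape=IMG_SIZE + (3,))
base.trainable = False    # freeze pretrained layers for transfer learning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the backbone and training only the new classification head is the simplest form of transfer learning; the same pattern applies to VGG16 and InceptionV3 by swapping the imported base model.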

Measurement ◽  
2021 ◽  
pp. 109953
Author(s):  
Adhiyaman Manickam ◽  
Jianmin Jiang ◽  
Yu Zhou ◽  
Abhinav Sagar ◽  
Rajkumar Soundrapandiyan ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Chao Su ◽  
Wenjun Wang

Cracks play a critical role in evaluating the quality of concrete structures, as they affect the safety, applicability, and durability of the structure. Due to their excellent performance in image processing, convolutional neural networks are becoming the mainstream choice to replace manual crack detection. In this paper, we improve EfficientNetB0 to detect concrete surface cracks using the transfer learning method. The base model was designed by neural architecture search, and its weights were pretrained on ImageNet. Supervised learning with the Adam optimizer is used to update the network parameters. In the testing process, crack images from different locations were used to further test the generalization capability of the model. A comparison of the detection results with the MobileNetV2, DenseNet201, and InceptionV3 models shows that our model greatly reduces the number of parameters while achieving high accuracy (0.9911) and good generalization capability. It is an efficient detection model that provides a new option for crack detection in settings with limited computing resources.
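A minimal sketch of transfer learning with EfficientNetB0 for binary crack / no-crack classification, loosely following the setup described above (ImageNet weights, Adam optimizer), is shown below; the input resolution, learning rate, and classifier head are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0

# EfficientNetB0 backbone with ImageNet weights; the top classifier is replaced.
base = EfficientNetB0(weights="imagenet", include_top=False,
                      input_shape=(224, 224, 3))
base.trainable = False  # reuse ImageNet features; unfreeze later if needed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),  # crack vs. no crack
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])
```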


Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2109
Author(s):  
Skandha S. Sanagala ◽  
Andrew Nicolaides ◽  
Suneet K. Gupta ◽  
Vijaya K. Koppula ◽  
Luca Saba ◽  
...  

Background and Purpose: Only 1–2% of internal carotid artery asymptomatic plaques are unstable as a result of >80% stenosis. Thus, unnecessary efforts can be saved if these plaques can be characterized and classified into symptomatic and asymptomatic using non-invasive B-mode ultrasound. Earlier plaque tissue characterization (PTC) methods were machine learning (ML)-based and used hand-crafted features, which yielded lower accuracy and were unreliable. The proposed study shows the role of transfer learning (TL)-based deep learning models for PTC. Methods: As pretrained weights were used in the supercomputer framework, we hypothesize that transfer learning (TL) provides improved performance compared with deep learning. We applied 11 kinds of artificial intelligence (AI) models; 10 of them were augmented and optimized using TL approaches, a class of Atheromatic™ 2.0 TL (AtheroPoint™, Roseville, CA, USA) that consisted of (i–ii) Visual Geometric Group-16, 19 (VGG16, 19); (iii) Inception V3 (IV3); (iv–v) DenseNet121, 169; (vi) XceptionNet; (vii) ResNet50; (viii) MobileNet; (ix) AlexNet; (x) SqueezeNet; and one DL-based model, (xi) SuriNet, derived from UNet. We benchmarked the 11 AI models against our earlier deep convolutional neural network (DCNN) model. Results: The best performing TL model was MobileNet, with accuracy and area-under-the-curve (AUC) pairs of 96.10 ± 3% and 0.961 (p < 0.0001), respectively. In DL, the DCNN was comparable to SuriNet, with accuracies of 95.66% and 92.7 ± 5.66%, and AUCs of 0.956 (p < 0.0001) and 0.927 (p < 0.0001), respectively. We validated the performance of the AI architectures with established biomarkers such as greyscale median (GSM), fractal dimension (FD), higher-order spectra (HOS), and visual heatmaps. We benchmarked against the previously developed Atheromatic™ 1.0 ML system and showed an improvement of 12.9%. Conclusions: TL is a powerful AI tool for PTC into symptomatic and asymptomatic plaques.
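A minimal sketch of one such TL classifier is shown below: a MobileNet backbone with a binary head for symptomatic vs. asymptomatic plaque frames, evaluated with accuracy and AUC as in the study. The image size, head design, and training settings are assumptions, not AtheroPoint's actual Atheromatic™ implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet
from sklearn.metrics import accuracy_score, roc_auc_score

# MobileNet backbone with ImageNet weights; only the new head is trained.
base = MobileNet(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # symptomatic vs. asymptomatic
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

def evaluate(model, x_test, y_test):
    """Report accuracy and area-under-the-curve on held-out ultrasound frames."""
    probs = model.predict(x_test).ravel()
    preds = (probs >= 0.5).astype(int)
    return accuracy_score(y_test, preds), roc_auc_score(y_test, probs)
```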


Forecasting ◽  
2021 ◽  
Vol 3 (4) ◽  
pp. 741-762
Author(s):  
Panagiotis Stalidis ◽  
Theodoros Semertzidis ◽  
Petros Daras

In this paper, a detailed study on crime classification and prediction using deep learning architectures is presented. We examine the effectiveness of deep learning algorithms in this domain and provide recommendations for designing and training deep learning systems for predicting crime areas, using open data from police reports. Using time series of crime types per location as training data, we conduct a comparative study of 10 state-of-the-art methods against 3 different deep learning configurations. In our experiments with 5 publicly available datasets, we demonstrate that the deep learning-based methods consistently outperform the existing best-performing methods. Moreover, we evaluate the effectiveness of different parameters in the deep learning architectures and give insights for configuring them to achieve improved performance in crime classification and, ultimately, crime prediction.
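A minimal sketch of one plausible deep-learning configuration for this task is given below: an LSTM that consumes a window of per-location crime-type counts and predicts whether the next period is a hotspot. The window length, number of crime-type channels, network width, and output definition are illustrative assumptions, not the exact configurations compared in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 52        # assumed: one year of weekly observations per location
NUM_TYPES = 8      # assumed number of crime-type channels

model = models.Sequential([
    layers.Input(shape=(WINDOW, NUM_TYPES)),
    layers.LSTM(64, return_sequences=True),  # encode the crime time series
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),   # hotspot / non-hotspot next period
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
```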


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0259036
Author(s):  
Diah Harnoni Apriyanti ◽  
Luuk J. Spreeuwers ◽  
Peter J. F. Lucas ◽  
Raymond N. J. Veldhuis

The color of particular parts of a flower is often employed as one of the features to differentiate between flower types. Thus, color is also used in flower-image classification. Color labels, such as ‘green’, ‘red’, and ‘yellow’, are used by taxonomists and lay people alike to describe the color of plants. Flower-image datasets usually consist only of images and do not contain flower descriptions. In this research, we have built a flower-image dataset, focused on orchid species, which consists of human-friendly textual descriptions of the features of specific flowers on the one hand, and digital photographs showing what a flower looks like on the other. Using this dataset, a new automated color detection model was developed. It is the first research of its kind using color labels and deep learning for color detection in flower recognition. As deep learning often excels in pattern recognition in digital images, we applied transfer learning with various amounts of layer unfreezing across five different neural network architectures (VGG16, Inception, Resnet50, Xception, Nasnet) to determine which architecture and which transfer-learning scheme performs best. In addition, various color scheme scenarios were tested, including the use of primary and secondary colors together, and the effectiveness of handling multi-class classification with multi-class, combined binary, and ensemble classifiers was studied. The best overall performance was achieved by the ensemble classifier. The results show that the proposed method can detect the color of the flower and labellum very well without having to perform image segmentation. The result of this study can act as a foundation for the development of an image-based plant recognition system that is able to offer an explanation of a provided classification.
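A minimal sketch of the partial-unfreezing transfer-learning scheme mentioned above is shown below: all but the last N layers of a pretrained backbone are frozen before fine-tuning on color labels. The choice of VGG16, the value of N, and the number of color classes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_color_classifier(num_colors: int, unfreeze_last: int = 4):
    """Pretrained backbone with only the top `unfreeze_last` layers trainable."""
    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))
    base.trainable = True
    for layer in base.layers[:-unfreeze_last]:
        layer.trainable = False        # freeze everything below the top block
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_colors, activation="softmax"),  # multi-class color labels
    ])

model = build_color_classifier(num_colors=9)   # assumed number of color labels
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Varying `unfreeze_last` reproduces the different amounts of unfreezing compared in the study; an ensemble can then be formed by combining the predictions of several such classifiers.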


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Amit Doegar ◽  
Srinidhi Hiriyannaiah ◽  
G. M. Siddesh ◽  
K. G. Srinivasa ◽  
Maitreyee Dutta

Cloud computing has evolved in various application areas such as medical imaging and bioinformatics. This raises issues of privacy and tampering in images, especially in the medical field and bioinformatics, for various reasons. Digital images are quite vulnerable to tampering by interceptors, and the credibility of individuals can be damaged through falsified information in the images. Image tampering detection is an approach to identifying and locating the tampered components in an image. Efficient detection of image tampering requires a sufficient number of features, which can be obtained by deep learning architecture-based models without manual feature extraction. In this research work, we have presented and implemented a cloud-based, residual exploitation-based deep learning architecture to detect whether or not an image has been tampered with. The proposed approach is implemented on the publicly available benchmark MICC-F220 dataset with the k-fold cross-validation approach to avoid the overfitting problem and to evaluate the performance metrics.
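A minimal sketch of the k-fold cross-validation protocol is shown below; a generic ResNet50 backbone stands in for the paper's residual-exploitation architecture, and k, the image size, and the training settings are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50
from sklearn.model_selection import KFold

def build_model():
    """Residual (ResNet50) backbone with a binary tampered/authentic head."""
    base = ResNet50(weights="imagenet", include_top=False,
                    input_shape=(224, 224, 3))
    base.trainable = False
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # tampered vs. authentic
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def cross_validate(images, labels, k=10):
    """Train and evaluate a fresh model on each of the k folds."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True,
                                     random_state=42).split(images):
        model = build_model()            # new model per fold avoids leakage
        model.fit(images[train_idx], labels[train_idx], epochs=5, verbose=0)
        _, acc = model.evaluate(images[test_idx], labels[test_idx], verbose=0)
        scores.append(acc)
    return float(np.mean(scores)), float(np.std(scores))
```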


Author(s):  
Pritam Ghosh ◽  
Subhranil Mustafi ◽  
Satyendra Nath Mandal

In this paper, an attempt has been made to identify six different goat breeds from pure-breed goat images. The images of goat breeds were captured from different organized, registered goat farms in India; almost two thousand digital images of individual goats were captured in restricted (to get a similar image background) and unrestricted (natural) environments without imposing stress on the animals. A pre-trained deep learning-based object detection model called Faster R-CNN has been fine-tuned using transfer learning on the acquired images for automatic classification and localization of goat breeds. The fine-tuned model is able to locate the goat (localize) and classify (identify) its breed in the image. The Pascal VOC object detection evaluation metrics have been used to evaluate this model. Finally, a comparison has been made with the prediction accuracies of different technologies used for animal breed identification.
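A minimal sketch of fine-tuning a pretrained Faster R-CNN detector for a new label set (here, six goat breeds plus background) is given below. torchvision is an assumed implementation choice; the paper does not specify the framework or training settings.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 6 + 1  # six goat breeds plus the background class

# Faster R-CNN with a ResNet50-FPN backbone pretrained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box-classification head so the detector predicts goat breeds.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Training then proceeds on (image, {"boxes", "labels"}) pairs using the
# detector's standard multi-task loss; only the new head starts from scratch.
```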


2020 ◽  
Vol 8 (6) ◽  
pp. 1959-1963

Deep learning is one of the major concepts of Artificial Intelligence and Machine Learning and deals with tasks such as object detection. In this work, a new targeted dataset is built from commonly used existing datasets, and two networks, the Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO), are trained on this new dataset. Experimentation strengthens the understanding of these networks, and analysis of the results demonstrates the importance of targeted and inclusive datasets for deep learning. In addition, the networks are optimized for efficient use when integrated with the necessary system or application, and the applications corresponding to these networks are explored. The implementation includes two major concepts. The first is object detection, the process of object recognition and classification. Several training sets are available online for training an object detection model, but the resulting models are not trained to detect the same object across different geographical regions. The second is lane detection and steering suggestion. The model uses the radius of curvature of the road and the distance of the car from both lane lines; using these parameters, it gives steering suggestions such as move right or left by a certain distance. In addition, it gives the distance and speed of surrounding objects such as cars and motorcycles. Finally, the developed model is capable of detecting all the parameters required for integration into a self-driving car and can be used efficiently in India. Using the parameters obtained from the model, the car can navigate through lanes in real time. Its improved performance is due to the fact that it can detect road-specific objects and is specifically trained for Indian roads.
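A minimal sketch of the lane-geometry step described above is given below: each lane line is fitted with a second-order polynomial, from which the radius of curvature and the car's offset from the lane center are derived to drive the steering suggestion. The pixel-to-metre scales and the decision threshold are illustrative assumptions.

```python
import numpy as np

YM_PER_PIX = 30 / 720    # assumed metres per pixel along the image's y axis
XM_PER_PIX = 3.7 / 700   # assumed metres per pixel along the image's x axis

def curvature_radius(lane_xs, lane_ys, y_eval):
    """Radius of curvature (metres) of the fit x = A*y^2 + B*y + C at y_eval.

    lane_xs, lane_ys: numpy arrays of detected lane-line pixel coordinates.
    """
    A, B, _ = np.polyfit(lane_ys * YM_PER_PIX, lane_xs * XM_PER_PIX, 2)
    y = y_eval * YM_PER_PIX
    return (1 + (2 * A * y + B) ** 2) ** 1.5 / abs(2 * A)

def steering_suggestion(left_x, right_x, image_width):
    """Suggest a correction from the car's offset relative to the lane center."""
    lane_center = (left_x + right_x) / 2.0                # at the image bottom
    offset_m = (image_width / 2.0 - lane_center) * XM_PER_PIX
    if abs(offset_m) < 0.1:                               # assumed 10 cm dead-band
        return "keep straight"
    direction = "move left" if offset_m > 0 else "move right"
    return f"{direction} by {abs(offset_m):.2f} m"
```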

