Deep learning-based system development for black pine bast scale detection

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Wonsub Yun ◽  
J. Praveen Kumar ◽  
Sangjoon Lee ◽  
Dong-Soo Kim ◽  
Byoung-Kwan Cho

The prevention of the loss of agricultural resources caused by pests is an important issue. Technology is advancing, but current farm management methods and equipment have not yet reached the level required for precise pest control, and most still rely on manual management by professional workers. Hence, a pest detection system based on deep learning was developed for automatic pest density measurement. In the proposed system, an image capture device for pheromone traps was developed to address the nonuniform shooting distance and the reflections from the trap's outer vinyl during image capture. Since the black pine bast scale pest is small, each pheromone trap is captured as several subimages, which are used to train the deep learning model. Finally, the subimages are integrated by an image stitching algorithm to form an image of the entire trap. These processes are managed with the developed smartphone application. The deep learning model then detects the pests in the image. The experimental results indicate that the model achieves an F1 score of 0.90 and a mAP of 94.7%, and suggest that a deep learning model based on object detection can be used for quick and automatic detection of pests attracted to pheromone traps.
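The subimage integration step can be illustrated with a minimal sketch (pure Python on toy pixel lists; the paper does not specify its stitcher, so the exact-match overlap search below is an assumption, and real stitchers use feature matching and homographies):

```python
def stitch_pair(left, right, min_overlap=1):
    """Merge two subimages (lists of pixel rows) that share a vertical
    overlap, returning one wider image. Brute-force offset search:
    try the widest overlap first and keep the one whose overlapping
    columns match exactly. Only illustrates the integration step."""
    width = len(left[0])
    for overlap in range(width, min_overlap - 1, -1):
        if all(row_l[width - overlap:] == row_r[:overlap]
               for row_l, row_r in zip(left, right)):
            return [row_l + row_r[overlap:]
                    for row_l, row_r in zip(left, right)]
    # No overlap found: fall back to plain concatenation.
    return [row_l + row_r for row_l, row_r in zip(left, right)]

a = [[1, 2, 3], [4, 5, 6]]
b = [[3, 7], [6, 8]]
print(stitch_pair(a, b))  # [[1, 2, 3, 7], [4, 5, 6, 8]]
```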

Entropy ◽  
2021 ◽  
Vol 23 (3) ◽  
pp. 344
Author(s):  
Jeyaprakash Hemalatha ◽  
S. Abijah Roseline ◽  
Subbiah Geetha ◽  
Seifedine Kadry ◽  
Robertas Damaševičius

Recently, there has been a huge rise in malware growth, which creates a significant security threat to organizations and individuals. Despite the incessant efforts of cybersecurity research to defend against malware threats, malware developers discover new ways to evade these defense techniques. Traditional static and dynamic analysis methods are ineffective in identifying new malware and incur high overhead in terms of memory and time. Typical machine learning approaches that train a classifier on handcrafted features are also not sufficiently potent against these evasive techniques and require considerable effort for feature engineering. Recent malware detectors show performance degradation due to class imbalance in malware datasets. To resolve these challenges, this work adopts a visualization-based method, where malware binaries are depicted as two-dimensional images and classified by a deep learning model. We propose an efficient malware detection system based on deep learning. The system uses a reweighted class-balanced loss function in the final classification layer of the DenseNet model to achieve significant performance improvements in classifying malware by handling imbalanced data issues. Comprehensive experiments performed on four benchmark malware datasets show that the proposed approach can detect new malware samples with higher accuracy (98.23% for the Malimg dataset, 98.46% for the BIG 2015 dataset, 98.21% for the MaleVis dataset, and 89.48% for the unseen Malicia dataset) and reduced false-positive rates when compared with conventional malware mitigation techniques, while maintaining low computational time. The proposed malware detection solution is also reliable and effective against obfuscation attacks.
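The two key ideas, rendering binaries as greyscale images and reweighting the loss per class, can be sketched as follows (the weight formula follows the class-balanced loss of Cui et al., which the "reweighted class-balanced loss" most plausibly refers to; the image width and sample counts are illustrative assumptions):

```python
def bytes_to_image(blob, width=8):
    """Visualisation step: reshape a binary's bytes into rows of a
    fixed-width greyscale image (zero-padded last row)."""
    rows = [list(blob[i:i + width]) for i in range(0, len(blob), width)]
    if rows and len(rows[-1]) < width:
        rows[-1] += [0] * (width - len(rows[-1]))
    return rows

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights (1 - beta) / (1 - beta**n_y): rare malware
    families receive larger weights. Normalised so the weights sum
    to the number of classes."""
    raw = [(1.0 - beta) / (1.0 - beta ** n) for n in samples_per_class]
    scale = len(raw) / sum(raw)
    return [w * scale for w in raw]

print(class_balanced_weights([1000, 10]))  # the rare class is up-weighted
```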


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262349
Author(s):  
Esraa A. Mohamed ◽  
Essam A. Rashed ◽  
Tarek Gaber ◽  
Omar Karam

Breast cancer is one of the most common diseases among women worldwide and is considered one of the leading causes of death among women; therefore, early detection is necessary to save lives. Thermography is an effective diagnostic technique used for breast cancer detection with the help of infrared technology. In this paper, we propose a fully automatic breast cancer detection system. First, a U-Net network is used to automatically extract and isolate the breast area from the rest of the body, which acts as noise for the breast cancer detection model. Second, we propose a two-class deep learning model, trained from scratch, for classifying normal and abnormal breast tissue in thermal images. It also extracts further characteristics from the dataset that help train the network and improve the efficiency of the classification process. The proposed system was evaluated on real data (the benchmark DMR-IR database) and achieved an accuracy of 99.33%, sensitivity of 100% and specificity of 98.67%. The proposed system is expected to be a helpful tool for physicians in clinical use.
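The two-stage pipeline (mask out the background, then classify) and the reported metrics can be sketched as follows (pixel-level masking is an illustrative stand-in for the U-Net stage; the confusion counts below are hypothetical numbers chosen only to be consistent with the reported rates):

```python
def apply_mask(image, mask):
    """Stage 1: zero out every pixel outside the segmented breast area
    so the classifier never sees background 'noise'."""
    return [[px if m else 0 for px, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

def diagnostic_metrics(tp, tn, fp, fn):
    """Metrics the paper reports: sensitivity = TP/(TP+FN),
    specificity = TN/(TN+FP), accuracy over all cases."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts consistent with 99.33% / 100% / 98.67%:
print(diagnostic_metrics(tp=75, tn=74, fp=1, fn=0))
```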


BMJ Open ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. e036423
Author(s):  
Zhigang Song ◽  
Chunkai Yu ◽  
Shuangmei Zou ◽  
Wenmiao Wang ◽  
Yong Huang ◽  
...  

Objectives: The microscopic evaluation of slides has gradually moved towards all-digital in recent years, opening the possibility of computer-aided diagnosis. It is worthwhile to know the similarities between deep learning models and pathologists before putting them into practical scenarios. The simple criteria of colorectal adenoma diagnosis make it a perfect testbed for this study. Design: The deep learning model was trained on 177 accurately labelled training slides (156 with adenoma). The detailed labelling was performed on a self-developed, iPad-based annotation system. We built the model on DeepLab v2 with ResNet-34. The model's performance was tested on 194 test slides and compared with five pathologists. Furthermore, the generalisation ability of the model was tested on an extra 168 slides (111 with adenoma) collected from two other hospitals. Results: The deep learning model achieved an area under the curve of 0.92 and obtained a slide-level accuracy of over 90% on slides from the two other hospitals. Its performance was on par with that of experienced pathologists, exceeding the average pathologist. By investigating the feature maps and the cases misdiagnosed by the model, we found concordance in the diagnostic thinking process between the deep learning model and pathologists. Conclusions: The deep learning model for colorectal adenoma diagnosis behaves quite similarly to pathologists: it is on par with their performance, makes similar mistakes and learns rational reasoning logic. Meanwhile, it obtains high accuracy on slides collected from different hospitals with significant staining configuration variations.
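Turning the model's patch-level output into the slide-level accuracy the paper reports requires an aggregation rule; the rule below (flag a slide when enough patches are confidently positive) is an assumption for illustration, not the paper's stated method:

```python
def slide_level_call(patch_scores, threshold=0.5, min_fraction=0.01):
    """Call a whole slide adenoma-positive when at least `min_fraction`
    of its patches score above `threshold`. Both parameters are
    hypothetical."""
    positive = sum(1 for s in patch_scores if s >= threshold)
    return positive / len(patch_scores) >= min_fraction

print(slide_level_call([0.9, 0.8] + [0.1] * 98))  # True (2% positive patches)
print(slide_level_call([0.1] * 100))              # False
```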


2021 ◽  
Vol 7 ◽  
pp. e551
Author(s):  
Nihad Karim Chowdhury ◽  
Muhammad Ashad Kabir ◽  
Md. Muhtadir Rahman ◽  
Noortaz Rezoana

The goal of this research is to develop and implement a highly effective deep learning model for detecting COVID-19. To achieve this goal, we propose an ensemble of Convolutional Neural Networks (CNNs) based on EfficientNet, named ECOVNet, to detect COVID-19 from chest X-rays. To make the proposed model more robust, we have used one of the largest open-access chest X-ray data sets, named COVIDx, containing three classes—COVID-19, normal, and pneumonia. For feature extraction, we have applied an effective CNN structure, namely EfficientNet, with ImageNet pre-training weights. The generated features are transferred into custom fine-tuned top layers, followed by a set of model snapshots. The predictions of the model snapshots (which are created during a single training run) are consolidated through two ensemble strategies, i.e., hard ensemble and soft ensemble, to enhance classification performance. In addition, a visualization technique is incorporated to highlight areas that distinguish classes, thereby enhancing the understanding of primal components related to COVID-19. The results of our empirical evaluations show that the proposed ECOVNet model outperforms state-of-the-art approaches and significantly improves detection performance, with 100% recall for COVID-19 and an overall accuracy of 96.07%. We believe that ECOVNet can enhance the detection of COVID-19 disease and thus underpin a fully automated and efficacious COVID-19 detection system.
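The two snapshot-combination strategies can be sketched directly; the probabilities below are made-up three-class outputs (COVID-19, normal, pneumonia) chosen to show that the two strategies can disagree:

```python
from collections import Counter

def soft_ensemble(prob_lists):
    """Average class probabilities across snapshots, then argmax."""
    n = len(prob_lists)
    avg = [sum(p[c] for p in prob_lists) / n
           for c in range(len(prob_lists[0]))]
    return max(range(len(avg)), key=avg.__getitem__)

def hard_ensemble(prob_lists):
    """Majority vote over each snapshot's argmax prediction."""
    votes = [max(range(len(p)), key=p.__getitem__) for p in prob_lists]
    return Counter(votes).most_common(1)[0][0]

snaps = [[0.9, 0.05, 0.05], [0.4, 0.6, 0.0], [0.4, 0.6, 0.0]]
print(soft_ensemble(snaps))  # 0 (one confident snapshot dominates the average)
print(hard_ensemble(snaps))  # 1 (two of three votes)
```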


Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1151 ◽  
Author(s):  
Wooyeon Jo ◽  
Sungjin Kim ◽  
Changhoon Lee ◽  
Taeshik Shon

The proliferation of various connected platforms, including the Internet of Things, industrial control systems (ICSs), connected cars, and in-vehicle networks, has resulted in the simultaneous use of multiple protocols and devices. The chaotic situations caused by different protocols and various types of devices in heterogeneous networks, each implemented differently by its vendor, make it difficult to adopt a flexible security solution such as those in recent deep learning-based intrusion detection system (IDS) studies. These studies optimized the deep learning model for their environment to improve performance, but the basic principle of the model was not changed, so such a system can be called a next-generation IDS whose model imposes few or no environment-specific requirements. Some studies have proposed IDSs based on unsupervised learning, which does not require labeled data. However, not using available assets, such as network packet data, is a waste of resources. If a security solution considered the role and importance of the devices constituting the network and the security areas of the protocol standards, as judged by experts, those assets could be well used, but the solution would no longer be flexible. Most deep learning-based IDS studies have used the recurrent neural network (RNN), a supervised learning model, because the RNN, especially when long short-term memory (LSTM) is incorporated, is better configured to reflect the flow of the packet data stream over time and thus performs better than other supervised models such as the convolutional neural network (CNN). However, if proper preprocessing lets the input data drive the CNN's kernel to sufficiently reflect the network's characteristics, the CNN could outperform other deep learning models in a network IDS.
Hence, we propose the first preprocessing method, called “direct”, for network IDS that can use the characteristics of the kernel by using the minimum protocol information, field size, and offset. In addition to direct, we propose two more preprocessing techniques called “weighted” and “compressed”. Each requires additional network information; therefore, direct conversion was compared with related studies. Including direct, the proposed preprocessing methods are based on field-to-pixel philosophy, which can reflect the advantages of CNN by extracting the convolutional features of each pixel. Direct is the most intuitive method of applying field-to-pixel conversion to reflect an image’s convolutional characteristics in the CNN. Weighted and compressed are conversion methods used to evaluate the direct method. Consequently, the IDS constructed using a CNN with the proposed direct preprocessing method demonstrated meaningful performance in the NSL-KDD dataset.
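The "direct" field-to-pixel idea, which uses only each field's offset and size from the protocol definition, can be sketched as follows (the field table and image width are hypothetical; the paper's exact pixel layout is not reproduced here):

```python
def direct_convert(packet, fields, width=8):
    """Copy each protocol field (given as (offset, size) in bytes) into
    a fixed-width pixel grid, so field boundaries become spatial
    structure that a CNN kernel can pick up."""
    pixels = []
    for offset, size in fields:
        pixels.extend(packet[offset:offset + size])
    pixels += [0] * ((-len(pixels)) % width)  # pad to a full rectangle
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

packet = list(range(20))
fields = [(0, 2), (4, 4), (12, 2)]          # hypothetical header fields
print(direct_convert(packet, fields))       # [[0, 1, 4, 5, 6, 7, 12, 13]]
```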


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5731 ◽  
Author(s):  
Xiu-Zhi Chen ◽  
Chieh-Min Chang ◽  
Chao-Wei Yu ◽  
Yen-Lin Chen

Numerous vehicle detection methods have been proposed to obtain trustworthy traffic data for the development of intelligent traffic systems. Most of these methods perform sufficiently well under common scenarios, such as sunny or cloudy days; however, detection accuracy drastically decreases under various bad weather conditions, such as rainy days or days with glare, which commonly occurs during sunset. This study proposes a vehicle detection system with a visibility complementation module that improves detection accuracy under various bad weather conditions. Furthermore, the proposed system can be implemented without retraining the deep learning models for object detection under different weather conditions. The complementation of visibility was obtained through a dark channel prior and a convolutional encoder-decoder deep learning network with dual residual blocks to resolve the different effects of different bad weather conditions. We validated our system on multiple surveillance videos by detecting vehicles with the You Only Look Once (YOLOv3) deep learning model and demonstrated that the processing speed of our system reaches 30 fps on average; moreover, accuracy increased not only by nearly 5% under low-contrast scene conditions but also by 50% under rainy scene conditions. The results of our demonstrations indicate that our approach is able to detect vehicles under various bad weather conditions without the need to retrain a new model.
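The dark channel prior mentioned above has a compact definition: for every pixel, take the minimum colour channel within a local window; haze-free regions have near-zero dark channels, hazy ones do not. A pure-Python sketch on a tiny RGB grid (real implementations run on full frames with NumPy/OpenCV; the window size is a parameter):

```python
def dark_channel(image, patch=3):
    """image: grid of (r, g, b) tuples. Returns the per-pixel minimum
    channel value over a patch x patch window clipped at the borders."""
    h, w, r = len(image), len(image[0]), patch // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            row.append(min(
                min(image[yy][xx])
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1))))
        out.append(row)
    return out

img = [[(255, 255, 255), (10, 20, 30)],
       [(40, 50, 60), (0, 5, 9)]]
print(dark_channel(img, patch=1))  # [[255, 10], [40, 0]]
print(dark_channel(img, patch=3))  # [[0, 0], [0, 0]]
```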


The high-paced rise of glaucoma, an irreversible eye disease that deteriorates human vision, has prompted academia and industry to develop novel and robust computer-aided diagnosis (CAD) systems for early glaucomatous eye detection. Glaucoma's growth is rooted in structural alterations of the retina, and it is essential for ophthalmologists to identify it at an early stage to stop its progression. Fundoscopy is one of the biomedical imaging techniques used to analyze the internal structure of the retina. Recently, numerous efforts have been made to exploit spatial-temporal features, including morphological values of the optic disc (OD), optic cup (OC), and neuro-retinal rim (NRR), to perform glaucoma detection in fundus images. However, issues such as suitable preprocessing, precise region-of-interest segmentation, post-segmentation, and the lack of generalized thresholds limit the efficacy of most existing approaches; furthermore, the optimal segmentation of the OD and OC and the removal of nerves from them are often tedious and demand more efficient solutions. These approaches also turn out to be cumulatively computationally complex and time-consuming. As a potential alternative, deep learning techniques have gained widespread attention, especially for image analysis and vision tasks. With this motive, the authors propose GlaucoNet, a novel Convolutional Stacked Auto-Encoder (CSAE)-assisted deep learning model for glaucoma detection and classification. Unlike classical methods, GlaucoNet applies a stacked auto-encoder with a hierarchical CNN structure to perform deep feature extraction and learning.
To adapt to the complex nature of the data and its large feature space, GlaucoNet was designed with a convolutional (CONV) layer, a max-pool (MP) layer, and two fully connected (FC) layers: the first stage performs feature extraction and learning, while the second performs feature selection, followed by a reduction of the spatial resolution of each feature map to avoid a large number of parameters and high computational complexity. To avoid saturation, a dropout of 0.5 was applied. MATLAB-based simulation results on the DRISHTI-GS and DRION-DB datasets confirm that the proposed GlaucoNet model outperforms other state-of-the-art neural network-based approaches in terms of accuracy, recall, precision, F-measure, and balanced accuracy.
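The MP layer's role, cutting spatial resolution before the FC layers to keep parameter counts down, can be shown with a minimal max-pooling sketch (window size 2, no padding; purely illustrative of the operation, not GlaucoNet's exact configuration):

```python
def max_pool2d(fmap, k=2):
    """Keep the maximum of each non-overlapping k x k window, shrinking
    an H x W feature map to (H // k) x (W // k)."""
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[y + dy][x + dx] for dy in range(k) for dx in range(k))
             for x in range(0, w - k + 1, k)]
            for y in range(0, h - k + 1, k)]

fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(max_pool2d(fmap))  # [[6, 8], [14, 16]]
```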


Author(s):  
Rajeshvaree Ravindra Karmarkar ◽  
Prof. V. N. Honmane

As object recognition technology has developed recently, various technologies have been applied to autonomous vehicles, robots, and industrial facilities. However, the benefits of these technologies are not reaching the visually impaired, who need them the most. This paper proposes an object detection system for the blind using deep learning technologies. Furthermore, a voice guidance technique is used to inform sight-impaired persons of the location of objects. The object recognition deep learning model utilizes the You Only Look Once (YOLO) algorithm, and a voice announcement is synthesized using text-to-speech (TTS) to make it easier for the blind to get information about objects. As a result, it implements an efficient object-detection system that helps the blind find objects in a specific space without help from others, and the system is analyzed through experiments to verify its performance.
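The detection-to-speech step can be sketched as a small formatting function (the left/centre/right partition of the frame and the phrase format are illustrative assumptions; the paper itself uses YOLO bounding boxes and a TTS engine):

```python
def announce(detections, frame_width=640):
    """detections: (label, x_centre) pairs from the object detector.
    Returns the sentence that would be handed to the TTS engine."""
    phrases = []
    for label, cx in detections:
        if cx < frame_width / 3:
            side = "on your left"
        elif cx < 2 * frame_width / 3:
            side = "ahead"
        else:
            side = "on your right"
        phrases.append(f"{label} {side}")
    return ", ".join(phrases)

print(announce([("chair", 100), ("person", 320), ("door", 560)]))
# chair on your left, person ahead, door on your right
```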


Author(s):  
Pranavi Pendyala ◽  
Aviva Munshi ◽  
Anoushka Mehra

Detecting a driver's drowsiness in a consistent and confident manner is a difficult job because it necessitates careful observation of facial behaviour such as eye closure, blinking, and yawning. It is much more difficult when the driver is wearing sunglasses or a scarf, as seen in the data collection for this competition. A drowsy person makes a variety of facial gestures, such as quick and repetitive blinking, head shaking, and frequent yawning. Drivers' drowsiness levels are commonly determined by assessing these abnormal behaviours using computerised, non-intrusive behavioural approaches; computer vision techniques can track a driver's sleepiness in a non-invasive manner. The aim of this paper is to assess the current behaviour of the driver's eyes, as captured by the camera, so that the driver's drowsiness can be checked. We present a drowsiness detection framework that uses Python, OpenCV, and Keras to notify the driver when he feels sleepy. We use OpenCV to gather images from a webcam and feed them into a deep learning model that classifies whether the person's eyes are "Open" or "Closed".
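Once the frame classifier labels each webcam frame "Open" or "Closed", the alarm itself is just a run-length check; the sketch below uses a hypothetical threshold of 15 consecutive closed frames (roughly half a second at 30 fps), since the article does not fix one:

```python
def drowsiness_alarm(frame_states, closed_limit=15):
    """Return the indices of frames at which the alarm should sound:
    every frame from the moment the eyes have stayed 'Closed' for
    `closed_limit` consecutive frames."""
    consecutive, alerts = 0, []
    for i, state in enumerate(frame_states):
        consecutive = consecutive + 1 if state == "Closed" else 0
        if consecutive >= closed_limit:
            alerts.append(i)
    return alerts

states = ["Open"] * 5 + ["Closed"] * 20
print(drowsiness_alarm(states))  # alarm from frame 19 onwards
```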

