Implementation of a deep learning model for automated classification of Aedes aegypti (Linnaeus) and Aedes albopictus (Skuse) in real time

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Song-Quan Ong ◽  
Hamdan Ahmad ◽  
Gomesh Nair ◽  
Pradeep Isawasan ◽  
Abdul Hafiz Ab Majid

Classification of Aedes aegypti (Linnaeus) and Aedes albopictus (Skuse) by humans remains challenging. We propose a highly accessible method for developing a deep learning (DL) model and deploying it for mosquito image classification using hardware that could regulate the development process. In particular, we constructed a dataset of 4120 images of Aedes mosquitoes older than 12 days, by which age the common morphological features used for identification had disappeared, and we illustrate how to set up supervised deep convolutional neural networks (DCNNs) with hyperparameter adjustment. The model was first deployed externally, in real time, on three different generations of mosquitoes, and its accuracy was compared with human expert performance. Our results showed that both the learning rate and the number of epochs significantly affected accuracy; the best-performing hyperparameters achieved an accuracy of more than 98% in classifying mosquitoes, which was not significantly different from human-level performance. We demonstrated the feasibility of constructing a DCNN model and deploying it externally on mosquitoes in real time.
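The hyperparameter adjustment described above can be sketched as a simple grid search over the two settings the study found significant, learning rate and number of epochs. The candidate values and the `train_and_score` stand-in below are hypothetical; a real run would train the DCNN on the mosquito images and return validation accuracy.

```python
from itertools import product

def train_and_score(lr, epochs):
    # Placeholder scoring function: a real implementation would train the
    # DCNN with these hyperparameters and return held-out accuracy.
    return 0.90 + 0.04 * (lr == 1e-4) + 0.04 * (epochs == 30)

learning_rates = [1e-3, 1e-4, 1e-5]   # assumed candidate values
epoch_counts = [10, 20, 30]           # assumed candidate values

# Evaluate every (learning rate, epochs) combination and keep the best.
best = max(product(learning_rates, epoch_counts),
           key=lambda cfg: train_and_score(*cfg))
print(best)
```

With the placeholder scores above, the search selects the pair scoring highest; in practice each evaluation is a full training run, so the grid is kept small.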

2021 ◽  
Vol 11 (22) ◽  
pp. 10528
Author(s):  
Khin Yadanar Win ◽  
Noppadol Maneerat ◽  
Syna Sreng ◽  
Kazuhiko Hamamoto

The ongoing COVID-19 pandemic has caused devastating effects on humanity worldwide. With practical advantages and wide accessibility, chest X-rays (CXRs) play vital roles in the diagnosis of COVID-19 and the evaluation of the extent of lung damage caused by the virus. This study aimed to leverage deep-learning-based methods for the automated classification of COVID-19 from normal and viral pneumonia on CXRs, and for the identification of indicative regions of COVID-19 biomarkers. Initially, we preprocessed and segmented the lung regions using the DeepLabV3+ method, and subsequently cropped the lung regions. The cropped lung regions were used as inputs to several deep convolutional neural networks (CNNs) for the prediction of COVID-19. The dataset was highly unbalanced; the vast majority were normal images, with a small number of COVID-19 and pneumonia images. To remedy the unbalanced distribution and to avoid biased classification results, we applied five different approaches: (i) balancing the classes using a weighted loss; (ii) image augmentation to add more images to minority classes; (iii) undersampling of majority classes; (iv) oversampling of minority classes; and (v) a hybrid resampling approach combining oversampling and undersampling. The best-performing methods from each approach were combined into an ensemble classifier using two voting strategies. Finally, we used the saliency maps of the CNNs to identify the indicative regions of COVID-19 biomarkers, which are deemed useful for interpretability. The algorithms were evaluated using the largest publicly available COVID-19 dataset. An ensemble of the top five CNNs with image augmentation achieved the highest accuracy of 99.23% and area under the curve (AUC) of 99.97%, surpassing the results of previous studies.
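Approach (i), the class-weighted loss, is typically implemented by weighting each class inversely to its frequency, so the minority COVID-19 and pneumonia images contribute as much to the loss as the majority normal images. A minimal sketch, with illustrative counts rather than the actual dataset sizes:

```python
from collections import Counter

# Hypothetical class distribution: mostly normal, few COVID-19/pneumonia.
labels = ["normal"] * 800 + ["covid"] * 50 + ["pneumonia"] * 150

counts = Counter(labels)
n_samples, n_classes = len(labels), len(counts)

# Inverse-frequency ("balanced") weights: n / (n_classes * count_per_class).
class_weights = {c: n_samples / (n_classes * k) for c, k in counts.items()}
print(class_weights)
```

The rarer a class, the larger its weight; these weights would then scale each sample's term in the training loss.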


2020 ◽  
pp. 2361-2370
Author(s):  
Elham Mohammed Thabit A. ALSAADI ◽  
Nidhal K. El Abbadi

Detection and classification of animals is a major challenge facing researchers. There are five classes of vertebrate animals, namely mammals, amphibians, reptiles, birds, and fish, and each class includes many thousands of different animals. In this paper, we propose a new model based on training deep convolutional neural networks (CNNs) to detect and classify two classes of vertebrate animals (mammals and reptiles). Deep CNNs are the state of the art in image recognition and are known for their high learning capacity, accuracy, and robustness to typical object recognition challenges. The dataset for this system contains 6000 images, including 4800 images for training. The proposed algorithm was tested using the remaining 1200 images. The system's prediction accuracy for the target object was 97.5%.
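The evaluation step reduces to a simple ratio: the reported 97.5% corresponds to 1170 of the 1200 held-out images classified correctly. A minimal sketch of that computation:

```python
def accuracy(predictions, truths):
    # Fraction of predictions that match the ground-truth labels.
    correct = sum(p == t for p, t in zip(predictions, truths))
    return correct / len(truths)

# Illustrative stand-in for the 1200-image test set:
# 1170 correct predictions and 30 errors give 97.5% accuracy.
truths = ["mammal"] * 1200
predictions = ["mammal"] * 1170 + ["reptile"] * 30
print(accuracy(predictions, truths))  # 0.975
```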


2006 ◽  
Vol 14 (7S_Part_19) ◽  
pp. P1067-P1068
Author(s):  
Pradeep Anand Ravindranath ◽  
Rema Raman ◽  
Tiffany W. Chow ◽  
Michael S. Rafii ◽  
Paul S. Aisen ◽  
...  

2017 ◽  
Vol 2017 ◽  
pp. 1-6 ◽  
Author(s):  
Atsushi Teramoto ◽  
Tetsuya Tsukamoto ◽  
Yuka Kiriyama ◽  
Hiroshi Fujita

Lung cancer is a leading cause of death worldwide. Currently, differential diagnosis of lung cancer requires accurate classification of the cancer type (adenocarcinoma, squamous cell carcinoma, or small cell carcinoma). However, improving the accuracy and stability of diagnosis is challenging. In this study, we developed an automated classification scheme for lung cancers in microscopic images using a deep convolutional neural network (DCNN), a major deep learning technique. The DCNN used for classification consists of three convolutional layers, three pooling layers, and two fully connected layers. In the evaluation experiments, the DCNN was trained on our original database using a graphics processing unit. Microscopic images were first cropped and resampled to a resolution of 256 × 256 pixels, and, to prevent overfitting, the collected images were augmented via rotation, flipping, and filtering. The probabilities of the three cancer types were estimated using the developed scheme, and its classification accuracy was evaluated using threefold cross validation. Approximately 71% of the images were classified correctly, which is on par with the accuracy of cytotechnologists and pathologists. Thus, the developed scheme is useful for the classification of lung cancers in microscopic images.
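The augmentation step can be sketched with plain list operations: each image is expanded into rotated and flipped copies before training. A tiny 2×2 "image" stands in for the 256 × 256 crops, and the filtering variant is omitted for brevity.

```python
def rotate90(img):
    # Rotate a matrix (list of rows) 90 degrees clockwise.
    return [list(row) for row in zip(*img[::-1])]

def flip_h(img):
    # Mirror each row left-to-right (horizontal flip).
    return [row[::-1] for row in img]

img = [[1, 2],
       [3, 4]]

# One original image yields several augmented training samples.
augmented = [img, rotate90(img), flip_h(img)]
print(augmented[1])  # [[3, 1], [4, 2]]
```

Repeating the rotation gives 180° and 270° variants, multiplying the effective training-set size without collecting new slides.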


Author(s):  
Naveen Bokka ◽  
JAY KARHADE ◽  
Parikshit Sahatiya

Respiration rate is a vital parameter that is useful for early identification of diseases. In this context, various types of devices have been fabricated and developed to monitor different...


2020 ◽  
Vol 12 (21) ◽  
pp. 9177
Author(s):  
Vishal Mandal ◽  
Abdul Rashid Mussah ◽  
Peng Jin ◽  
Yaw Adu-Gyamfi

Manual traffic surveillance can be a daunting task, as Traffic Management Centers operate a myriad of cameras installed over a network. Injecting some level of automation could lighten the workload of human operators performing manual surveillance and facilitate proactive decisions that would reduce the impact of incidents and recurring congestion on roadways. This article presents a novel approach to automatically monitor real-time traffic footage using deep convolutional neural networks and a stand-alone graphical user interface. The authors describe the results obtained while developing models that serve as an integrated framework for an artificial-intelligence-enabled traffic monitoring system. The proposed system deploys several state-of-the-art deep learning algorithms to automate different traffic monitoring needs. Taking advantage of a large database of annotated video surveillance data, deep-learning-based models are trained to detect queues, track stationary vehicles, and tabulate vehicle counts. A pixel-level segmentation approach is applied to detect traffic queues and predict their severity. Real-time object detection algorithms coupled with different tracking systems are deployed to automatically detect stranded vehicles as well as perform vehicular counts. At each stage of development, experimental results are presented to demonstrate the effectiveness of the proposed system. Overall, the results demonstrate that the proposed framework performs satisfactorily under varied conditions without being severely affected by environmental hazards such as blurry camera views, low illumination, rain, or snow.
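One common building block when coupling detectors with trackers, as described above, is intersection-over-union (IoU): a new detection is associated with an existing vehicle track when their boxes overlap sufficiently. This is an illustrative sketch of that metric, not the article's specific tracking algorithm; boxes are `(x1, y1, x2, y2)` corner coordinates.

```python
def iou(a, b):
    # Corners of the intersection rectangle.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x5 region: IoU = 25 / 175.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

A detection whose IoU with a track exceeds a threshold (often around 0.5) is assigned to that track; otherwise a new track is started.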


2011 ◽  
Vol 7 (S285) ◽  
pp. 355-357 ◽  
Author(s):  
Ashish A. Mahabal ◽  
C. Donalek ◽  
S. G. Djorgovski ◽  
A. J. Drake ◽  
M. J. Graham ◽  
...  

An automated rapid classification of the transient events detected in modern synoptic sky surveys is essential for their scientific utility and for effective follow-up when resources are scarce. This problem will grow by orders of magnitude with the next generation of surveys. We are exploring a variety of novel automated classification techniques, mostly Bayesian, to respond to these challenges, using the ongoing CRTS sky survey as a testbed. We briefly describe some of the methods used.
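The Bayesian flavor of such classifiers can be illustrated with a single application of Bayes' rule: given class priors and the per-class likelihood of an observed feature, compute a posterior probability for each transient type. The classes and numbers below are invented for illustration, not taken from the CRTS pipeline.

```python
def posteriors(priors, likelihoods):
    # Bayes' rule: P(class | feature) proportional to P(class) * P(feature | class),
    # normalized so the posteriors sum to one.
    joint = {c: priors[c] * likelihoods[c] for c in priors}
    evidence = sum(joint.values())
    return {c: p / evidence for c, p in joint.items()}

priors = {"supernova": 0.7, "cv": 0.2, "blazar": 0.1}        # hypothetical
likelihoods = {"supernova": 0.1, "cv": 0.6, "blazar": 0.3}   # P(feature | class)

post = posteriors(priors, likelihoods)
print(max(post, key=post.get))  # most probable class given the observation
```

Even with a strong prior toward supernovae, a feature far more likely under the cataclysmic-variable hypothesis shifts the posterior accordingly.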

