Delimiting cryptic morphological variation among human malaria vector species using convolutional neural networks

2020 ◽  
Vol 14 (12) ◽  
pp. e0008904
Author(s):  
Jannelle Couret ◽  
Danilo C. Moreira ◽  
Davin Bernier ◽  
Aria Mia Loberti ◽  
Ellen M. Dotson ◽  
...  

Deep learning is a powerful approach for distinguishing classes of images, and there is growing interest in applying these methods to delimit species, particularly in the identification of mosquito vectors. Visual identification of mosquito species is the foundation of mosquito-borne disease surveillance and management, but can be hindered by cryptic morphological variation in mosquito vector species complexes such as the malaria-transmitting Anopheles gambiae complex. We sought to apply Convolutional Neural Networks (CNNs) to images of mosquitoes as a proof of concept to determine the feasibility of automatic classification of mosquito sex, genus, species, and strain from whole-body, 2D images. We introduce a library of 1,709 images of adult mosquitoes collected from 16 colonies of mosquito vector species and strains originating from five geographic regions, with four cryptic species not readily distinguishable morphologically even by trained medical entomologists. We present a methodology for image processing, data augmentation, and training and validation of a CNN. Our best CNN configuration achieved high prediction accuracies of 96.96% for species identification and 98.48% for sex. Our results demonstrate that CNNs can delimit species with cryptic morphological variation, two strains of a single species, and specimens from a single colony stored using two different methods. We present visualizations of the CNN feature space and predictions to aid interpretation of our results, and we further discuss how our findings could be applied in future malaria mosquito surveillance.
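As a rough illustration of the kind of pipeline this abstract describes (image processing, data augmentation, and CNN training and validation), the sketch below uses TensorFlow/Keras; the directory layout, image size, architecture, and hyperparameters are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of an augmentation + CNN training pipeline for whole-body
# mosquito images. Paths, image size, and hyperparameters are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "mosquito_images/train", image_size=IMG_SIZE, batch_size=32)  # hypothetical path
val_ds = tf.keras.utils.image_dataset_from_directory(
    "mosquito_images/val", image_size=IMG_SIZE, batch_size=32)    # hypothetical path
num_classes = len(train_ds.class_names)  # e.g., one class per species/strain

# On-the-fly data augmentation: random flips, rotations, and zooms.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    augment,
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)
```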

2020 ◽  
Author(s):  
Jannelle Couret ◽  
Danilo C Moreira ◽  
Davin Bernier ◽  
Aria Mia Loberti ◽  
Ellen M Dotson ◽  
...  

Background: Among mosquitoes that transmit human pathogens there are several species complexes whose members are morphologically indistinguishable yet distinct in their behavior and ecology, particularly as it relates to vectorial capacity. Morphological demarcation has remained largely intractable. Identification using molecular markers is the favored method in applied settings, but prohibitively so: the expense of such methods prevents most mosquito surveillance programs from distinguishing species within species complexes. Here, we apply computational methods in machine learning to delimit vector species within the Anopheles gambiae complex, which transmit human malaria and are notoriously difficult to distinguish based on morphology.
Results: We introduce a library of 1,709 images collected from 15 colonies of mosquito vector species originating from five regions worldwide, with four sibling species from Africa that cannot be readily distinguished even by trained medical entomologists. Among them we include images of two strains of a single species, Anopheles gambiae s.s. We also varied storage method within one species, freshly frozen or dried, to test whether the same cage population could be distinguished by deep learning on this basis. We present a machine vision approach using Convolutional Neural Networks (CNNs) with data augmentation that achieves up to 97% accuracy in distinguishing genera, species, cryptic species, two strains of one species, and storage method. We included three outgroup vector species from the Aedes and Culex genera known to transmit other human pathogens.
Conclusion: We demonstrate that deep learning models can delimit species with cryptic morphological variation and strains that are genetically differentiated but morphologically identical, and can automate delineation of sex and storage method. These findings have broad-reaching implications for the study of morphological variation and for the surveillance and study of human malaria.
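For concreteness, one common way to build such an image classifier with a modest image library is transfer learning from a pretrained backbone; the sketch below is only an illustration of that generic approach in Keras, and the ResNet50 backbone, class count, and hyperparameters are assumptions rather than the configuration reported in this abstract.

```python
# Sketch of a transfer-learning classifier over species/strain labels.
# Backbone, class count, and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 15  # e.g., one label per colony/strain; illustrative only

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained feature extractor

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed as above
```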


2021 ◽  
Vol 11 (15) ◽  
pp. 6721
Author(s):  
Jinyeong Wang ◽  
Sanghwan Lee

As automated surface inspection increases manufacturing productivity in smart factories, the demand for machine vision is rising. Recently, convolutional neural networks (CNNs) have demonstrated outstanding performance and solved many problems in the field of computer vision, and many machine vision systems therefore adopt CNNs for surface defect inspection. In this study, we developed an effective data augmentation method for grayscale images in CNN-based machine vision with mono cameras. Our method can be applied to grayscale industrial images, and we demonstrated outstanding performance on image classification and object detection tasks. The main contributions of this study are as follows: (1) we propose a data augmentation method that can be applied when training CNNs on industrial images taken with mono cameras; (2) we demonstrate that image classification and object detection performance improves when training with industrial image data augmented by the proposed method. With the proposed method, many machine vision problems involving mono cameras can be solved effectively using CNNs.
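The abstract does not spell out the augmentation operations themselves, so the sketch below only illustrates the general idea of augmenting single-channel (mono-camera) images in TensorFlow; the specific transforms and parameter values are assumptions, not the authors' proposed method.

```python
# Generic grayscale augmentation for mono-camera industrial images
# (illustrative only; not the paper's proposed method).
import tensorflow as tf

def augment_grayscale(image, label):
    """image: HxWx1 float32 tensor with values in [0, 1]."""
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    return tf.clip_by_value(image, 0.0, 1.0), label

# Assuming train_ds is a tf.data.Dataset of (image, label) pairs:
# train_ds = train_ds.map(augment_grayscale, num_parallel_calls=tf.data.AUTOTUNE)
```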


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Adam Goodwin ◽  
Sanket Padmanabhan ◽  
Sanchit Hira ◽  
Margaret Glancey ◽  
Monet Slinowsky ◽  
...  

With over 3500 mosquito species described, accurate species identification of the few implicated in disease transmission is critical to mosquito-borne disease mitigation. Yet this task is hindered by limited global taxonomic expertise and by specimen damage common to standard capture methods. Convolutional neural networks (CNNs) are promising with limited sets of species, but image database requirements restrict practical implementation. Using an image database of 2696 specimens from 67 mosquito species, we address the practical open-set problem with a detection algorithm for novel species. Closed-set classification of 16 known species achieved 97.04 ± 0.87% accuracy independently, and 89.07 ± 5.58% when cascaded with novelty detection. Closed-set classification of 39 species produced a macro F1-score of 86.07 ± 1.81%. This demonstrates an accurate, scalable, and practical computer vision solution for identifying wild-caught mosquitoes, suitable for implementation in biosurveillance and targeted vector control programs without the need for extensive image database development for each new target region.
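The cascaded decision described above (novelty detection followed by closed-set classification) can be pictured with a simple confidence-threshold stand-in; the paper's actual novelty-detection algorithm is not reproduced here, and the species list and threshold below are illustrative assumptions.

```python
# Toy cascade: flag a specimen as "novel" when the closed-set CNN is not
# confident, otherwise return the predicted known species.
# The maximum-softmax-probability rule and the threshold are illustrative stand-ins.
import numpy as np

def classify_with_novelty(probs, known_species, threshold=0.8):
    """probs: softmax output of the closed-set classifier for one specimen."""
    probs = np.asarray(probs)
    if probs.max() < threshold:
        return "novel species (outside the known set)"
    return known_species[int(probs.argmax())]

species = ["Ae. aegypti", "An. gambiae", "Cx. quinquefasciatus"]  # illustrative subset
print(classify_with_novelty([0.05, 0.90, 0.05], species))  # -> An. gambiae
print(classify_with_novelty([0.40, 0.35, 0.25], species))  # -> novel species
```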


2021 ◽  
Vol 11 (1) ◽  
pp. 28
Author(s):  
Ivan Lorencin ◽  
Sandi Baressi Šegota ◽  
Nikola Anđelić ◽  
Anđela Blagojević ◽  
Tijana Šušteršić ◽  
...  

COVID-19 represents one of the greatest challenges in modern history. Its impact is most noticeable in the health care system, mostly due to the accelerated and increased influx of patients with a more severe clinical picture, which increases the pressure on health systems. For this reason, there is a drive to automate the processes of diagnosis and treatment. The research presented in this article examined the possibility of classifying a patient's clinical picture from X-ray images using convolutional neural networks. The research was conducted on a dataset of 185 images spanning four classes. Because of the small number of images, a data augmentation procedure was performed. To identify the CNN architecture with the highest classification performance, multiple CNNs were designed. The results show that the best classification performance is achieved with ResNet152. This CNN achieved macro- and micro-averaged AUC values (AUC_macro and AUC_micro) of up to 0.94, suggesting that CNNs can be applied to classifying the clinical picture of COVID-19 patients from lung X-ray images. When higher layers are frozen during training, higher AUC_macro and AUC_micro values are achieved. With ResNet152, AUC_macro and AUC_micro values of up to 0.96 are achieved when all layers except the last 12 are frozen during training.
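A minimal sketch of the layer-freezing setup described above (ResNet152 with all but the last 12 layers frozen) in Keras follows; the input size, optimizer, metrics, and four-class output head are assumptions for illustration.

```python
# Fine-tuning ResNet152 with all layers frozen except the last 12.
# Input size, head, optimizer, and metrics are illustrative assumptions;
# one-hot (categorical) labels are assumed.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.ResNet152(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
for layer in base.layers[:-12]:
    layer.trainable = False  # freeze everything except the last 12 layers

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet.preprocess_input(inputs)
x = base(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(4, activation="softmax")(x)  # four clinical-picture classes

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
```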


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 624
Author(s):  
Stefan Rohrmanstorfer ◽  
Mikhail Komarov ◽  
Felix Mödritscher

With the ever-increasing amount of image data, it has become necessary to automatically search for and process the information contained in these images. Because fashion is captured in images, the fashion sector is an ideal candidate for services and applications built on image classification models. In this article, the state of the art for image classification is analyzed and discussed. Based on this analysis, four different approaches are implemented to extract features from fashion data. For this purpose, a human-worn fashion dataset with 2567 images was created and then significantly enlarged through image augmentation operations. The results show that convolutional neural networks are the undisputed standard for classifying images, and that TensorFlow is the best library to build them. Moreover, through the introduction of dropout layers, data augmentation, and transfer learning, model overfitting was successfully prevented, and the validation accuracy on the created dataset was incrementally improved from an initial 69% to a final 84%. Distinctive apparel such as trousers, shoes, and hats was classified better than other, upper-body garments.
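The three overfitting countermeasures named above (dropout layers, data augmentation, transfer learning) could be combined in TensorFlow/Keras roughly as in the following sketch; the MobileNetV2 backbone, dropout rate, and ten apparel classes are assumptions, not the article's exact setup.

```python
# Combining data augmentation, a pretrained backbone, and dropout to curb
# overfitting on a small fashion dataset. All settings here are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: reuse ImageNet features

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x)  # dropout layer against overfitting
outputs = layers.Dense(10, activation="softmax")(x)  # e.g., 10 apparel categories

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```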


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5192
Author(s):  
Maira Moran ◽  
Marcelo Faria ◽  
Gilson Giraldi ◽  
Luciana Bastos ◽  
Larissa Oliveira ◽  
...  

Dental caries is an extremely common problem in dentistry that affects a significant part of the population. Approximal caries are especially difficult to identify because their position makes clinical analysis difficult. Radiographic evaluation, more specifically bitewing imaging, is mostly used in such cases. However, incorrect interpretations may interfere with the diagnostic process. To aid dentists in caries evaluation, computational methods and tools can be used. In this work, we propose a new method that combines image processing techniques and convolutional neural networks to identify approximal dental caries in bitewing radiographic images and classify them according to lesion severity. For this study, we acquired 112 bitewing radiographs. From these exams, we extracted individual tooth images, applied a data augmentation process, and used the resulting images to train CNN classification models. The tooth images had previously been labeled by experts according to the defined classes. We evaluated classification models based on the Inception and ResNet architectures using three different learning rates: 0.1, 0.01, and 0.001. The training process ran for 2000 iterations, and the best results were achieved by the Inception model with a 0.001 learning rate, whose accuracy on the test set was 73.3%. The results can be considered promising and suggest that the proposed method could be used to assist dentists in evaluating bitewing images and in defining lesion severity and appropriate treatments.
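The learning-rate comparison described above could be organized as in the sketch below, which builds the same InceptionV3 classifier for each rate; the input size, optimizer, epoch count, and four severity classes are assumptions, and the dataset objects are placeholders.

```python
# Comparing learning rates (0.1, 0.01, 0.001) for an InceptionV3-based
# tooth-image classifier. Class count, optimizer, and datasets are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_inception(num_classes=4, learning_rate=0.001):
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=(299, 299, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# for lr in (0.1, 0.01, 0.001):
#     model = build_inception(learning_rate=lr)
#     model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```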

