Deep Learning Algorithms Improve Automated Identification of Chagas Disease Vectors

2019 ◽  
Vol 56 (5) ◽  
pp. 1404-1410 ◽  
Author(s):  
Ali Khalighifar ◽  
Ed Komp ◽  
Janine M Ramsey ◽  
Rodrigo Gurgel-Gonçalves ◽  
A Townsend Peterson

Abstract Vector-borne Chagas disease is endemic to the Americas and imposes significant economic and social burdens on public health. In a previous contribution, we presented an automated identification system that was able to discriminate among 12 Mexican and 39 Brazilian triatomine (Hemiptera: Reduviidae) species from digital images. To explore the same data more deeply using machine-learning approaches, hoping for improvements in classification, we employed TensorFlow, an open-source machine-learning platform, to implement a deep-learning algorithm. We trained the algorithm on 405 images of Mexican triatomine species and 1,584 images of Brazilian triatomine species. Our system achieved correct identification rates of 83.0% across all Mexican species and 86.7% across all Brazilian species, an improvement over comparable rates from statistical classifiers (80.3% and 83.9%, respectively). Incorporating distributional information to reduce the number of candidate species in analyses improved identification rates to 95.8% for Mexican species and 98.9% for Brazilian species. Given the ‘taxonomic impediment’ and difficulties in providing the entomological expertise necessary to control such diseases, automating the identification process offers a potential partial solution to crucial challenges.
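
To make the approach concrete, below is a minimal sketch of the kind of TensorFlow transfer-learning image classifier the abstract describes. The directory layout, image size, InceptionV3 base, and training settings are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a TensorFlow/Keras transfer-learning classifier for
# species images. Paths, base model, and hyperparameters are assumptions.
import tensorflow as tf

NUM_SPECIES = 12          # e.g., the Mexican triatomine set
IMG_SIZE = (299, 299)     # InceptionV3's expected input size

train_ds = tf.keras.utils.image_dataset_from_directory(
    "triatomine_images/train",   # hypothetical path: one subfolder per species
    image_size=IMG_SIZE,
    batch_size=32,
)

base = tf.keras.applications.InceptionV3(include_top=False, pooling="avg",
                                         weights="imagenet")
base.trainable = False    # retrain only the classification head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```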

2018 ◽  
Author(s):  
Sebastien Villon ◽  
David Mouillot ◽  
Marc Chaumont ◽  
Emily S Darling ◽  
Gérard Subsol ◽  
...  

Identifying and counting individual fish in videos is a crucial task for cost-effective monitoring of marine biodiversity, but it remains difficult and time-consuming. In this paper, we present a method to assist the automated identification of fish species in underwater images, and we compare our algorithm's performance to human ability in terms of speed and accuracy. We first tested the performance of a convolutional neural network trained with different photographic databases, while accounting for different post-processing decision rules, to identify 20 fish species. We then compared the species-identification performance of our best model with human performance on a test database of 1,197 pictures representing nine species. The best network was the one trained with 900,000 pictures of whole fish and of their parts and environment (e.g., reef bottom or water). Its rate of correct fish identification was 94.9%, greater than the rate of correct identification by humans (89.3%). The network was also able to identify fish partially hidden behind corals or behind other fish, and was more effective than humans at identifying fish in the smallest or blurriest pictures, while humans were better at recognizing fish in unusual positions (e.g., a twisted body). On average, each identification by our best algorithm took 0.06 seconds on common hardware. Deep-learning methods can thus perform efficient fish identification in underwater pictures, paving the way for new video-based protocols for monitoring fish biodiversity cheaply and effectively.
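
As an illustration of the post-processing decision rules mentioned above, here is a hedged sketch of one simple rule: accept a species label only if the network's top softmax probability clears a threshold, otherwise defer to a human. The threshold value, species names, and mock probability vector are assumptions of this sketch, not the paper's tested rules.

```python
# Confidence-threshold decision rule applied to a CNN's softmax output.
import numpy as np

def decide(probs, species, threshold=0.9):
    """probs: softmax output of the CNN for one image;
    species: class labels in the same order as probs."""
    top = int(np.argmax(probs))
    if probs[top] >= threshold:
        return species[top]
    return "uncertain - refer to human expert"

# Example with a mock three-species output vector:
print(decide(np.array([0.05, 0.92, 0.03]),
             ["grouper", "parrotfish", "wrasse"]))   # -> parrotfish
```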


Author(s):  
K. Kranthi Kumar ◽  
Y. Kasiviswanadham ◽  
D.V.S.N.V. Indira ◽  
Pushpa Priyanka Palesetti ◽  
Ch.V. Bhargavi

Author(s):  
Yang Xu ◽  
Priyojit Das ◽  
Rachel Patton McCord

Abstract Motivation Deep learning approaches have empowered single-cell omics data analysis in many ways and generated new insights from complex cellular systems. As there is an increasing need for single-cell omics data to be integrated across sources, types, and features of data, the challenges of integrating single-cell omics data are rising. Here, we present SMILE (Single-cell Mutual Information Learning), an unsupervised deep learning algorithm that learns discriminative representations for single-cell data by maximizing mutual information. Results Using a unique cell-pairing design, SMILE successfully integrates multi-source single-cell transcriptome data, removing batch effects and projecting similar cell types, even from different tissues, into a shared space. SMILE can also integrate data from two or more modalities, such as joint profiling technologies using single-cell ATAC-seq, RNA-seq, DNA methylation, Hi-C, and ChIP data. When paired cells are known, SMILE can integrate data with unmatched features, such as genes for RNA-seq and genome-wide peaks for ATAC-seq. Integrated representations learned from joint profiling technologies can then be used as a framework for comparing independent single-source data. Supplementary information Supplementary data are available at Bioinformatics online. The source code of SMILE, including analyses of key results in the study, can be found at: https://github.com/rpmccordlab/SMILE.
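
To illustrate the "maximize mutual information between paired cells" idea, here is a sketch of an InfoNCE-style objective, a standard lower bound on mutual information used in contrastive representation learning. This is not SMILE's exact loss (see the repository above for that); the embeddings and temperature here are toy assumptions.

```python
# InfoNCE-style contrastive objective over paired-cell embeddings:
# row i of each matrix comes from the same cell (the positive pair).
import numpy as np

def info_nce(z_rna, z_atac, tau=0.1):
    """z_rna, z_atac: (n_cells, dim) embeddings from two modalities."""
    # Cosine similarities between every RNA/ATAC embedding pair.
    a = z_rna / np.linalg.norm(z_rna, axis=1, keepdims=True)
    b = z_atac / np.linalg.norm(z_atac, axis=1, keepdims=True)
    logits = a @ b.T / tau                                    # (n, n)
    # Cross-entropy where the matching cell is the correct "class".
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 32))
# Nearly aligned pairs yield a low loss (high mutual information bound):
print(info_nce(z, z + 0.01 * rng.normal(size=z.shape)))
```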


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5523 ◽  
Author(s):  
Nada Alay ◽  
Heyam H. Al-Baity

With the increasing demand for information security and security regulations all over the world, biometric recognition technology has been widely adopted in everyday life. In this regard, multimodal biometrics technology has gained interest and become popular due to its ability to overcome a number of significant limitations of unimodal biometric systems. In this paper, a new multimodal biometric human identification system is proposed, based on a deep learning algorithm that recognizes humans using the biometric modalities of iris, face, and finger vein. The system is built on convolutional neural networks (CNNs), which extract features and classify images with a softmax classifier. To develop the system, three CNN models were combined: one for iris, one for face, and one for finger vein. Each CNN model was built on the well-known pretrained model VGG-16, trained with the Adam optimization method and categorical cross-entropy as the loss function. Techniques to avoid overfitting, such as image augmentation and dropout, were applied. To fuse the CNN models, different fusion approaches were employed to explore their influence on recognition performance; specifically, feature-level and score-level fusion were applied. The performance of the proposed system was empirically evaluated through several experiments on the SDUMLA-HMT dataset, a multimodal biometrics dataset. The results demonstrated that using three biometric traits in biometric identification systems yields better results than using two or one. The results also showed that our approach comfortably outperformed other state-of-the-art methods, achieving an accuracy of 99.39% with a feature-level fusion approach and an accuracy of 100% with different methods of score level fusion.
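
The sketch below illustrates feature-level fusion of three VGG16-based modality branches in Keras, in the spirit of the system described above. The input size, dropout rate, and number of identities are assumptions of this sketch, not the authors' published architecture.

```python
# Feature-level fusion of three modality branches into one softmax head.
import tensorflow as tf

NUM_SUBJECTS = 106  # assumed number of identities in the dataset

def modality_branch(name):
    """One VGG16-based feature extractor per biometric modality."""
    inp = tf.keras.Input(shape=(224, 224, 3), name=name)
    base = tf.keras.applications.VGG16(include_top=False, pooling="avg",
                                       weights="imagenet")
    return inp, base(inp)

iris_in, iris_feat = modality_branch("iris")
face_in, face_feat = modality_branch("face")
vein_in, vein_feat = modality_branch("finger_vein")

# Concatenate per-modality feature vectors, then classify the fused vector.
fused = tf.keras.layers.Concatenate()([iris_feat, face_feat, vein_feat])
fused = tf.keras.layers.Dropout(0.5)(fused)   # one anti-overfitting measure
out = tf.keras.layers.Dense(NUM_SUBJECTS, activation="softmax")(fused)

model = tf.keras.Model([iris_in, face_in, vein_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Score-level fusion, by contrast, would run each branch to its own softmax and combine the three probability vectors (e.g., by sum or product) after the fact.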


Author(s):  
Yina Wu ◽  
Mohamed Abdel-Aty ◽  
Ou Zheng ◽  
Qing Cai ◽  
Shile Zhang

This paper presents an automated traffic safety diagnostics solution named “Automated Roadway Conflict Identification System” (ARCIS) that uses deep learning techniques to process traffic videos collected by unmanned aerial vehicle (UAV). Mask region-based convolutional neural network (Mask R-CNN) is employed to improve detection of vehicles in UAV videos. The detected vehicles are tracked by a channel and spatial reliability tracking algorithm, and vehicle trajectories are generated from the tracking results. Missing vehicles can be identified and tracked by identifying stationary vehicles and comparing the intersection over union (IoU) between the detection results and the tracking results. Rotated bounding rectangles based on the pixel-level masks generated by Mask R-CNN detection are introduced to obtain precise vehicle size and location data. Based on the vehicle trajectories, post-encroachment time (PET) is calculated for each conflict event at the pixel level. By comparing the PET values against a threshold, conflicts can be reported along with the pixels in which they occurred. Various conflict types (rear-end, head-on, sideswipe, and angle) can also be determined. A case study at a typical signalized intersection is presented; the results indicate that the proposed framework can significantly improve the accuracy of the output data. Moreover, safety diagnostics for the studied intersection are conducted by calculating the PET values for each conflict event. It is expected that the proposed detection and tracking method with UAVs can help diagnose road safety problems efficiently, so that appropriate countermeasures can then be proposed.
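
As a worked illustration of the pixel-level PET check described above: PET at a pixel is the time between the first vehicle leaving it and the second vehicle arriving, and a conflict is flagged when that gap falls below a threshold. The data layout (per-vehicle lists of frame/pixel-set pairs), frame rate, and the 1.5 s threshold below are assumptions of this sketch.

```python
# Pixel-level post-encroachment time (PET) from two vehicle trajectories.
FPS = 30                 # assumed video frame rate
PET_THRESHOLD_S = 1.5    # illustrative conflict threshold

def pet_conflicts(traj_first, traj_second):
    """traj_*: list of (frame_index, set_of_pixels) for one vehicle,
    in frame order. Returns {pixel: pet_seconds} for pixels where the
    second vehicle arrives within the PET threshold of the first leaving."""
    last_seen = {}  # pixel -> last frame the first vehicle occupied it
    for frame, pixels in traj_first:
        for p in pixels:
            last_seen[p] = frame
    conflicts = {}
    for frame, pixels in traj_second:
        for p in pixels:
            if p in last_seen and frame > last_seen[p]:
                pet = (frame - last_seen[p]) / FPS
                if pet < PET_THRESHOLD_S:
                    conflicts[p] = min(pet, conflicts.get(p, pet))
    return conflicts

# Toy example: the second vehicle reaches pixel (10, 10) one second
# (30 frames) after the first vehicle last occupied it.
first = [(0, {(10, 10), (10, 11)}), (1, {(10, 12)})]
second = [(30, {(10, 10)})]
print(pet_conflicts(first, second))   # {(10, 10): 1.0}
```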


Author(s):  
Flora Amato ◽  
Stefano Marrone ◽  
Vincenzo Moscato ◽  
Gabriele Piantadosi ◽  
Antonio Picariello ◽  
...  

Data collection and analysis are becoming more and more important in a variety of application domains as novel technologies advance. At the same time, we are experiencing a growing need for human-machine interaction with expert systems, pushing research toward new knowledge representation models and interaction paradigms. In particular, in recent years eHealth - the set of health-care practices supported by electronic processing and remote communication - has called for the availability of smart environments and big computational resources. The aim of this paper is to introduce the HOLMeS (Health On-Line Medical Suggestions) framework. The system changes the eHealth paradigm: a trained machine learning algorithm, deployed on a cluster-computing environment, provides medical suggestions via both chat-bot and web-app modules. The chat-bot, based on deep learning approaches, is able to overcome the limitation of biased interaction between users and software, exhibiting human-like behavior. Results demonstrate the effectiveness of the machine learning algorithms, showing 74.65% Area Under the ROC Curve (AUC) when first-level features are used to assess the occurrence of different prevention pathways. When disease-specific features are added, HOLMeS shows 86.78% AUC, achieving a more specific prevention-pathway evaluation.
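
For readers unfamiliar with the AUC figures quoted above, here is a minimal sketch of the kind of evaluation involved, using scikit-learn. The classifier, synthetic features, and labels are placeholders; the paper's cluster-deployed pipeline and models are not reproduced here.

```python
# Computing Area Under the ROC Curve (AUC) for a binary classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "patient features -> prevention pathway" data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```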


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Wen Pan ◽  
Xujia Li ◽  
Weijia Wang ◽  
Linjing Zhou ◽  
Jiali Wu ◽  
...  

Abstract Background Development of a deep learning method to identify the extent (scope) of Barrett's esophagus (BE) in endoscopic images. Methods 443 endoscopic images from 187 patients with BE were included in this study. The gastroesophageal junction (GEJ) and squamous-columnar junction (SCJ) of BE were manually annotated in endoscopic images by experts. Fully convolutional networks (FCNs) were developed to automatically identify the BE scope in endoscopic images. The networks were trained and evaluated on two separate image sets. Segmentation performance was evaluated by intersection over union (IoU). Results The deep learning method proved satisfactory for the automated identification of BE in endoscopic images. The IoU values were 0.56 (GEJ) and 0.82 (SCJ). Conclusions The deep learning algorithm is promising, showing good concordance with manual human assessment in segmenting the BE scope in endoscopic images. This automated recognition method can help clinicians locate and recognize the scope of BE in endoscopic examinations.
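
The IoU metric reported above compares a predicted segmentation mask with the expert annotation. A minimal sketch on toy boolean masks (not endoscopic data) follows.

```python
# Intersection over union (IoU) between two boolean segmentation masks.
import numpy as np

def iou(pred, truth):
    """pred, truth: boolean masks of the same shape."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

# Toy 40x40 squares offset by 10 pixels: overlap 900, union 2300.
pred = np.zeros((100, 100), bool);  pred[20:60, 20:60] = True
truth = np.zeros((100, 100), bool); truth[30:70, 30:70] = True
print(round(iou(pred, truth), 3))   # -> 0.391
```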


Author(s):  
Ali Khalighifar ◽  
Daniel Jiménez-García ◽  
Lindsay P Campbell ◽  
Koffi Mensah Ahadji-Dabla ◽  
Fred Aboagye-Antwi ◽  
...  

Abstract Mosquito-borne diseases account for substantial human morbidity and mortality worldwide, caused by parasites (e.g., malaria) or viruses (e.g., dengue, Zika) transmitted through the bites of infected female mosquitoes. Globally, billions of people are at risk of infection, imposing significant economic and public health burdens. As such, efficient methods to monitor mosquito populations and prevent the spread of these diseases are at a premium. One proposed technique is to apply acoustic monitoring to the challenge of identifying the wingbeats of individual mosquitoes. Although researchers have successfully used wingbeats to survey mosquito populations, implementation of these techniques in the areas most affected by mosquito-borne diseases remains challenging. Here, methods that use easily accessible equipment and encourage community-scientist participation are more likely to provide sufficient monitoring. We present a practical, community-science-based method of monitoring mosquito populations using smartphones. We applied deep-learning algorithms (TensorFlow Inception v3) to spectrogram images generated from smartphone recordings of six mosquito species to develop a multiclass mosquito identification system, and to flag potential invasive vectors not present in our sound reference library. Although TensorFlow did not flag potential invasive species with high accuracy, it identified species present in the reference library at an 85% correct identification rate, markedly higher than in similar studies employing expensive recording devices. Given that we used smartphone recordings with limited sample sizes, these results are promising. With further optimization, we propose this novel technique as a way to accurately and efficiently monitor mosquito populations in areas where doing so is most critical.
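
The key preprocessing step described above turns an audio recording into a spectrogram image that an Inception v3-style classifier can consume. Here is a hedged sketch of that step; the file name, spectrogram parameters, and colormap are illustrative assumptions, not the authors' exact settings.

```python
# Converting a smartphone wingbeat recording into a spectrogram image.
import matplotlib.pyplot as plt
from scipy.io import wavfile

rate, audio = wavfile.read("wingbeat_recording.wav")   # hypothetical file
if audio.ndim > 1:
    audio = audio[:, 0]            # keep a single channel

plt.figure(figsize=(3, 3))
plt.specgram(audio, Fs=rate, NFFT=512, noverlap=256, cmap="viridis")
plt.axis("off")                    # the CNN needs pixels, not axes
plt.savefig("wingbeat_spectrogram.png", bbox_inches="tight", pad_inches=0)
# The saved PNG can then be classified with an Inception v3-based model,
# e.g., via TensorFlow's standard image-retraining workflow.
```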


Author(s):  
Shulin Xiang ◽  
Tao Chen ◽  
Zhichao Fan ◽  
Xuedong Chen ◽  
Zhigang Wu ◽  
...  

Abstract With the development of the Materials Genome Initiative (MGI) and data mining technology, machine learning (ML) has emerged as an important tool in materials science research. For the heat-resistant alloys used in furnace tubes, rapid prediction of high-temperature properties is critical but has so far been difficult. In this work, an ML method based on deep learning is developed to establish a direct correlation between microstructure inputs and stress rupture properties of Fe-Cr-Ni based heat-resistant alloys. Two simple convolutional neural networks (CNNs) and a more complex network with the VGG16 architecture are implemented and evaluated. The simple CNNs are trained from scratch, while the VGG16 model is pre-trained. Given the relatively few training samples in the dataset, the data augmentation configuration and the improved architecture are effective in mitigating overfitting in the simple CNN models. The results also show that, in the case of transfer learning, features extracted from other datasets can be applied directly to this new visual task. Both the simple CNN and VGG16 models are demonstrated to reach high prediction accuracies (more than 90%) for high-temperature properties across a wide range of microstructures. In addition, the good prediction performance achieved on this small dataset reveals that deep learning approaches can be used to construct powerful vision models in engineering practice, where very limited data is the common situation.
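
The sketch below illustrates the two small-dataset ingredients highlighted above: on-the-fly data augmentation and a pre-trained VGG16 backbone. The image size, augmentation ranges, and the single-output regression head (standing in for a predicted stress rupture property) are assumptions of this sketch.

```python
# Transfer learning with augmentation for a small microstructure dataset.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

base = tf.keras.applications.VGG16(include_top=False, pooling="avg",
                                   weights="imagenet")
base.trainable = False             # reuse ImageNet features as-is

inp = tf.keras.Input(shape=(224, 224, 3))
x = augment(inp)                   # augmentation is active only in training
x = tf.keras.applications.vgg16.preprocess_input(x)
x = base(x)
out = tf.keras.layers.Dense(1)(x)  # e.g., a predicted rupture property
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
```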

