Deep learning algorithms for rotating machinery intelligent diagnosis: An open source benchmark study

2020 ◽  
Vol 107 ◽  
pp. 224-255 ◽  
Author(s):  
Zhibin Zhao ◽  
Tianfu Li ◽  
Jingyao Wu ◽  
Chuang Sun ◽  
Shibin Wang ◽  
...  
2022 ◽  
Author(s):  
Nils Koerber

In recent years, the amount of data generated by imaging techniques has grown rapidly, along with increasing computational power and the development of deep learning algorithms. To address the need for powerful automated image analysis tools across a broad range of applications in the biomedical sciences, we present the Microscopic Image Analyzer (MIA). MIA combines a graphical user interface that obviates the need for programming skills with state-of-the-art deep learning algorithms for segmentation, object detection, and classification. It runs as a standalone, platform-independent application and is compatible with commonly used open-source software packages. The software provides a unified interface for image labeling, model training, and inference. Furthermore, the software was evaluated in a public competition and performed among the top three for all tested datasets. The source code is available at https://github.com/MIAnalyzer/MIA.
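MIA itself is operated through its graphical interface, so no code is required to use it. Purely as an illustration of the kind of segmentation workflow such a tool automates behind the scenes, the following minimal Keras sketch builds and compiles a tiny encoder-decoder network; the architecture, input shape, and class count are assumptions for illustration, not MIA's actual implementation.

```python
# Hypothetical sketch: a tiny encoder-decoder segmentation network of the kind
# a GUI tool such as MIA wraps. Architecture and shapes are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_segmenter(input_shape=(256, 256, 1), n_classes=2):
    inputs = layers.Input(shape=input_shape)
    # Encoder: two convolutional blocks with downsampling
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    # Decoder: upsample back to the input resolution
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return Model(inputs, outputs)

model = tiny_segmenter()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(images, masks, epochs=10)  # labels would come from GUI annotations
```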


Author(s):  
Migran N. Gevorkyan ◽  
Anastasia V. Demidova ◽  
Dmitry S. Kulyabov

The history of using machine learning algorithms to analyze statistical models is quite long, and the development of computer technology has given these algorithms new life. Deep learning is currently the mainstream and most popular area of machine learning. However, the authors believe that many researchers try to apply deep learning methods beyond their domain of applicability, encouraged by the widespread availability of software systems that implement deep learning algorithms and the apparent simplicity of such research. This motivated the authors to compare deep learning algorithms with classical machine learning algorithms. The Large Hadron Collider experiment was chosen for this task because the authors are familiar with this scientific field and the experimental data are openly available. The article compares various machine learning algorithms on the problem of recognizing a particular decay reaction at the Large Hadron Collider, using open-source implementations of the algorithms and comparing them on the basis of calculated metrics. The research concludes that, with respect to the selected metrics, all the considered machine learning methods are quite comparable with one another, while different methods have different areas of applicability.
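As a concrete illustration of the kind of comparison described above, the sketch below fits several classical scikit-learn models and a small multilayer perceptron (standing in for a deep learning model) on synthetic binary signal/background data and scores them with ROC AUC. The dataset, model list, and metric are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch: classical ML models vs. a small neural network on a
# binary classification task, scored with ROC AUC. Synthetic data stands
# in for the open LHC dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, n_features=30,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "mlp_neural_net": MLPClassifier(hidden_layer_sizes=(64, 64),
                                    max_iter=500, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: ROC AUC = {auc:.3f}")
```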


2017 ◽  
Author(s):  
Oisin Mac Aodha ◽  
Rory Gibb ◽  
Kate E. Barlow ◽  
Ella Browning ◽  
Michael Firman ◽  
...  

Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends, there is a critical need for accurate, reliable, and open-source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio, which is particularly problematic in noisy recordings.

We developed a convolutional neural network (CNN) based open-source pipeline, BatDetect, for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms (CNN_FULL and CNN_FAST) were trained on full-spectrum ultrasonic audio collected along road transects across Romania and Bulgaria by citizen scientists as part of the iBats programme and labelled by users of www.batdetective.org. We compared the performance of our system with other algorithms and commercial systems on expert-verified test datasets recorded with different sensors and in different countries. As an example application, we ran our detection pipeline on iBats monitoring data collected over five years from Jersey (UK) and compared the results to a widely used commercial system.

We show that both the CNN_FULL and CNN_FAST deep learning algorithms achieve higher detection performance (average precision and recall) for search-phase echolocation calls on our test sets than the other existing algorithms and commercial systems tested. Precision scores for commercial systems were reasonably good across all test datasets (>0.7), but this came at the expense of recall. In particular, our deep learning approaches were better at detecting calls in road-transect data, which contained noisier recordings. CNN_FULL and CNN_FAST compared favourably with each other, although CNN_FAST performed slightly worse, reflecting a trade-off between speed and accuracy. Our example monitoring application demonstrated that the open-source, fully automatic BatDetect CNN_FAST pipeline does as well as or better than a commercial system with manual verification previously used to analyse the monitoring data.

It is therefore possible to detect bat search-phase echolocation calls both accurately and automatically, particularly in noisy audio recordings. Our detection pipeline enables the automatic detection and monitoring of bat populations and further facilitates their use as indicator species at large scale, particularly when combined with automatic species identification. We release our system and datasets to encourage future progress and transparency.
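The BatDetect system itself is released by the authors; the sketch below is not that code, but a minimal illustration of the underlying idea: a small CNN that scores fixed-size spectrogram windows for the presence of a search-phase call and is then evaluated with precision and recall. The window size, architecture, decision threshold, and synthetic data are assumptions for illustration only.

```python
# Hypothetical sketch (not the published BatDetect code): a small CNN scoring
# spectrogram windows for call presence, evaluated with precision and recall.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.metrics import precision_score, recall_score

def call_detector(input_shape=(64, 32, 1)):
    inputs = layers.Input(shape=input_shape)  # spectrogram patch (freq x time)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # prob. a call is present
    return Model(inputs, outputs)

model = call_detector()
model.compile(optimizer="adam", loss="binary_crossentropy")

# Synthetic stand-in data: in practice the windows would be cut from
# full-spectrum recordings and labels would come from expert/citizen annotation.
X = np.random.rand(200, 64, 32, 1).astype("float32")
y = np.random.randint(0, 2, size=200)
model.fit(X, y, epochs=1, verbose=0)

probs = model.predict(X, verbose=0).ravel()
preds = (probs > 0.5).astype(int)  # assumed detection threshold
print("precision:", precision_score(y, preds), "recall:", recall_score(y, preds))
```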


2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

Advances in deep learning algorithms, exponential growth in computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence (AI) in health care. No radiology conference is complete without substantial content dedicated to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments engage with the technology, demystify key concepts, and spark interest in the field. We have broken the journey down into seven steps: problem, team, data, kit, neural network, validation, and governance.


Author(s):  
Yuejun Liu ◽  
Yifei Xu ◽  
Xiangzheng Meng ◽  
Xuguang Wang ◽  
Tianxu Bai

Background: Medical imaging plays an important role in the diagnosis of thyroid diseases. In machine learning, multi-dimensional deep learning algorithms are widely used in image classification and recognition and have achieved great success. Objective: A method based on multi-dimensional deep learning is employed for the auxiliary diagnosis of thyroid diseases from SPECT images, and the performances of different deep learning models are evaluated and compared. Methods: Thyroid SPECT images of three classes are collected: hyperthyroidism, normal, and hypothyroidism. In pre-processing, the thyroid region of interest is segmented and the dataset is augmented to increase the number of samples. Four deep learning models, a standard CNN, Inception, VGG16, and an RNN, are used to evaluate the approach. Results: The deep learning based methods show good classification performance, with accuracy of 92.9%-96.2% and AUC of 97.8%-99.6%. The VGG16 model performs best, with an accuracy of 96.2% and an AUC of 99.6%; in particular, the VGG16 model with a changing learning rate works best. Conclusion: The standard CNN, Inception, VGG16, and RNN models are all effective for the classification of thyroid diseases from SPECT images, and the accuracy of the deep learning based assisted diagnosis is higher than that of other methods reported in the literature.
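As an illustration of the best-performing configuration reported above (VGG16 with a changing learning rate), the following Keras sketch sets up transfer learning from VGG16 with an exponentially decaying learning rate for three-class thyroid SPECT classification. The image size, classification head, and schedule values are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch, assuming a VGG16 transfer-learning setup with a decaying
# learning rate; not the authors' exact model or data pipeline.
import tensorflow as tf
from tensorflow.keras import layers, Model

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # train only the new classification head at first

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(3, activation="softmax")(x)  # hyperthyroid / normal / hypothyroid
model = Model(base.input, outputs)

# "Changing learning rate": here a simple exponential decay schedule.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=schedule),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=..., epochs=30)
```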


2021 ◽  
Vol 35 ◽  
pp. 100825
Author(s):  
Mahdi Panahi ◽  
Khabat Khosravi ◽  
Sajjad Ahmad ◽  
Somayeh Panahi ◽  
Salim Heddam ◽  
...  
