UAVs in rail damage image diagnostics supported by deep-learning networks

2021
Vol 11 (1)
pp. 339-348
Author(s):  
Piotr Bojarczak ◽  
Piotr Lesiak

Abstract The article uses images from Unmanned Aerial Vehicles (UAVs) for rail diagnostics. The main advantage of such a solution compared to traditional surveys performed with measuring vehicles is that train traffic does not have to be reduced. In this study, the authors limited themselves to the diagnosis of hazardous split defects in rails. An algorithm has been proposed to detect them with an efficiency rate of about 81% for defects not smaller than 6.9% of the rail head width. It uses the FCN-8 deep-learning network, implemented in the TensorFlow environment, to extract the rail head by image segmentation. Using this type of network for segmentation increases the resistance of the algorithm to changes in the recorded rail image brightness, which is of fundamental importance under the variable conditions in which UAVs record images. The detection of these defects in the rail head is performed by an algorithm written in Python using the OpenCV library. To locate the defect, it uses the contour of the extracted rail head together with a rectangle circumscribed around it. The use of UAVs together with artificial intelligence to detect split defects is an important element of novelty in this work.
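The defect-localization step described above, the contour of the extracted rail head plus a rectangle circumscribed around it, can be illustrated without the full OpenCV pipeline. The NumPy-only sketch below (function names and the toy mask are illustrative, not the authors' code) shows the bounding-rectangle idea and the 6.9%-of-head-width detection threshold reported in the abstract:

```python
import numpy as np

def head_bounding_rect(mask):
    """Return (x, y, w, h) of the rectangle circumscribed around a binary
    rail-head mask, as an OpenCV pipeline would obtain via cv2.boundingRect
    on the head contour."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    x, y = xs.min(), ys.min()
    return (int(x), int(y), int(xs.max() - x + 1), int(ys.max() - y + 1))

def is_candidate_defect(defect_w, head_w, min_frac=0.069):
    """The paper reports reliable detection for defects not smaller than
    about 6.9% of the rail head width."""
    return defect_w / head_w >= min_frac

# toy segmentation mask: the head occupies rows 2..5, cols 3..9
mask = np.zeros((8, 12), dtype=np.uint8)
mask[2:6, 3:10] = 1
print(head_bounding_rect(mask))    # (3, 2, 7, 4)
print(is_candidate_defect(1, 10))  # True: 10% of the head width
```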

2019
Vol 59 (1)
pp. 426
Author(s):  
James Lowell ◽  
Jacob Smith

The interpretation of key horizons on seismic data is an essential but time-consuming part of the subsurface workflow. This is compounded when surfaces need to be re-interpreted on variations of the same data, such as angle stacks, 4D data, or reprocessed data. Deep learning networks, which are a subset of machine learning, have the potential to automate this reinterpretation process and significantly increase the efficiency of the subsurface workflow. This study investigates whether a deep learning network can learn from a single horizon interpretation in order to identify that event in a different version of the same data. The results were largely successful: the target horizon was correctly identified in an alternative offset stack and was correctly repositioned in areas where there was misalignment between the training data and the test data.


Author(s):  
Vijayarajan Rajangam ◽  
Sangeetha N. ◽  
Karthik R. ◽  
Kethepalli Mallikarjuna

Multimodal imaging systems assist medical practitioners in cost-effective diagnostic methods in clinical pathologies. Multimodal imaging of the same organ or region of interest reveals complementing anatomical and functional details. Multimodal image fusion algorithms integrate complementary image details into a composite image, which reduces a clinician's time for effective diagnosis. Deep learning networks have a role in feature extraction for the fusion of multimodal images. This chapter analyzes the performance of a pre-trained VGG19 deep learning network that extracts features from the base and detail layers of the source images in order to construct a weight map for fusing the source image details. Maximum and averaging fusion rules are adopted for base layer fusion. The performance of the fusion algorithm for multimodal medical image fusion is analyzed by peak signal-to-noise ratio, structural similarity index, fusion factor, and figure of merit. Performance analysis of the fusion algorithms is also carried out for source images in the presence of impulse and Gaussian noise.
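The base/detail decomposition and the two base-layer fusion rules can be illustrated compactly. In this sketch the base layer comes from a plain box blur, and the chapter's VGG19-derived weight map is replaced by a max-absolute detail selection, so it is a stand-in for the method, not a reimplementation:

```python
import numpy as np

def box_blur(img, k=3):
    """Separable box filter used here as a stand-in for base-layer
    extraction: each source image is split into a smooth base layer
    and a residual detail layer."""
    pad = k // 2
    out = np.pad(img, pad, mode="edge").astype(float)
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, out)
    return out

def fuse(img_a, img_b, base_rule="avg"):
    base_a, base_b = box_blur(img_a), box_blur(img_b)
    detail_a, detail_b = img_a - base_a, img_b - base_b
    # base-layer fusion: averaging or maximum rule, as in the chapter
    if base_rule == "avg":
        fused_base = (base_a + base_b) / 2
    else:
        fused_base = np.maximum(base_a, base_b)
    # detail fusion: the chapter weights details with a VGG19-derived
    # weight map; plain max-absolute selection is used here instead
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b),
                            detail_a, detail_b)
    return fused_base + fused_detail
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that base plus detail reconstructs the source.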


2020
Vol 10 (18)
pp. 6502
Author(s):  
Shinjin Kang ◽  
Jong-in Choi

On the game screen, the UI provides key information for game play. A vision deep learning network exploits pure pixel information on the screen. Beyond this, if we separately extract the information provided by the UI and use it as an additional input, we can enhance the learning efficiency of deep learning networks. To this end, by effectively segmenting UI components such as buttons, image icons, and gauge bars on the game screen, we should be able to analyze only the relevant images separately. In this paper, we propose a methodology that segments UI components in a game by using synthetic game images created in a game engine. We developed a tool that approximately detects the UI areas of an image on the game screen, and used it to generate a large amount of synthetic labeling data. By training a Pix2Pix network on these data, we applied UI segmentation. The network trained in this way can segment the UI areas of the target game regardless of the position of the corresponding UI components. Our methodology can help analyze the game screen without applying data augmentation to it. It can also help vision researchers who need to extract semantic information from game image data.
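The synthetic-label generation step can be sketched as follows. The class set and component geometry are illustrative assumptions; the paper's tool works on real game-engine output rather than blank canvases:

```python
import numpy as np

# class ids for the segmentation labels, mirroring the paper's UI
# component types (names are illustrative, not the authors' exact set)
BACKGROUND, BUTTON, ICON, GAUGE = 0, 1, 2, 3

def paint(label_map, cls, x, y, w, h):
    """Stamp an axis-aligned UI component onto the label image, the way
    a labeling tool would turn detected UI areas into synthetic
    training targets for Pix2Pix."""
    label_map[y:y + h, x:x + w] = cls
    return label_map

def random_ui_label(h=64, w=64, rng=None):
    """Generate one synthetic label image with a button at a random
    position and a gauge bar near the bottom edge."""
    rng = rng or np.random.default_rng(0)
    lab = np.zeros((h, w), dtype=np.uint8)
    paint(lab, BUTTON, *rng.integers(0, 16, 2), 20, 8)  # a button
    paint(lab, GAUGE, 4, h - 10, w - 8, 4)              # a gauge bar
    return lab

lab = random_ui_label()
print(np.unique(lab))  # background, button, and gauge classes present
```

Pairing many such label maps with the corresponding rendered frames yields the (input, target) pairs an image-to-image network like Pix2Pix trains on.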


Author(s):  
Ashwan A. Abdulmunem ◽  
Zinah Abdulridha Abutiheen ◽  
Hiba J. Aleqabie

Coronavirus disease (COVID-19) has had an enormous impact in the last few months, causing thousands of deaths around the world and prompting a rapid research effort to deal with this new virus. In computer science, many technical studies have been conducted to tackle it using image processing algorithms. In this work, we introduce a method based on deep learning networks to classify COVID-19 from X-ray images. Our results are encouraging enough to rely on for distinguishing infected people from healthy ones. We conduct our experiments on a recent dataset, the Kaggle dataset of COVID-19 X-ray images, using the ResNet50 deep learning network with 5- and 10-fold cross-validation. The experimental results show that 5 folds give more effective results than 10 folds, with an accuracy rate of 97.28%.
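The 5- and 10-fold cross-validation protocol can be sketched with index splitting alone. The actual study trains ResNet50 on each split; the split logic below is a generic illustration, not the authors' code:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle the sample indices and split them into k folds, the
    scheme behind k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cv_splits(n, k):
    """Yield (train, test) index pairs; each fold is the test set once,
    so every image is evaluated exactly once per protocol."""
    folds = kfold_indices(n, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# with 5 folds, each split trains on 80% of the data and tests on 20%;
# with 10 folds the ratio is 90/10 but each test set is smaller
for train, test in cv_splits(100, 5):
    assert len(train) == 80 and len(test) == 20
```

One plausible reading of the 5-fold advantage reported here is that the larger test partitions give more stable per-fold estimates, though the abstract does not analyze the cause.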


Author(s):  
Vinaitheerthan Renganathan

Abstract With the increase in the volume of data and the presence of structured and unstructured data in the biomedical field, there is a need for models that can handle complex and non-linear relations in the data and can also predict and classify outcomes with higher accuracy. Deep learning models are one such class of models: they can handle complex and nonlinear data, and they have been used increasingly in the biomedical field in recent years. Deep learning methodology evolved from artificial neural networks, which process the input data through multiple hidden layers with a higher level of abstraction. Deep learning networks are used in various fields such as image processing, speech recognition, fraud detection, classification, and prediction. The objective of this paper is to provide an overview of deep learning models and their application in the biomedical domain using the R statistical software. Deep learning concepts are illustrated with the R statistical software package. X-ray images from NIH datasets were used to examine the prediction accuracy of the deep learning models, which classified the outcomes under study with 91% accuracy. The paper provided an overview of deep learning models, their types, and their application in the biomedical domain, and showed the effect of a deep learning network in classifying images as normal or diseased with 91% accuracy with the help of the R statistical package.
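The phrase "multiple hidden layers with a higher level of abstraction" corresponds to repeated affine-plus-nonlinearity transformations. A minimal forward pass for a binary normal/disease classifier, written in NumPy with random illustrative weights rather than a trained model, looks like this:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass the input through successive hidden layers, each computing
    an affine map followed by a non-linearity, then squash the final
    logit with a sigmoid to get a disease probability per input."""
    for W, b in layers[:-1]:
        x = relu(x @ W + b)
    W, b = layers[-1]
    z = x @ W + b
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid output in (0, 1)

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(4)),   # hidden layer
          (rng.normal(size=(4, 1)), np.zeros(1))]   # output layer
p = forward(rng.normal(size=(2, 8)), layers)
print(p.shape)  # (2, 1): one probability per input sample
```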


2021
Vol 11 (13)
pp. 5880
Author(s):  
Paloma Tirado-Martin ◽  
Raul Sanchez-Reillo

Nowadays, deep learning tools have been widely applied in biometrics, and electrocardiogram (ECG) biometrics is no exception. However, algorithm performance relies heavily on a representative dataset for training. ECGs suffer constant temporal variations, so it is all the more important to collect databases that can represent these conditions. Nonetheless, restrictions on database publication obstruct further research on this topic. This work was developed with the help of a database that represents potential scenarios in biometric recognition, as the data were acquired on different days and during different physical activities and positions. The classification was implemented with a deep learning network, BioECG, avoiding complex and time-consuming signal transformations. An exhaustive tuning was completed, including variations in enrollment length, improving ECG verification under more complex and realistic biometric conditions. Finally, this work studied one-day and two-day enrollments and their effects. Two-day enrollments resulted in large general improvements, even when verification was performed with more unstable signals. The EER improved by 63% when including a change of position, by up to almost 99% when visits were on a different day, and by up to 91% if the user experienced a heartbeat increase after exercise.
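Since the reported improvements are stated in EER, it may help to recall how that metric is computed: it is the error rate at the decision threshold where false accepts and false rejects balance. A simple threshold-sweep sketch, not the BioECG evaluation code, follows:

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate: sweep the decision threshold over all observed
    scores and return the smallest point where the false-reject rate
    (genuine pairs rejected) meets the false-accept rate (impostor
    pairs accepted)."""
    best = 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < t)    # genuine scores below threshold
        far = np.mean(impostor >= t)  # impostor scores above threshold
        best = min(best, max(frr, far))
    return float(best)

gen = np.array([0.9, 0.8, 0.85, 0.7, 0.95])  # same-person match scores
imp = np.array([0.1, 0.3, 0.2, 0.4, 0.6])    # different-person scores
print(eer(gen, imp))  # 0.0: the two score sets are perfectly separated
```

A relative improvement such as the 63% reported above would compare two EER values, e.g. (eer_before - eer_after) / eer_before.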


Diagnostics
2021
Vol 11 (7)
pp. 1156
Author(s):  
Kang Hee Lee ◽  
Sang Tae Choi ◽  
Guen Young Lee ◽  
You Jung Ha ◽  
Sang-Il Choi

Axial spondyloarthritis (axSpA) is a chronic inflammatory disease of the sacroiliac joints. In this study, we develop a method for detecting bone marrow edema from magnetic resonance (MR) images of the sacroiliac joints using a deep-learning network. A total of 815 MR images of the sacroiliac joints were obtained from 60 patients diagnosed with axSpA and 19 healthy subjects. Gadolinium-enhanced fat-suppressed T1-weighted oblique coronal images were used for deep learning. Active sacroiliitis was defined as bone marrow edema, and the following processes were performed: setting the region of interest (ROI) and normalizing it to a size suitable for input to a deep-learning network, determining bone marrow edema using a convolutional-neural-network-based deep-learning network for individual MR images, and determining sacroiliitis in subject examinations based on the classification results of the individual MR images. About 70% of the patients and normal subjects were randomly selected for the training dataset, and the remaining 30% formed the test dataset. This process was repeated five times to calculate the average classification rate over the five folds. The gradient-weighted class activation mapping method was used to validate the classification results. In the performance analysis of the ResNet18-based classification network for individual MR images, use of the ROI showed excellent detection of bone marrow edema, with 93.55 ± 2.19% accuracy, 92.87 ± 1.27% recall, and 94.69 ± 3.03% precision. The overall performance was further improved by using a median filter to reflect context information. Finally, active sacroiliitis was diagnosed in individual subjects with 96.06 ± 2.83% accuracy, 100% recall, and 94.84 ± 3.73% precision.
This is a pilot study to diagnose bone marrow edema by deep learning based on MR images, and the results suggest that MR analysis using deep learning can be a useful complementary means for clinicians to diagnose bone marrow edema.
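The median-filter step that "reflects the context information", smoothing per-slice decisions before the subject-level call, can be sketched as follows. The subject-level "any positive slice" rule here is an assumption for illustration, not necessarily the paper's exact criterion:

```python
import numpy as np

def median_filter_1d(labels, k=3):
    """Smooth per-slice edema decisions with a sliding median so that
    isolated misclassifications are overruled by their neighbours."""
    pad = k // 2
    padded = np.pad(labels, pad, mode="edge")
    return np.array([int(np.median(padded[i:i + k]))
                     for i in range(len(labels))])

def subject_decision(slice_labels, min_positive=1):
    """Subject-level sacroiliitis call from the individual MR slice
    results (the 'any positive slice' rule is an assumption)."""
    return int(np.sum(slice_labels) >= min_positive)

# a lone positive at index 2 is removed; the positive run survives
slices = np.array([0, 0, 1, 0, 0, 1, 1, 1, 0])
smoothed = median_filter_1d(slices)
print(smoothed.tolist())       # [0, 0, 0, 0, 0, 1, 1, 1, 0]
print(subject_decision(smoothed))  # 1: subject flagged as active
```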

