Machine Health Surveillance System by Using Deep Learning Sparse Autoencoder

Author(s):  
Faizan Ullah ◽  
Abdu Salam ◽  
Muhammad Abrar ◽  
Masood Ahmad ◽  
Fasee Ullah ◽  
...  

Abstract Deep learning is a rapidly growing research area with state-of-the-art achievements in various applications, including but not limited to speech recognition, object recognition, machine translation, and image segmentation. In modern industrial manufacturing systems, the Machine Health Surveillance System (MHSS) is gaining popularity because of the widespread availability of low-cost sensors and internet connectivity. Deep learning architectures provide useful tools to analyze and process these vast amounts of machinery data. In this paper, we review the latest deep learning techniques and their variants used for MHSS. We use the Gearbox Fault Diagnosis dataset, which contains sets of vibration attributes recorded by SpectraQuest’s Gearbox Fault Diagnostics Simulator. In addition, we use variants of autoencoders for feature extraction to achieve higher accuracy in machine health surveillance. The results show that a bagging ensemble classifier based on voting techniques achieved 99% accuracy.
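The pipeline described above (autoencoder-style feature extraction followed by a voting-based bagging ensemble) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the hidden layer of a reconstruction network stands in for the autoencoder encoder, and all hyperparameters are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the vibration features in the gearbox dataset.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train a network to reconstruct its own input; the hidden layer then acts
# as a learned, compressed feature representation, akin to an autoencoder.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                  max_iter=2000, random_state=0).fit(X_tr, X_tr)

def encode(X):
    # Hidden-layer (encoder) activations, matching the ReLU activation above.
    return np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

# Bagging ensemble: each tree votes on the encoded features, majority wins.
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                        random_state=0).fit(encode(X_tr), y_tr)
acc = clf.score(encode(X_te), y_te)
print(f"ensemble accuracy: {acc:.2f}")
```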

Cataract is a degenerative condition whose prevalence is projected to rise globally. Even though there are various proposals for its diagnosis, several problems remain to be solved. This paper aims to identify the current state of recent investigations on cataract diagnosis, using a framework to conduct the literature review with the intention of answering the following research questions: RQ1) Which are the existing methods for cataract diagnosis? RQ2) Which features are considered for the diagnosis of cataracts? RQ3) Which is the existing classification when diagnosing cataracts? RQ4) Which obstacles arise when diagnosing cataracts? Additionally, a cross-analysis of the results was made. The results showed that new research is required in: (1) the classification of “congenital cataract” and (2) portable solutions, which are necessary to make cataract diagnosis easy and low cost.


2020 ◽  
Vol 137 ◽  
pp. 27-36 ◽  
Author(s):  
Zuria Bauer ◽  
Alejandro Dominguez ◽  
Edmanuel Cruz ◽  
Francisco Gomez-Donoso ◽  
Sergio Orts-Escolano ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2984
Author(s):  
Yue Mu ◽  
Tai-Shen Chen ◽  
Seishi Ninomiya ◽  
Wei Guo

Automatic detection of intact tomatoes on plants is highly desirable for low-cost and optimal management in tomato farming. Mature tomato detection has been widely studied, but immature tomato detection, which is more important for long-term yield prediction, is difficult to perform with traditional image analysis, especially when fruits are occluded by leaves. Therefore, a tomato detector that generalizes well to real cultivation scenes and is robust to issues such as fruit occlusion and variable lighting conditions is highly desired. In this study, we build a tomato detection model that automatically detects intact green tomatoes regardless of occlusion or fruit growth stage using deep learning approaches. The model uses a faster region-based convolutional neural network (Faster R-CNN) with a ResNet-101 backbone, transfer-learned from the Common Objects in Context (COCO) dataset. Detection on the test dataset achieved a high average precision of 87.83% (intersection over union ≥ 0.5) and a high tomato-counting accuracy (R2 = 0.87). In addition, all detected boxes were merged into one image to compile a tomato location map and estimate fruit sizes along one row in the greenhouse. Through tomato detection, counting, location, and size estimation, this method shows great potential for ripeness and yield prediction.
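The evaluation criterion used above (a detection counts when intersection over union with a ground-truth box is ≥ 0.5) can be made concrete with a short sketch; the box coordinates below are purely illustrative.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt = (10, 10, 50, 50)        # hypothetical ground-truth tomato box
pred = (15, 15, 55, 55)      # hypothetical predicted box
score = iou(gt, pred)
print(score >= 0.5)          # True: this prediction would count as a hit
```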


2020 ◽  
Vol 12 (22) ◽  
pp. 3836
Author(s):  
Carlos García Rodríguez ◽  
Jordi Vitrià ◽  
Oscar Mora

In recent years, different deep learning techniques have been applied to segment aerial and satellite images. Nevertheless, state-of-the-art techniques for land cover segmentation do not provide results accurate enough for real applications. This is a problem faced by institutions and companies that want to replace time-consuming and exhausting human work with AI technology. In this work, we propose a method that combines deep learning with a human-in-the-loop strategy to achieve expert-level results at a low cost. We use a neural network to segment the images. In parallel, another network is used to measure the uncertainty of the predicted pixels. Finally, we combine these neural networks with a human-in-the-loop approach to produce predictions as correct as those developed by human photointerpreters. Applying this methodology shows that we can increase the accuracy of land cover segmentation tasks while decreasing human intervention.
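The routing idea behind such a human-in-the-loop setup can be sketched briefly. The paper uses a dedicated network to measure uncertainty; here, as a stand-in, predictive entropy of the softmax output flags pixels for human review, and all shapes and thresholds are illustrative.

```python
import numpy as np

def entropy_map(probs, eps=1e-12):
    """Per-pixel predictive entropy; probs has shape (H, W, C)."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 3))                 # toy segmentation logits
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

ent = entropy_map(probs)
needs_review = ent > 0.8 * np.log(3)   # near-uniform pixels go to a human
print(f"{needs_review.mean():.0%} of pixels routed to review")
```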


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6494
Author(s):  
Jeremiah Abimbola ◽  
Daniel Kostrzewa ◽  
Pawel Kasprowski

This paper presents a thorough review of methods used in research articles published in the field of time signature estimation and detection from 2003 to the present. The purpose of this review is to investigate the effectiveness of these methods and how they perform on different types of input signals (audio and MIDI). The reviewed research has been divided into two categories, classical and deep learning techniques, and summarized in order to make suggestions for future study. More than 110 publications from top journals and conferences written in English were reviewed, and each selected study was fully examined to demonstrate the feasibility of the approach used, the dataset, and the accuracy obtained. The analyzed studies show that, in general, time signature estimation is a difficult process. However, success in this research area could be an added advantage in the broader area of music genre classification using deep learning techniques. Suggestions for improved estimates and future research projects are also discussed.
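A classical approach reviewed in this area reads the beats-per-bar ratio from periodicities in an onset-strength envelope. The toy sketch below (my illustration, not from any surveyed paper) accents every n-th beat in a synthetic envelope and recovers n from autocorrelation peaks at multiples of the beat lag.

```python
import numpy as np

def beats_per_bar(env, beat_lag, candidates=(2, 3, 4, 6)):
    """Pick the meter whose bar-length lag has the strongest autocorrelation."""
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    return max(candidates, key=lambda k: ac[k * beat_lag])

# Synthetic onset envelope: a beat every 10 frames, accented every 3rd beat,
# mimicking the downbeat pattern of a 3/4 meter.
env = np.zeros(600)
env[::10] = 1.0      # beats
env[::30] = 2.0      # accented downbeats
print(beats_per_bar(env, beat_lag=10))  # -> 3
```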


Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2666
Author(s):  
Ahmad Alzu’bi ◽  
Firas Albalas ◽  
Tawfik AL-Hadhrami ◽  
Lojin Bani Bani Younis ◽  
Amjad Bashayreh

A large number of intelligent models for masked face recognition (MFR) have recently been presented and applied in various fields, such as masked face tracking for people's safety or secure authentication. Exceptional hazards such as pandemics and fraud have noticeably accelerated the creation and sharing of relevant algorithms, which has introduced new challenges. Therefore, recognizing and authenticating people wearing masks will remain a long-standing research area, and more efficient methods are needed for real-time MFR. Machine learning has made progress in MFR and has significantly facilitated the intelligent process of detecting and authenticating persons with occluded faces. This survey organizes and reviews the recent works developed for MFR based on deep learning techniques, providing insights and thorough discussion of the development pipeline of MFR systems. State-of-the-art techniques are introduced according to the characteristics of deep network architectures and deep feature extraction strategies. The common benchmarking datasets and evaluation metrics used in the field of MFR are also discussed. Many challenges and promising research directions are highlighted. This comprehensive study considers a wide variety of recent approaches and achievements, aiming to shape a global view of the field of MFR.


2020 ◽  
Vol 32 ◽  
pp. 03024
Author(s):  
Salome Palani ◽  
Arya Kulkarni ◽  
Abishai Kochara ◽  
Kiruthika M

The use of deep learning for the detection of various thoracic diseases has been an active and challenging research area. Chest X-rays are currently the most common and globally used radiology practice for detecting thoracic diseases. Patients suffering from thoracic diseases undergo chest X-rays, which are read by radiologists, who then generate a report. However, with the increasing number of thoracic patients, a quick method to classify the disease and generate the report has become necessary. Patient history must also be considered for diagnosis. This paper offers a comparative study of the various deep learning techniques that can process chest X-rays and detect different thoracic diseases. In addition, a technique is proposed to classify 14 diseases, namely Atelectasis, Cardiomegaly, Consolidation, Edema, Effusion, Emphysema, Fibrosis, Hernia, Infiltration, Mass, Nodule, Pneumonia, Pneumothorax, and Pleural Thickening, from a given X-ray using a Residual Neural Network.
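Because the 14 findings above are not mutually exclusive (a patient can show several at once), a classifier like this typically ends in 14 sigmoid outputs thresholded per class rather than a single softmax. A minimal sketch, with hypothetical logits standing in for the network's final layer:

```python
import numpy as np

LABELS = ["Atelectasis", "Cardiomegaly", "Consolidation", "Edema",
          "Effusion", "Emphysema", "Fibrosis", "Hernia", "Infiltration",
          "Mass", "Nodule", "Pneumonia", "Pneumothorax",
          "Pleural Thickening"]

def predict_labels(logits, threshold=0.5):
    """Multi-label decision: sigmoid each logit, keep classes above threshold."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits)))
    return [name for name, p in zip(LABELS, probs) if p > threshold]

# Hypothetical logits for one X-ray (one value per finding).
logits = [-3, 2.1, -1, -2, 0.4, -4, -3, -5, -2, -1, -2, 1.5, -3, -2]
print(predict_labels(logits))
```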


Author(s):  
A. V. N. Kameswari

Abstract: When humans see an image, their brain can easily tell what the image is about, but a computer cannot do this as easily; computer vision researchers long considered it an intractable problem. With advances in deep learning techniques, the availability of huge datasets, and computing power, we can now build models that generate captions for an image. Image caption generation is a popular research area of deep learning that deals with image understanding and producing a language description for that image. Generating well-formed sentences requires both syntactic and semantic understanding of the language. Describing the content of an image in accurately formed sentences is a very challenging task, but it could also have a great impact, for instance by helping visually impaired people better understand the content of images. The biggest challenge is creating a description that captures not only the objects contained in an image, but also how these objects relate to each other. This paper uses the Flickr_8K dataset; its Flickr8k_text folder contains Flickr8k.token, the main file of the dataset, which lists each image name and its respective caption separated by a newline (“\n”). A CNN is used for extracting features from the image; we use the pre-trained Xception model. An LSTM uses the information from the CNN to generate a description of the image. The Flickr8k_text folder also contains Flickr_8k.trainImages.txt, a list of 6000 image names used for training. After the CNN-LSTM model is defined, an image file is passed as a parameter through the command prompt to test the caption generator; it generates a caption for the image, and accuracy is measured by calculating the BLEU score between the generated and reference captions. Keywords: Image Caption Generator, Convolutional Neural Network, Long Short-Term Memory, BLEU score, Flickr_8K
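The BLEU score mentioned above compares generated and reference captions by n-gram overlap. A minimal sketch of BLEU-1 (modified unigram precision) with made-up example captions; real evaluations usually combine higher-order n-grams and a brevity penalty (e.g. via nltk's sentence_bleu).

```python
from collections import Counter

def bleu1(candidate, reference):
    """Modified unigram precision: clipped word matches / candidate length."""
    cand, ref = candidate.split(), reference.split()
    ref_counts = Counter(ref)
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / len(cand)

generated = "a dog runs on the grass"
reference = "the dog is running on the grass"
print(round(bleu1(generated, reference), 3))  # 4 of 6 words match -> 0.667
```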


2020 ◽  
Author(s):  
Robert Arntfield ◽  
Blake VanBerlo ◽  
Thamer Alaifan ◽  
Nathan Phelps ◽  
Matt White ◽  
...  

Abstract
Objectives: Lung ultrasound (LUS) is a portable, low-cost respiratory imaging tool but is challenged by user dependence and a lack of diagnostic specificity. It is unknown whether the advantages of LUS implementation could be paired with deep learning techniques to match or exceed human-level diagnostic specificity among similar-appearing, pathological LUS images.
Design: A convolutional neural network (CNN) was trained on LUS images with B lines of different etiologies. CNN diagnostic performance, validated using a 10% data holdback set, was compared to that of surveyed LUS-competent physicians.
Setting: Two tertiary Canadian hospitals.
Participants: 600 LUS videos (121,381 frames) of B lines from 243 distinct patients with either 1) COVID-19, 2) non-COVID acute respiratory distress syndrome (NCOVID), or 3) hydrostatic pulmonary edema (HPE).
Results: On the independent dataset, the trained CNN discriminated between COVID (AUC 1.0), NCOVID (AUC 0.934), and HPE (AUC 1.0) pathologies. This was significantly better than physician performance (AUCs of 0.697, 0.704, and 0.967 for the COVID, NCOVID, and HPE classes, respectively; p < 0.01).
Conclusions: A deep learning model can distinguish similar-appearing LUS pathology, including COVID-19, that cannot be distinguished by humans. The performance gap between humans and the model suggests that subvisible biomarkers may exist within ultrasound images, and multi-center research is merited.
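The AUC metric reported above has a simple interpretation: for binary labels it is the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A short pairwise-comparison sketch with hypothetical scores (not the study's data):

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: P(score of a positive > score of a negative)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()   # ties count as half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical per-video class probabilities vs. true labels.
print(auc([0.9, 0.8, 0.7, 0.3, 0.2], [1, 1, 1, 0, 0]))  # perfect ranking -> 1.0
```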


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6451
Author(s):  
Nadia Nasri ◽  
Sergio Orts-Escolano ◽  
Miguel Cazorla

In recent years, advances in Artificial Intelligence (AI) have played an important role in human well-being, in particular by enabling novel forms of human-computer interaction for people with disabilities. In this paper, we propose a sEMG-controlled 3D game that leverages a deep learning-based architecture for real-time gesture recognition. The 3D game experience developed in the study is focused on rehabilitation exercises, allowing individuals with certain disabilities to use low-cost sEMG sensors to control the game. For this purpose, we acquired a novel dataset of seven gestures using the Myo armband device, which we used to train the proposed deep learning model. The captured signals were used as input to a Conv-GRU architecture to classify the gestures. Further, we ran a live system with the participation of different individuals and analyzed the neural network’s classification of hand gestures. Finally, we also evaluated our system, testing it for 20 rounds with new participants, and analyzed the results in a user study.
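The recurrent core of a Conv-GRU classifier like this can be sketched as the standard GRU recurrence; in the full architecture a convolutional front end would produce the per-timestep features x_t from the sEMG signal. The dimensions and random weights below are stand-ins, not the paper's configuration.

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: gates decide how much of h to overwrite."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)                   # update gate
    r = sig(Wr @ x + Ur @ h)                   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 8, 4                               # toy feature / hidden sizes
W = [rng.normal(size=s) for s in [(d_h, d_in), (d_h, d_h)] * 3]
h = np.zeros(d_h)
for t in range(10):                            # run over a short sequence
    h = gru_step(rng.normal(size=d_in), h, *W)
print(h.shape)  # final hidden state, which a classification head would consume
```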

