Bolt-Loosening Monitoring Framework Using an Image-Based Deep Learning and Graphical Model

Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3382 ◽  
Author(s):  
Hai Chien Pham ◽  
Quoc-Bao Ta ◽  
Jeong-Tae Kim ◽  
Duc-Duy Ho ◽  
Xuan-Linh Tran ◽  
...  

In this study, we investigate a novel idea of using synthetic images of bolts, generated from a graphical model, to train a deep learning model for loosened bolt detection. Firstly, a framework for bolt-loosening detection using image-based deep learning and computer graphics is proposed. Next, the feasibility of the proposed framework is demonstrated through the bolt-loosening monitoring of a lab-scaled bolted joint model. For practicality, the proposed idea is evaluated on the real-scale bolted connections of a historical truss bridge in Danang, Vietnam. The results show that the deep learning model trained on the synthesized images can achieve accurate bolt recognition and looseness detection. The proposed methodology could help to reduce the time and cost associated with the collection of high-quality training data and further accelerate the practical adoption of vision-based deep learning models trained on synthetic data.
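To make the train-on-synthetic, test-on-real pipeline concrete, here is a minimal sketch: a small convolutional classifier is fitted purely on rendered bolt images and then evaluated on real photographs. This is an illustration in Keras, not the authors' implementation; the directory layout (synthetic/tight, synthetic/loose, and a matching real/ folder), image size, and all hyperparameters are assumptions.

```python
# Minimal sketch: train a CNN on synthetic bolt renderings, evaluate on real photos.
# Directory names and hyperparameters are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (96, 96)

# Synthetic renderings from the graphical model, one subfolder per class
# (e.g. synthetic/tight, synthetic/loose).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "synthetic", image_size=IMG_SIZE, batch_size=32)
# Real photographs held out for evaluation (real/tight, real/loose).
real_ds = tf.keras.utils.image_dataset_from_directory(
    "real", image_size=IMG_SIZE, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),  # tight vs. loosened
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)   # train purely on synthetic images
model.evaluate(real_ds)          # test transfer to real images
```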

2021 ◽  
Vol 13 (10) ◽  
pp. 2003
Author(s):  
Daeyong Jin ◽  
Eojin Lee ◽  
Kyonghwan Kwon ◽  
Taeyun Kim

In this study, we used convolutional neural networks (CNNs), well-known deep learning models suitable for image data processing, to estimate the temporal and spatial distribution of chlorophyll-a in a bay. The training data required for the construction of the deep learning model were acquired from satellite ocean color data and a hydrodynamic model. Chlorophyll-a, total suspended sediment (TSS), visibility, and colored dissolved organic matter (CDOM) were extracted from the satellite ocean color data, and water level, currents, temperature, and salinity were generated from the hydrodynamic model. We developed CNN Model I, which estimates the concentration of chlorophyll-a using a 48 × 27 overall image, and CNN Model II, which uses a 7 × 7 segmented image. Because CNN Model II conducts estimation using only data around the points of interest, its quantity of training data is more than 300 times larger than that of CNN Model I. Consequently, it was possible to extract and analyze the inherent patterns in the training data, improving the predictive ability of the deep learning model. The average root mean square error (RMSE) obtained by applying CNN Model II was 0.191, and when the prediction was good, the coefficient of determination (R²) exceeded 0.91. Finally, we performed a sensitivity analysis, which revealed that CDOM is the most influential variable in estimating the spatiotemporal distribution of chlorophyll-a.
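A minimal sketch of the patch-based design of CNN Model II follows: a 7 × 7 window of predictor fields is extracted around every point of interest, and a small CNN regresses chlorophyll-a at the window's centre, which is why the sample count grows so much relative to whole-image training. The field names, array shapes, and network sizes below are illustrative assumptions.

```python
# Sketch of the patch-based approach (CNN Model II): extract a 7 x 7 window of
# predictor fields around each point and regress chlorophyll-a at the centre.
# Shapes and channel meanings are illustrative assumptions.
import numpy as np
import tensorflow as tf

# fields: (time, height, width, channels) stack of predictors such as
# TSS, CDOM, visibility, water level, currents, temperature, salinity.
T, H, W, C = 100, 48, 27, 8
fields = np.random.rand(T, H, W, C).astype("float32")   # placeholder data
chl = np.random.rand(T, H, W).astype("float32")          # target chlorophyll-a

def extract_patches(fields, target, k=7):
    """Slide a k x k window over each frame; one training sample per centre."""
    r = k // 2
    X, y = [], []
    for t in range(fields.shape[0]):
        for i in range(r, fields.shape[1] - r):
            for j in range(r, fields.shape[2] - r):
                X.append(fields[t, i - r:i + r + 1, j - r:j + r + 1, :])
                y.append(target[t, i, j])
    return np.stack(X), np.array(y)

X, y = extract_patches(fields, chl)   # far more samples than whole-image training

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(7, 7, C)),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),          # chlorophyll-a at the patch centre
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=5, batch_size=256)
```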


2021 ◽  
Author(s):  
J. Annrose ◽  
N. Herald Anantha Rufus ◽  
C. R. Edwin Selva Rex ◽  
D. Godwin Immanuel

Abstract Bean, botanically called Phaseolus vulgaris L., belongs to the Fabaceae family. During bean disease identification, unnecessary economic losses occur due to delays in the treatment period, incorrect treatment, and lack of knowledge. Existing deep learning and machine learning techniques suffer from several issues, such as high computational complexity, the high cost associated with training data, long execution times, noise, feature dimensionality, low accuracy, and low speed. To tackle these problems, we have proposed a hybrid deep learning model with an Archimedes optimization algorithm (HDL-AOA) for bean disease classification. In this work, there are five bean classes, of which one is a healthy class, whereas the remaining four classes indicate different diseases, namely bean halo blight, Pythium diseases, Rhizoctonia root rot, and anthracnose abnormalities, acquired from the Soybean (Large) Data Set. The hybrid deep learning technique is the combination of wavelet packet decomposition (WPD) and long short-term memory (LSTM). Initially, the WPD decomposes the input images into four sub-series. For these sub-series, four LSTM networks were developed. During bean disease classification, the Archimedes optimization algorithm (AOA) enhances the classification accuracy of the multiple LSTM networks. The HDL-AOA model for bean disease classification is implemented in MATLAB. The proposed model accomplishes a lower MAPE than other existing methods. Finally, the proposed HDL-AOA model delivers excellent classification results, outperforming existing methods on evaluation measures such as accuracy, specificity, sensitivity, precision, recall, and F-score.
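One plausible reading of the WPD-LSTM combination, sketched below, is a one-level 2-D wavelet packet decomposition producing four sub-bands per image (approximation plus horizontal, vertical, and diagonal details), each read row by row by its own LSTM before fusion. The wavelet choice, branch sizes, and fusion by concatenation are assumptions, and the AOA tuning stage is omitted.

```python
# Sketch of the hybrid WPD + LSTM classifier: a one-level 2-D wavelet packet
# decomposition yields four sub-bands per image; each sub-band is read row by
# row as a sequence by its own LSTM, and the branch outputs are fused for the
# five-class decision. Wavelet, sizes, and fusion strategy are assumptions.
import numpy as np
import pywt
import tensorflow as tf

def wpd_subbands(img):
    """Decompose a 2-D grayscale image into 4 sub-bands (a, h, v, d)."""
    wp = pywt.WaveletPacket2D(data=img, wavelet="db1", maxlevel=1)
    return [wp[node].data.astype("float32") for node in ("a", "h", "v", "d")]

# Placeholder dataset: N grayscale leaf images and 5 disease labels.
N, H, W, NUM_CLASSES = 64, 64, 64, 5
images = np.random.rand(N, H, W)
labels = np.random.randint(0, NUM_CLASSES, size=N)

# Each sub-band has shape (H//2, W//2); treat its rows as time steps.
subbands = [np.stack(b) for b in zip(*[wpd_subbands(im) for im in images])]

inputs, branches = [], []
for _ in range(4):
    inp = tf.keras.Input(shape=(H // 2, W // 2))
    branches.append(tf.keras.layers.LSTM(32)(inp))  # one LSTM per sub-series
    inputs.append(inp)

merged = tf.keras.layers.Concatenate()(branches)
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(merged)
model = tf.keras.Model(inputs, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(subbands, labels, epochs=5, batch_size=16)
```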


2019 ◽  
Author(s):  
Mojtaba Haghighatlari ◽  
Gaurav Vishwakarma ◽  
Mohammad Atif Faiz Afzal ◽  
Johannes Hachmann

We present a multitask, physics-infused deep learning model to accurately and efficiently predict refractive indices (RIs) of organic molecules, and we apply it to a library of 1.5 million compounds. We show that it outperforms earlier machine learning models by a significant margin, and that incorporating known physics into data-derived models provides valuable guardrails. Using a transfer learning approach, we augment the model to reproduce results consistent with higher-level computational chemistry training data, but with a considerably reduced number of corresponding calculations. Prediction errors of machine learning models are typically smallest for commonly observed target property values, consistent with the distribution of the training data. However, since our goal is to identify candidates with unusually large RI values, we propose a strategy to boost the performance of our model in the remoter areas of the RI distribution: We bias the model with respect to the under-represented classes of molecules that have values in the high-RI regime. By adopting a metric popular in web search engines, we evaluate our effectiveness in ranking top candidates. We confirm that the models developed in this study can reliably predict the RIs of the top 1,000 compounds, and are thus able to capture their ranking. We believe that this is the first study to develop a data-derived model that ensures the reliability of RI predictions by model augmentation in the extrapolation region on such a large scale. These results underscore the tremendous potential of machine learning in facilitating molecular (hyper)screening approaches on a massive scale and in accelerating the discovery of new compounds and materials, such as organic molecules with high-RI for applications in opto-electronics.
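The physics-infused idea can be sketched as a network that predicts intermediate physical quantities and converts them to a refractive index through a known relation. The sketch below uses the Lorentz-Lorenz equation as that relation, with polarizability and molar volume as the two task heads; this framing, along with the descriptor size and layer widths, is an assumption for illustration, as the abstract does not spell out the architecture.

```python
# Sketch of a physics-infused multitask model: shared layers predict molecular
# polarizability (alpha) and molar volume (Vm), and the Lorentz-Lorenz relation
#   (n^2 - 1) / (n^2 + 2) = 4 * pi * N_A * alpha / (3 * Vm)
# converts them into a refractive index n. The choice of intermediate
# quantities, descriptor size, and layer widths are illustrative assumptions.
import math
import tensorflow as tf

N_A = 6.02214076e23  # Avogadro's number, 1/mol

def lorentz_lorenz(args):
    alpha, vm = args                              # alpha in cm^3, Vm in cm^3/mol
    r = 4.0 * math.pi * N_A * alpha / (3.0 * vm)  # dimensionless refraction ratio
    r = tf.clip_by_value(r, 0.0, 0.99)            # keep the ratio physical
    return tf.sqrt((1.0 + 2.0 * r) / (1.0 - r))   # solve for n

desc = tf.keras.Input(shape=(128,))               # molecular descriptor vector
h = tf.keras.layers.Dense(256, activation="relu")(desc)
h = tf.keras.layers.Dense(128, activation="relu")(h)
alpha = tf.keras.layers.Dense(1, activation="softplus", name="alpha")(h)  # > 0
vm = tf.keras.layers.Dense(1, activation="softplus", name="vm")(h)        # > 0
ri = tf.keras.layers.Lambda(lorentz_lorenz, name="ri")([alpha, vm])

# Multitask training supervises the physical intermediates and the final RI,
# so the known physics acts as a guardrail on the learned mapping.
model = tf.keras.Model(desc, [alpha, vm, ri])
model.compile(optimizer="adam", loss={"alpha": "mse", "vm": "mse", "ri": "mse"})
```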


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Sunil Kumar Prabhakar ◽  
Dong-Ok Won

To unlock the information present in clinical descriptions, automatic medical text classification is highly useful in the arena of natural language processing (NLP). For medical text classification tasks, machine learning techniques seem to be quite effective; however, they require extensive human effort to create the labeled training data. For clinical and translational research, a huge quantity of detailed patient information, such as disease status, lab tests, medication history, side effects, and treatment outcomes, has been collected in electronic format, and it serves as a valuable data source for further analysis. Processing this volume of medical text efficiently is therefore a considerable challenge. In this work, a medical text classification paradigm using two novel deep learning architectures is proposed to mitigate the human effort. The first approach implements a quad channel hybrid long short-term memory (QC-LSTM) deep learning model utilizing four channels; the second develops a hybrid bidirectional gated recurrent unit (BiGRU) deep learning model with multihead attention. The proposed methodology is validated on two medical text datasets, and a comprehensive analysis is conducted. The best classification accuracy, 96.72%, is obtained with the proposed QC-LSTM deep learning model, and a classification accuracy of 95.76% is obtained with the proposed hybrid BiGRU deep learning model.
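The second architecture can be sketched compactly: token embeddings feed a bidirectional GRU, multihead self-attention re-weights the resulting hidden states, and pooled features drive a softmax classifier. This is a schematic in Keras, not the authors' exact model; the vocabulary size, dimensions, head count, and class count are assumptions.

```python
# Sketch of a hybrid BiGRU classifier with multihead attention for medical
# text. Vocabulary size, dimensions, and head count are illustrative.
import tensorflow as tf

VOCAB, MAXLEN, NUM_CLASSES = 20000, 256, 5

tokens = tf.keras.Input(shape=(MAXLEN,), dtype="int32")
x = tf.keras.layers.Embedding(VOCAB, 128)(tokens)
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.GRU(64, return_sequences=True))(x)
# Multihead self-attention over the BiGRU hidden states.
attn = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=32)(x, x)
x = tf.keras.layers.GlobalAveragePooling1D()(attn)
x = tf.keras.layers.Dropout(0.3)(x)
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(tokens, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```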



2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Gyeong-Hoon Lee ◽  
Jeil Jo ◽  
Cheong Hee Park

Jamming is a form of electronic warfare in which jammers radiate interfering signals toward an enemy radar, disrupting its receiver. The conventional method for determining an effective jamming technique for a threat signal is based on a library that stores the appropriate jamming method for each signal type. However, a library is of limited use when a threat signal of a new type, or one that has been altered from existing types, is received. In this paper, we study two methods of predicting the appropriate jamming technique for a received threat signal using deep learning: a deep neural network applied to feature values extracted manually from the pulse description word (PDW) list, and a long short-term memory (LSTM) network that takes the PDW list as input. Using training data consisting of pairs of threat signals and corresponding jamming techniques, a deep learning model is trained to output jamming techniques for threat signal inputs. The training data are constructed from the information in the library, but the trained deep learning model predicts jamming techniques for received threat signals without consulting the library. The prediction performance and time complexity of the two proposed methods are compared. In particular, the ability to predict jamming techniques for unknown types of radar signals not used in training the model is analyzed.
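The second (LSTM-based) approach can be sketched as follows: each threat signal is a sequence of per-pulse PDW feature vectors, and the network maps the sequence to a jamming-technique class. The feature set, sequence length, and class count here are illustrative assumptions.

```python
# Sketch of the LSTM approach: read the threat signal's PDW list (one feature
# vector per pulse, e.g. frequency, pulse width, amplitude, time of arrival)
# and predict the jamming technique as a class label. Sizes are illustrative.
import numpy as np
import tensorflow as tf

N_PULSES, N_FEATURES, N_TECHNIQUES = 64, 5, 10

# Placeholder training pairs built from a library-derived dataset of
# threat signals and their assigned jamming techniques.
pdw_lists = np.random.rand(1000, N_PULSES, N_FEATURES).astype("float32")
techniques = np.random.randint(0, N_TECHNIQUES, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(N_PULSES, N_FEATURES)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_TECHNIQUES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(pdw_lists, techniques, epochs=5, batch_size=32)

# At inference time the library is no longer consulted: the model maps a
# received PDW list directly to a recommended jamming technique.
pred = model.predict(pdw_lists[:1]).argmax(axis=-1)
```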


2019 ◽  
Vol 9 (20) ◽  
pp. 4431 ◽  
Author(s):  
Jeonghoon Kwak ◽  
Yunsick Sung

Micro unmanned aircraft systems (micro UAS)-related technical research is important because micro UAS can perform missions remotely. When an omnidirectional camera is mounted, it captures all areas surrounding the micro UAS. A normal field of view (NFoV) is the view presented as an image to a user within a 360-degree video. With an end-to-end control method, the 360-degree video can automatically present NFoVs to the user without manual control. However, if many distinct signals can control the 360-degree video, training the deep learning model requires a considerable amount of training data. There is therefore a need to autonomously determine, and thereby reduce, the set of signals used to control the 360-degree video. This paper proposes a method to autonomously determine the outputs of an end-to-end control-based deep learning model that controls 360-degree video for micro UAS controllers. The outputs of the deep learning model are determined automatically using the K-means algorithm, and the trained model then presents NFoVs to the user within the 360-degree video. The proposed method was experimentally verified by providing NFoVs in which the control signals were set either by the proposed method or by user definition. Comparing convolutional neural network (CNN) models trained on the two signal sets, the NFoVs provided by the proposed method were 24.4% more similar to the user's actual NFoVs than those of the user-defined approach.
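The K-means step can be sketched as follows, under the assumption (ours, for illustration) that the control signals are per-frame yaw/pitch adjustments: clustering the recorded signals yields a small set of representative commands that become the discrete outputs of the end-to-end CNN.

```python
# Sketch of the signal-reduction step: recorded view-control signals (e.g.
# per-frame yaw/pitch deltas) are clustered with K-means, and the cluster
# centres become the discrete outputs of the end-to-end CNN controller.
# The signal representation and K are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Placeholder log of continuous control signals: (yaw delta, pitch delta).
signals = np.random.randn(5000, 2).astype("float32")

K = 8  # number of discrete control outputs for the CNN to choose between
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(signals)

# Each recorded signal is replaced by its nearest cluster label, giving the
# classification targets for end-to-end training; the centre itself is the
# control command replayed when the CNN predicts that class.
labels = kmeans.labels_
control_table = kmeans.cluster_centers_   # shape (K, 2)

def command_for(class_id):
    """Map a CNN class prediction back to a concrete control signal."""
    return control_table[class_id]
```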


2020 ◽  
Author(s):  
Daniel Galea ◽  
Bryan Lawrence ◽  
Julian Kunkel

Finding and identifying important phenomena in large volumes of simulation data consumes time and resources. Deep Learning offers a route to improve speeds and costs. In this work we demonstrate the application of Deep Learning in identifying data which contains various classes of tropical cyclone. Our initial application is in re-analysis data, but the eventual goal is to use this system during numerical simulation to identify data of interest before writing it out.

A Deep Learning model has been developed to help identify data containing varying intensities of tropical cyclones. The model uses some convolutional layers to build up a pattern to look for, and a fully-connected classifier to predict whether a tropical cyclone is present in the input. Other techniques such as batch normalization and dropout were tested. The model was trained on a subset of the ERA-Interim dataset from the 1st of January 1979 until the 31st of July 2017, with the relevant labels obtained from the IBTrACS dataset. The model obtained an accuracy of 99.08% on a test set, which was a 20% subset of the original dataset.

An advantage of this model is that it does not rely on thresholds set a priori, such as a minimum of sea level pressure, a maximum of vorticity, or a measure of the depth and strength of deep convection, making it more objective than previous detection methods. Also, given that current methods follow non-trivial algorithms, the Deep Learning model is expected to have the advantage of producing the required prediction much quicker, making it viable to implement within an existing numerical simulation.

Most current methods also apply different thresholds for different basins (planetary regions). In principle, the globally trained model should avoid the necessity for such differences; however, it was found that while differing thresholds were not required, training data for specific regions was required to get similar accuracy when only individual basins were examined.

The existing version, with greater than 99% accuracy globally and around 91% when trained only on cases from the Western Pacific and Western Atlantic basins, has been trained on ERA-Interim data. The next steps with this work will involve assessing the suitability of the pre-trained model for different data, and deploying it within a running numerical simulation.
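A minimal sketch of a detector of this kind follows, using convolutional blocks with batch normalization and a fully-connected classifier with dropout as the abstract describes; the input fields, resolution, and layer sizes are assumptions rather than the authors' configuration.

```python
# Sketch of a binary tropical-cyclone detector: convolutional blocks with
# batch normalization, then a fully-connected classifier with dropout.
# Input fields, resolution, and sizes are illustrative assumptions.
import tensorflow as tf

H, W, C = 64, 64, 3   # e.g. crops of MSLP, 10 m wind speed, vorticity

model = tf.keras.Sequential([
    tf.keras.Input(shape=(H, W, C)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cyclone present or not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```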

