Basic Concepts of Neural Networks and Deep Learning and Their Applications for Pipeline Damage Detection

Author(s):  
Sina Razvarz ◽  
Raheleh Jafari ◽  
Alexander Gegov
2021 ◽  
Vol 20 ◽  
pp. 153303382110163
Author(s):  
Danju Huang ◽  
Han Bai ◽  
Li Wang ◽  
Yu Hou ◽  
Lan Li ◽  
...  

With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNN), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As the development of DL gets closer to clinical practice, radiation oncologists will need to be more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology based on different task categories of DL algorithms. This work clarifies the possibility of further development of DL in radiation oncology.


2017 ◽  
Vol 32 (5) ◽  
pp. 361-378 ◽  
Author(s):  
Young-Jin Cha ◽  
Wooram Choi ◽  
Oral Büyüköztürk

Author(s):  
Michael Biehl

The exchange of ideas between computer science and statistical physics has advanced the understanding of machine learning and inference significantly. This interdisciplinary approach is currently regaining momentum due to the revived interest in neural networks and deep learning. Methods borrowed from statistical mechanics complement other approaches to the theory of computational and statistical learning. In this brief review, we outline and illustrate some of the basic concepts. We exemplify the role of the statistical physics approach in terms of a particularly important contribution: the computation of typical learning curves in student-teacher scenarios of supervised learning. Two by-now-classical examples from the literature illustrate the approach: the learning of a linearly separable rule by a perceptron with continuous and with discrete weights, respectively. We address these prototypical problems in terms of the simplifying limit of stochastic training at high formal temperature and obtain the corresponding learning curves.
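The student-teacher setup analyzed in the review can be made concrete with a small Monte Carlo sketch. This is an illustration, not code from the paper: a Hebbian student learns a linearly separable teacher rule, and the generalization error eps_g = arccos(R)/pi (R being the teacher-student overlap) shrinks as the load alpha = P/N grows.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # input dimension

def gen_error(alpha):
    """Monte Carlo estimate of the generalization error of a Hebbian
    student learning a linearly separable teacher rule."""
    P = int(alpha * N)                     # number of training examples
    teacher = rng.standard_normal(N)       # teacher weight vector
    X = rng.standard_normal((P, N))        # training inputs
    y = np.sign(X @ teacher)               # teacher-assigned labels
    student = (y[:, None] * X).sum(axis=0)  # Hebb rule
    # overlap R between normalized teacher and student vectors
    R = teacher @ student / (np.linalg.norm(teacher) * np.linalg.norm(student))
    return np.arccos(R) / np.pi            # eps_g = arccos(R)/pi

for alpha in (0.5, 2.0, 8.0):
    print(f"alpha={alpha}: eps_g ~ {gen_error(alpha):.3f}")
```

The monotone decay of eps_g with alpha is the kind of "typical learning curve" the statistical-physics treatment computes analytically in the thermodynamic limit.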


Author(s):  
JZT Sim ◽  
QW Fong ◽  
WM Huang ◽  
CH Tan

With the advent of artificial intelligence (AI), machines are increasingly being used to complete complicated tasks, yielding remarkable results. Machine learning (ML) is the most relevant subset of AI in medicine, and it will soon become an integral part of our everyday practice. Therefore, physicians should acquaint themselves with ML and AI, and with their role as an enabler rather than a competitor. Herein, we introduce basic concepts and terms used in AI and ML, and aim to demystify commonly used AI/ML algorithms, such as neural networks/deep learning and decision trees, and their application domains in computer vision and natural language processing, through specific examples. We discuss how machines are already being used to augment the physician's decision-making process, and postulate the potential impact of ML on medical practice and medical research based on its current capabilities and known limitations. Moreover, we discuss the feasibility of full machine autonomy in medicine.


2021 ◽  
Vol 11 (6) ◽  
pp. 2610
Author(s):  
Jongbin Won ◽  
Jong-Woong Park ◽  
Soojin Jang ◽  
Kyohoon Jin ◽  
Youngbin Kim

In the field of structural-health monitoring, vibration-based structural damage detection techniques have been practically implemented in recent decades for structural condition assessment. With the development of deep-learning networks that make automatic feature extraction and high classification accuracy possible, deep-learning-based structural damage detection has been gaining significant attention. Deep-learning neural networks come with fixed input and output sizes, and input data must be downsampled or cropped to the predetermined input size of the network to obtain the desired output. However, because the length of input data (i.e., sensing data) is associated with the excitation quality of a structure, adjusting the size of the input data while maintaining the excitation quality is critical to ensure high accuracy of deep-learning-based structural damage detection. To address this issue, natural-excitation-technique-based data normalization and the use of 1-D convolutional neural networks for automated structural damage detection are presented. The presented approach converts input data to a predetermined size using cross-correlation and uses a convolutional network to extract damage-sensitive features for automated structural damage identification. Numerical simulations were conducted on a simply supported beam model excited by random and traffic loadings, and the performance was validated under various scenarios. The proposed method successfully detected the location of damage on a beam under random and traffic loadings, with accuracies of 99.90% and 99.20%, respectively.
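The normalization step described above, converting arbitrary-length sensing data to a fixed network input size via cross-correlation, can be sketched as follows. The function names, the 256-sample output length, the single reference channel, and the hand-rolled convolution are illustrative assumptions, not the authors' implementation; a real model would stack trainable 1-D convolution layers in a deep-learning framework.

```python
import numpy as np

def fixed_length_correlation(x, ref, out_len=256):
    """Convert an arbitrary-length response signal into a fixed-length
    cross-correlation function (in the spirit of the natural excitation
    technique): correlate against a reference channel and keep the
    first out_len non-negative lags."""
    x = (x - x.mean()) / x.std()
    ref = (ref - ref.mean()) / ref.std()
    full = np.correlate(x, ref, mode="full") / len(x)
    mid = len(full) // 2                   # zero-lag index
    return full[mid:mid + out_len]

def conv1d_relu(signal, kernel, stride=1):
    """A bare valid 1-D convolution followed by ReLU -- the building
    block a 1-D CNN stacks for damage-sensitive feature extraction."""
    k = len(kernel)
    out = np.array([signal[i:i + k] @ kernel
                    for i in range(0, len(signal) - k + 1, stride)])
    return np.maximum(out, 0.0)

rng = np.random.default_rng(1)
raw = rng.standard_normal(5000)   # arbitrary-length sensing data
ref = rng.standard_normal(5000)   # reference channel
net_input = fixed_length_correlation(raw, ref)   # always length 256
features = conv1d_relu(net_input, rng.standard_normal(8), stride=2)
print(net_input.shape, features.shape)
```

However long the raw record is, the correlation function handed to the network is always `out_len` samples, which is what lets a fixed-input-size CNN consume data of varying excitation duration.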


1996 ◽  
Author(s):  
Kevin Napolitano ◽  
John Kosmatka

2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: transformer and sequence-to-sequence based recurrent neural networks with attention. Levenshtein augmentation demonstrated increased performance over non-augmented and conventionally SMILES-randomized data when used for training of baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as attentional gain: an enhancement in the pattern recognition capabilities of the underlying network for molecular motifs.
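The abstract does not spell out its pairing heuristic, but the edit distance the method is named after is standard. As a minimal sketch, here is the classic dynamic-programming Levenshtein distance applied to a hypothetical reactant/product SMILES pair; the example molecules are illustrative, not from the paper.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance, here used to score
    character-level similarity between two SMILES strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Illustrative reactant/product pair: an esterification written in SMILES.
reactant = "CC(=O)O"   # acetic acid
product = "CC(=O)OC"   # methyl acetate
print(levenshtein(reactant, product))  # -> 1
```

A low distance like this signals strong local sub-sequence overlap between reactant and product strings, which is the kind of similarity the augmentation exploits when building training pairs.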

