Techniques for Evaluating the Depth of a Crack by Means of Laser Spot Thermography

Proceedings ◽  
2019 ◽  
Vol 27 (1) ◽  
pp. 20
Author(s):  
Gabriele Inglese ◽  
Agnese Scalbi ◽  
Roberto Olmi

Laser Spot Thermography is a useful tool in nondestructive crack detection. Our goal is to estimate the depth of a fracture from external thermal measurements. First, we transform a set of real 3D data into an effective 2D data set. Then we use the 2D data set as input to different methods for solving an inverse problem for the heat equation. Our guiding idea is that an effort in the direction of the mathematical analysis of the problem rewards us in terms of computational cost.
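
The abstract does not spell out the 3D-to-2D reduction or the forward model; a minimal Python sketch, assuming the reduction is an average of the thermogram stack along the crack axis and using a placeholder forward model fitted by least squares (file names, array shapes, and the model itself are hypothetical), could look like this:

import numpy as np
from scipy.optimize import least_squares

# thermograms: hypothetical 3D data set (time, y, x) of surface temperatures
thermograms = np.load("thermograms.npy")          # shape (n_t, n_y, n_x), assumed

# 3D -> 2D reduction: average along the axis parallel to the crack (here: y),
# an assumed choice; the paper's effective-2D construction may differ.
data_2d = thermograms.mean(axis=1)                 # shape (n_t, n_x)

def forward_model(depth, n_t, n_x):
    """Placeholder forward solver: surface temperature predicted for a crack of
    a given depth. A real implementation would solve the 2D heat equation with
    the crack as an internal boundary."""
    t = np.arange(n_t)[:, None]
    x = np.arange(n_x)[None, :]
    return np.exp(-((x - n_x / 2) ** 2) / (10.0 + depth)) / np.sqrt(1.0 + t)

def residuals(params):
    (depth,) = params
    return (forward_model(depth, *data_2d.shape) - data_2d).ravel()

fit = least_squares(residuals, x0=[1.0], bounds=(0.0, np.inf))
print("estimated crack depth (arbitrary units):", fit.x[0])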

2003 ◽  
Vol 42 (05) ◽  
pp. 215-219
Author(s):  
G. Platsch ◽  
A. Schwarz ◽  
K. Schmiedehausen ◽  
B. Tomandl ◽  
W. Huk ◽  
...  

Summary: Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. Therefore we evaluated the performance and time needed for fusing MRI and SPECT images using a semiautomated dedicated software. Patients, material and method: In 32 patients regional cerebral blood flow was measured using 99mTc ethylcystein dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata. Twelve of the MRI data sets were acquired using a 3D-T1w MPRAGE sequence, 20 with a 2D acquisition technique and different echo sequences. Image fusion was performed on a Syngo workstation using an entropy-minimizing algorithm by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, that for manual realignment after an automated but insufficient fusion. Results: The mean time of the automated fusion procedure was 123 s. It was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (50-886 s). Conclusion: The fusion of 3D MRI data sets lasted significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in the remaining 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use.
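
The entropy-minimizing algorithm is not described further in the abstract; a minimal 2D sketch of rigid registration by minimizing the joint-histogram entropy of two slices (a common formulation assumed here, not necessarily the Syngo implementation; file names are hypothetical) might look like:

import numpy as np
from scipy import ndimage
from scipy.optimize import minimize

def joint_entropy(a, b, bins=32):
    """Shannon entropy of the joint intensity histogram of two images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def cost(params, fixed, moving):
    """Rigid 2D transform (rotation in degrees, x/y shift in pixels) applied to
    the moving image, scored by joint entropy against the fixed image."""
    angle, dx, dy = params
    warped = ndimage.rotate(moving, angle, reshape=False, order=1)
    warped = ndimage.shift(warped, (dy, dx), order=1)
    return joint_entropy(fixed, warped)

# fixed_img, moving_img: hypothetical 2D slices (e.g. MRI and SPECT) as arrays
fixed_img = np.load("mri_slice.npy")
moving_img = np.load("spect_slice.npy")

# Powell's method avoids the need for gradients of the histogram-based cost.
result = minimize(cost, x0=[0.0, 0.0, 0.0], args=(fixed_img, moving_img),
                  method="Powell")
print("estimated rotation and shift:", result.x)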


2018 ◽  
Author(s):  
Peter De Wolf ◽  
Zhuangqun Huang ◽  
Bede Pittenger

Abstract: Methods are available to measure conductivity, charge, surface potential, carrier density, piezoelectric and other electrical properties with nanometer-scale resolution. One of these methods, scanning microwave impedance microscopy (sMIM), has gained interest due to its capability to measure the full impedance (capacitive and resistive parts) with high sensitivity and high spatial resolution. This paper introduces a novel data-cube approach that combines sMIM imaging and sMIM point spectroscopy, producing an integrated and complete 3D data set. This approach replaces the subjective practice of guessing locations of interest (for single-point spectroscopy) with a big-data approach, resulting in higher-dimensional data that can be sliced along any axis or plane and is conducive to principal component analysis or other machine learning approaches to data reduction. The data-cube approach is also applicable to other AFM-based electrical characterization modes.
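
As an illustration only (the acquisition software and file formats are not described in the abstract), a per-pixel spectral data cube can be sliced along any axis and reduced with PCA roughly as follows; the file name and array layout are assumptions:

import numpy as np
from sklearn.decomposition import PCA

# cube: hypothetical sMIM data cube, shape (n_y, n_x, n_bias) -- one impedance
# spectrum (e.g. capacitance vs. bias voltage) per pixel.
cube = np.load("smim_cube.npy")
n_y, n_x, n_bias = cube.shape

# Slice along any axis or plane: an image at one bias point,
# or a full spectrum at one pixel.
image_at_bias = cube[:, :, 10]
spectrum_at_pixel = cube[5, 7, :]

# PCA over all per-pixel spectra for data reduction.
spectra = cube.reshape(n_y * n_x, n_bias)
pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)           # (n_pixels, 3)
component_maps = scores.reshape(n_y, n_x, 3)  # spatial maps of the components
print("explained variance ratios:", pca.explained_variance_ratio_)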


Animals ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 50
Author(s):  
Jennifer Salau ◽  
Jan Henning Haas ◽  
Wolfgang Junge ◽  
Georg Thaller

Machine learning methods have become increasingly important in animal science, and the success of an automated application using machine learning often depends on the right choice of method for the respective problem and data set. The recognition of objects in 3D data is still a widely studied topic and is especially challenging when it comes to partitioning objects into predefined segments. In this study, two machine learning approaches were utilized for the recognition of body parts of dairy cows from 3D point clouds, i.e., sets of data points in space. The low-cost off-the-shelf depth sensor Microsoft Kinect V1 has been used in various studies related to dairy cows. The 3D data were gathered from a multi-Kinect recording unit designed to record Holstein Friesian cows from both sides while walking freely, from three different camera positions. For the determination of the body parts head, rump, back, legs and udder, five properties of the pixels in the depth maps (row index, column index, depth value, variance, mean curvature) were used as features in the training data set. For each camera position, a k-nearest-neighbour classifier and a neural network were trained and compared afterwards. Both methods showed small Hamming losses (between 0.007 and 0.027 for k-nearest-neighbour (kNN) classification and between 0.045 and 0.079 for neural networks) and could be considered successful regarding the classification of pixels to body parts. However, the kNN classifier was superior, reaching overall accuracies of 0.888 to 0.976, varying with the camera position. Precision and recall values associated with individual body parts ranged from 0.84 to 1 and from 0.83 to 1, respectively. On the other hand, once trained, kNN classification incurs higher runtime costs in terms of computation time and memory than the neural networks. The cost-versus-accuracy ratio of each methodology needs to be taken into account when deciding which method should be implemented in the application.
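
A minimal sketch of the comparison described above, assuming the five per-pixel features and body-part labels are already assembled into arrays (file names, train/test split, and hyperparameters are hypothetical, not the study's settings):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import hamming_loss, accuracy_score

# X: per-pixel features (row index, column index, depth value, variance,
#    mean curvature); y: body-part labels (head, rump, back, legs, udder).
X = np.load("pixel_features.npy")      # shape (n_pixels, 5), assumed
y = np.load("pixel_labels.npy")        # shape (n_pixels,), assumed

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_train, y_train)

for name, model in [("kNN", knn), ("neural network", mlp)]:
    pred = model.predict(X_test)
    print(name, "Hamming loss:", hamming_loss(y_test, pred),
          "accuracy:", accuracy_score(y_test, pred))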


2019 ◽  
pp. 81-88
Author(s):  
Mishio Kawashita ◽  
Yaroslav Kurylev ◽  
Hideo Soga

2021 ◽  
Author(s):  
Niloufar Nowrouzi ◽  
Lynn Kistler ◽  
Eric Lund ◽  
Kai Zhao

Sawtooth events are repeated injections of energetic particles at geosynchronous orbit. Although studies have shown that 94% of sawtooth events occur during magnetic storm times, the main factor that causes a sawtooth event is unknown. Simulations have suggested that heavy ions like O+ may play a role in driving the sawtooth mode by increasing the magnetotail pressure and causing the magnetic tail to stretch. O+ ions located in the nightside auroral region have direct access to the near-Earth plasma sheet. O+ in the dayside cusp can reach the midtail plasma sheet when the convection velocity is sufficiently strong. Whether the dayside or the nightside source is more important is not known.

We show results of a statistical study of the variation of the O+ and H+ outflow flux during SIR-driven and ICME-driven sawtooth events. We perform a superposed epoch analysis of the ion outflow using the TEAMS (Time-of-Flight Energy Angle Mass Spectrograph) instrument on the FAST spacecraft. TEAMS measures the ion composition over the energy range of 1 eV e⁻¹ to 12 keV e⁻¹. We have applied major corrections and calibrations to the TEAMS data (producing a 3D data set, anode calibration, mass classification, removing the ram effect and incorporating dead-time corrections) and produced a data set for four data species (H+, O+, and He+). From 1996 to 2007, we have data for 133 orbits of CME-driven and 103 orbits of SIR-driven sawtooth events with an altitude above 1500 km. We found that:

- The averaged O+ outflow flux is more intense in the dayside cusp than on the nightside, both before and after onset time.
- Before onset, an intense averaged outflow flux is seen on the dawnside during CME events. This outflow decreases after onset time.
- In both CME-driven and SIR-driven events, the averaged O+ outflow increases after onset time on the nightside and in the dayside cusp. This increase is greater on the nightside than in the cusp.

We will extend this study by performing a similar statistical analysis for H+ outflow and will compare the H+ result with the O+ result.
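
A minimal sketch of a superposed epoch analysis of outflow flux, assuming a time series of flux measurements and a list of onset times (variable names, window length and binning are hypothetical choices, not those of the study):

import numpy as np

# times: measurement times (s); flux: O+ outflow flux at those times;
# onsets: sawtooth injection onset times. All inputs are hypothetical.
times = np.load("flux_times.npy")
flux = np.load("oplus_flux.npy")
onsets = np.load("onset_times.npy")

window = 3 * 3600           # +/- 3 hours around each onset (assumed)
bin_width = 600             # 10-minute epoch bins (assumed)
bins = np.arange(-window, window + bin_width, bin_width)
stack = []

for t0 in onsets:
    # epoch time of every measurement relative to this onset
    epoch = times - t0
    mask = np.abs(epoch) <= window
    # average flux in each epoch bin for this event
    idx = np.digitize(epoch[mask], bins) - 1
    binned = np.full(len(bins) - 1, np.nan)
    for i in range(len(bins) - 1):
        sel = idx == i
        if sel.any():
            binned[i] = flux[mask][sel].mean()
    stack.append(binned)

superposed = np.nanmean(np.vstack(stack), axis=0)   # averaged epoch profile
centers = 0.5 * (bins[:-1] + bins[1:])
print(list(zip(centers, superposed)))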


Big Data ◽  
2016 ◽  
pp. 261-287
Author(s):  
Keqin Wu ◽  
Song Zhang

While uncertainty in scientific data attracts increasing research interest in the visualization community, two critical issues remain insufficiently studied: (1) visualizing the impact of the uncertainty of a data set on its features and (2) interactively exploring 3D or large 2D data sets with uncertainties. In this chapter, a suite of feature-based techniques is developed to address these issues. First, an interactive visualization tool for exploring scalar data with data-level, contour-level, and topology-level uncertainties is developed. Second, a framework for visualizing feature-level uncertainty is proposed to study uncertain feature deviations in both scalar and vector data sets. With quantified representation and interactive capability, the proposed feature-based visualizations provide new insights into the uncertainties of both the data and their features, which would otherwise remain unknown with the visualization of data uncertainties alone.
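
The chapter's techniques are not reproduced here; as a rough illustration of contour-level uncertainty, one simple formulation (assumed, not necessarily the chapter's) computes, per grid point, the probability that an ensemble of scalar fields exceeds an isovalue:

import numpy as np
import matplotlib.pyplot as plt

# ensemble: hypothetical set of 2D scalar fields, shape (n_members, n_y, n_x)
ensemble = np.load("ensemble_fields.npy")
isovalue = 0.5

# Per-pixel probability that the field exceeds the isovalue: values near 0 or 1
# mean the contour location is certain; values near 0.5 mean it is uncertain.
crossing_probability = (ensemble > isovalue).mean(axis=0)

plt.imshow(crossing_probability, cmap="RdBu_r", vmin=0.0, vmax=1.0)
plt.colorbar(label="P(value > isovalue)")
plt.title("Contour-level uncertainty (exceedance probability)")
plt.show()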


Author(s):  
Carlos Eduardo Thomaz ◽  
Vagner do Amaral ◽  
Gilson Antonio Giraldi ◽  
Edson Caoru Kitani ◽  
João Ricardo Sato ◽  
...  

This chapter describes a multi-linear discriminant method of constructing and quantifying statistically significant changes in human identity photographs. The approach is based on a general multivariate two-stage linear framework that addresses the small sample size problem in high-dimensional spaces. Starting with a 2D data set of frontal face images, the authors determine a most characteristic direction of change by organizing the data according to the patterns of interest. Experiments on publicly available face image sets show that the multi-linear approach produces visually plausible results for gender, facial expression and aging facial changes in a simple and efficient way. The authors believe that such an approach could be widely applied for modeling and reconstruction in face recognition and possibly in identifying subjects after a lapse of time.
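
A minimal sketch of a two-stage PCA-then-LDA pipeline for the small-sample-size problem on vectorized frontal face images (file names and labels are hypothetical, and the chapter's exact multi-linear formulation may differ):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: vectorized frontal face images, shape (n_samples, n_pixels);
# y: group labels for the pattern of interest (e.g. 0 = male, 1 = female).
X = np.load("face_images.npy")
y = np.load("face_labels.npy")

# Stage 1: PCA to a subspace of dimension at most n_samples - n_classes,
# which keeps the within-class scatter matrix invertible.
n_classes = len(np.unique(y))
pca = PCA(n_components=min(X.shape[0] - n_classes, 50))
X_pca = pca.fit_transform(X)

# Stage 2: LDA in the PCA subspace; its discriminant axis plays the role of a
# "most characteristic direction of change" between the groups.
lda = LinearDiscriminantAnalysis(n_components=1)
lda.fit(X_pca, y)

# Map the discriminant direction back to image space for visualization.
direction_image = pca.components_.T @ lda.scalings_[:, 0]
print("direction vector length in image space:", direction_image.shape)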


2004 ◽  
Author(s):  
Olga M. Kosheleva ◽  
Sergio D. Cabrera ◽  
Bryan E. Usevitch ◽  
Alberto Aguirre ◽  
Edward Vidal, Jr.
Keyword(s):  
Bit Rate ◽  
Data Set ◽  
