The impact of the tool effect on polar anisotropy parameters derived from sonic waveform data

Author(s):  
John J. Walsh
2021 ◽  
Vol 13 (5) ◽  
pp. 948
Author(s):  
Lei Cui ◽  
Ziti Jiao ◽  
Kaiguang Zhao ◽  
Mei Sun ◽  
Yadong Dong ◽  
...  

Clumping index (CI) is a canopy structural variable important for modeling the terrestrial biosphere, but its retrieval from remote sensing data remains one of the least reliable. The majority of regional or global CI products available so far were generated from multiangle optical reflectance data. However, these reflectance-based estimates have well-known limitations, such as the mere use of a linear relationship between the normalized difference hotspot and darkspot (NDHD) and CI, uncertainties in the bidirectional reflectance distribution function (BRDF) models used to calculate the NDHD, and coarse spatial resolutions (e.g., hundreds of meters to several kilometers). To remedy these limitations and develop alternative methods for large-scale CI mapping, here we explored the use of spaceborne lidar—the Geoscience Laser Altimeter System (GLAS)—and proposed a semi-physical algorithm to estimate CI at the footprint level. Our algorithm was formulated to leverage the full vertical canopy profile information of the GLAS full-waveform data; it converted raw waveforms to forest canopy gap distributions and gap fractions of random canopies, which were used to estimate CI based on radiative transfer theory and a revised Beer–Lambert model. We tested our algorithm over two areas in China—the Saihanba National Forest Park and Heilongjiang Province—and assessed its accuracy against field-measured CI and MODIS CI products. We found that reliable estimation of CI was possible only for GLAS waveforms with high signal-to-noise ratios (e.g., >65) and on gentle slopes (e.g., <12°). Our GLAS-based CI estimates for high-quality waveforms compared well with field-based CI (R2 = 0.72, RMSE = 0.07, and bias = 0.02) but correlated less with MODIS CI (R2 = 0.26, RMSE = 0.12, and bias = 0.04). This difference highlights the scale effect when comparing products with vastly different resolutions. Overall, our analyses represent the first attempt to use spaceborne lidar to retrieve high-resolution forest CI, and our algorithm holds promise for mapping CI globally.
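
A minimal sketch of the gap-fraction-to-CI step, assuming the classic Nilson-type ratio of log gap probabilities; the function and the example numbers are illustrative and are not taken from the paper's revised Beer–Lambert model:

```python
import numpy as np

def clumping_index(gap_fraction_measured, gap_fraction_random):
    """Estimate clumping index (CI) from canopy gap fractions.

    Classic Nilson-type formulation: CI is the ratio of the log gap
    probability of the actual canopy to that of a hypothetical random
    canopy with the same leaf area.
    """
    return np.log(gap_fraction_measured) / np.log(gap_fraction_random)

# A clumped canopy transmits more light than a random one with the same
# leaf area, so its gap fraction is larger and CI < 1.
print(clumping_index(0.35, 0.20))  # ~0.65, a moderately clumped canopy
```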


Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. WC15-WC23 ◽  
Author(s):  
Sergius Dell ◽  
Anna Pronevich ◽  
Boris Kashtan ◽  
Dirk Gajewski

Diffractions play an important role in seismic processing because they can be used for high-resolution imaging and for the analysis of subsurface properties such as the velocity distribution. Until now, however, only isotropic media have been considered in diffraction imaging. We have developed a method in which we derive an approximation for the diffraction response of a general 2D anisotropic medium. Our traveltime expression is formulated as a double-square-root equation that allows us to accurately and reliably describe diffraction traveltimes. The diffraction response depends on the ray velocity, which varies with angle and thus offset. To eliminate the angle dependency, we expand the ray velocity in a Taylor series around a reference ray. We choose the fastest ray of the diffraction response, i.e., the ray corresponding to the diffraction apex, as the reference ray. Moreover, in an anisotropic medium, the location of the diffraction apex may be shifted with respect to the surface projection of the diffractor location. To properly approximate the diffraction response, we account for this shift. The proposed approximation depends on four independent parameters: the emergence angle of the fastest ray, the ray velocity along this ray, and the first- and second-order derivatives of the ray velocity with respect to the ray angle. These attributes can be determined from the data by a coherence analysis. For the special case of homogeneous media with polar anisotropy, we establish relations between the anisotropy parameters and the parameters of the diffraction operator. The stacking attributes of the new diffraction operator are therefore suitable for determining anisotropy parameters from the data. Moreover, because diffractions provide better illumination than reflections, they are particularly suited to analyzing seismic anisotropy at near offsets.
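
To make the double-square-root (DSR) form concrete, here is a hedged sketch of the isotropic special case; the paper's anisotropic operator replaces the constant velocity below with an angle-dependent ray velocity expanded in a Taylor series around the apex ray:

```python
import numpy as np

def diffraction_traveltime(xs, xr, x0, t0, v):
    """Isotropic double-square-root (DSR) diffraction traveltime.

    xs, xr : source and receiver surface positions
    x0     : surface projection of the diffractor
    t0     : two-way zero-offset traveltime to the diffractor
    v      : constant medium velocity (the anisotropic version uses an
             angle-dependent ray velocity instead)
    """
    leg = lambda x: np.sqrt((t0 / 2.0) ** 2 + ((x - x0) / v) ** 2)
    return leg(xs) + leg(xr)

# A zero-offset trace directly above the diffractor recovers t0:
print(diffraction_traveltime(xs=0.0, xr=0.0, x0=0.0, t0=1.0, v=2000.0))  # 1.0 s
```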


2020 ◽  
Vol 237 ◽  
pp. 08005
Author(s):  
Zhijie Zhang ◽  
Huan Xie ◽  
Xiaohua Tong ◽  
Binbin Li ◽  
Yunwen Li ◽  
...  

Filtering is an essential step in the denoising of satellite altimetry full-waveform data, since any deformation or distortion in the shape of the waveform causes errors in range estimation and adversely affects the subsequent waveform decomposition. This paper comprehensively evaluated the performance of popular filtering approaches (the Gaussian filter, Taubin filter, wavelet filter, and EMD-based filter) using simulated waveform data and ICESat/GLAS waveforms. First, the optimal parameters of each filtering algorithm were selected by ergodic tests according to the principle of each filter. Second, Gaussian functions fitted with the Levenberg-Marquardt method were used to decompose the full waveform and extract the waveform parameters (i.e., peak amplitude, position, and half-width). Third, by comparing the SNR and RMSE of the simulated waveforms before and after filtering, together with the consistency ratio and the average errors of peak amplitude, position, and half-width for each Gaussian component of the fitted waveforms, we verified the effectiveness of these filters and analyzed their influence on decomposition accuracy. Both the simulation experiments and the ICESat/GLAS results suggested that the Taubin filter performed best, with the lowest peak position error, indicating an advantage in full-waveform denoising and a contribution to better full-waveform decomposition; however, it introduces more parameters that need to be selected. The self-adaptive EMD-based approach achieved the highest consistency, which shows that it is more suitable for denoising in satellite altimetry full-waveform decomposition.
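
As an illustration of the decomposition step, the following hedged sketch fits a single Gaussian component to a synthetic waveform with SciPy's Levenberg-Marquardt solver; the signal and starting values are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    """Single Gaussian component: peak amplitude, position, width scale."""
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Synthetic "filtered" waveform: one echo plus weak noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 100, 500)
waveform = gaussian(t, 1.0, 42.0, 4.0) + 0.01 * rng.standard_normal(t.size)

# Levenberg-Marquardt fit (curve_fit's method="lm").
popt, _ = curve_fit(gaussian, t, waveform, p0=[0.8, 40.0, 5.0], method="lm")
amp, pos, sigma = popt
print(f"peak amplitude={amp:.2f}, position={pos:.2f}, width={sigma:.2f}")
```

Real GLAS waveforms contain several overlapping echoes, so the fit would use a sum of such components with one (a, mu, sigma) triple per peak.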


Geophysics ◽  
1999 ◽  
Vol 64 (5) ◽  
pp. 1502-1511 ◽  
Author(s):  
Xiaoming Tang ◽  
Raghu K. Chunduru

This study presents an effective technique for obtaining formation azimuthal shear‐wave anisotropy parameters from four‐component dipole acoustic array waveform data. The proposed technique utilizes the splitting of fast and slow principal flexural waves in an anisotropic formation. First, the principal waves are computed from the four‐component data using the dipole source orientation with respect to the fast shear‐wave polarization azimuth. Then, the fast and slow principal waves are compared for all possible receiver combinations in the receiver array to suppress noise effects. This constructs an objective function to invert the waveform data for anisotropy estimates. Finally, the anisotropy and the fast shear azimuth are simultaneously determined by finding the global minimum of the objective function. The waveform inversion procedure provides a reliable and robust method for obtaining formation anisotropy from four‐component dipole acoustic logging. Field data examples are used to demonstrate the application and features of the proposed technique. A comparison study using the new and conventional techniques shows that the new technique not only reduces the ambiguity in the fast azimuth determination but also improves the accuracy of the anisotropy estimate. Some basic quality indicators of the new technique, along with the anisotropy analysis results, are presented to demonstrate the practical application of the inversion technique.
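
The rotation to principal waves can be sketched with the standard Alford-style four-component rotation; this is a generic illustration of the first step only, not the paper's full inversion:

```python
import numpy as np

def principal_waves(xx, xy, yx, yy, theta):
    """Rotate four-component dipole waveforms by angle theta (radians)
    to estimate the fast (FP) and slow (SP) principal flexural waves.

    Standard Alford-style rotation underlying four-component splitting
    analysis; xx, xy, yx, yy are the recorded component waveforms.
    """
    c, s = np.cos(theta), np.sin(theta)
    fp = xx * c**2 + (xy + yx) * s * c + yy * s**2
    sp = xx * s**2 - (xy + yx) * s * c + yy * c**2
    return fp, sp

# A search over theta that best separates FP and SP (e.g., minimizing
# their cross-energy across receiver pairs) yields a first estimate of
# the fast shear-wave azimuth.
```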


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Ruiqiang Liu ◽  
Songhui Li ◽  
Guoxin Zhang ◽  
Shenyou Song ◽  
Jianda Xin

Void defects can easily form between the steel plate and the concrete at the joints of a sandwich-structured immersed tunnel (SSIT) owing to its complicated internal structure, and these defects affect the overall bearing capacity of the main structure of the immersed tube tunnel. A prototype experiment was conducted to study the application of the impact imaging method to the nondestructive detection of void defects in SSITs. The detection criterion for the impact imaging method was established based on features of the waveform data. Nevertheless, the influence of steel plate thickness, material properties, void location, and structure on the detection accuracy of the impact imaging method is unclear. Therefore, numerical simulation was applied to study these influencing factors by establishing models for different conditions. Good agreement between the experimental and numerical results was observed for the response waveforms collected from the inspection area. Using the calculation model and the material parameters identified and validated in the prototype experiment, several sets of numerical simulations covering all influencing factors were performed. The application scope and sensitivity of the impact imaging method are recommended to reduce misjudgement in practical applications and improve detection accuracy.
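
For context, a hedged sketch of the kind of waveform feature such a detection criterion can build on, using the classic impact-echo thickness resonance; the material values are placeholders, not the paper's:

```python
import numpy as np

def impact_echo_peak(signal, dt):
    """Return the dominant frequency (Hz) of an impact response."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, dt)
    return freqs[np.argmax(spectrum)]

# Classic impact-echo check: for a solid section the thickness resonance
# is roughly f = beta * cp / (2 * T), with beta ~ 0.96 for plates; a void
# shallower than T reflects energy earlier and shifts the observed peak
# to a higher frequency than this expectation.
cp, T, beta = 4000.0, 0.5, 0.96      # P-wave speed (m/s), thickness (m)
f_expected = beta * cp / (2.0 * T)   # ~3840 Hz for a sound section
```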


2011 ◽  
Vol 45 (s1) ◽  
pp. 29-36 ◽  
Author(s):  
Brian Gross ◽  
Deborah Dahl ◽  
Larry Nielsen

Abstract It is well known that a high frequency of false and/or unnecessary alarms from patient monitoring devices causes "alarm fatigue" in critical care, but little is known about the impact on care for less acute patients located outside the critical care areas, such as the traditional medical/surgical (med/surg) floor. METHODS: As part of a larger population management study, we initiated continuous physiological monitoring on 79 beds of floor patients in a community hospital. To quantify the patient monitoring alarm load for subacute medical and surgical floor patients, we assessed alarm data from April 2009 to January 2010. A standard critical care monitoring system (Philips IntelliVue MP-5 and Telemetry) was installed and set to the default alarm limits. All waveform data available for the patient (typically ECG, RESP, and PPG at 125 Hz, 8-bit), all alarm conditions declared by the monitoring system, and 1-minute parameter trend data were saved to disk every 8 hours for all patients. A monitoring care protocol determined whether the patient was monitored via the hardwired bedside monitor or wirelessly via telemetry. Alarms were not announced on the care unit; instead, notifications were the responsibility of remote telehealth center personnel. We retrospectively evaluated the frequency of alarms over specific physiologic thresholds (n = 4104 patients) and adjudicated all alarms in a smaller sample (n = 30 patients). RESULTS: For all patients, the average duration of monitoring per patient was 16.5 hours, with a standard deviation (s) of 8.3 hours and a median of 22 hours. The average number of alarms (all severities) per patient was 69.7 (s = 90.3, median = 28). Adjusted for the duration of monitoring, the average rate per patient per day was 95.6 (s = 124.2, median = 34.2) alarms. In the adjudicated sample (n = 30 patients), 34% of critical alarms (lethal arrhythmias, extreme high or low heart rate [HR], extreme desaturation, apnea) were true, and 63% of high-priority alarms (high or low HR, high or low RR, Low SpO2, Pause, Missed Beat, Pair PVCs, Pacer Not Pace, Non Sustain VT, Irregular HR, Multiform) were true. Analysis of the alarm history showed that the HR alarm load could be reduced by more than 50% with a simple adjustment of the high-HR limit from 120 to 130 bpm, and that the SpO2 alarm load could be reduced by 36% or 65% by lowering the SpO2 limit from 90% to 85% or 80%, respectively. CONCLUSION: 1) Standard critical care alarm limits appear to be too sensitive for subacute care areas of the hospital. 2) For most patients these alarm limits do not create a significant alarm load; however, for a small number of patients they do. 3) Alarm loads can be controlled with alarm limit settings appropriate to the population. 4) Current technology for HR and SpO2 appears suitable for continuous monitoring of this population.
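
A small sketch of the normalization and limit-adjustment arithmetic described above; the HR sample values are invented for illustration:

```python
import numpy as np

def alarms_per_patient_day(n_alarms, hours_monitored):
    """Normalize an alarm count to a per-patient, per-day rate."""
    return n_alarms / (hours_monitored / 24.0)

# Dividing the reported means (69.7 alarms over 16.5 h) gives ~101/day;
# the reported 95.6 is presumably the mean of the per-patient rates,
# which is a different statistic from the ratio of the means.
print(alarms_per_patient_day(69.7, 16.5))

# Limit-adjustment "what if": fraction of high-HR alarms that survive
# raising the threshold from 120 to 130 bpm.
hr_at_alarm = np.array([121, 124, 126, 131, 128, 135, 122, 140])
print((hr_at_alarm > 130).sum() / (hr_at_alarm > 120).sum())  # 0.375 kept
```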


2021 ◽  
Vol 873 (1) ◽  
pp. 012022
Author(s):  
A W Baskara ◽  
D P Sahara ◽  
A D Nugraha ◽  
A Muhari ◽  
A A Rusdin ◽  
...  

Abstract The Ambon Mw 6.5 earthquake on September 26th, 2019, caused severe damage and significantly increased seismicity around Ambon Island and the surrounding areas. The mainshock was followed by aftershocks whose spatial distribution added to the destruction in this region. We investigated the aftershock sequence to reveal the effect of the mainshock on the in-situ stress field, including the possibility of reactivation of existing faults and the generation of aftershocks. We inferred centroid moment tensors (CMTs) for significant aftershocks with Mw greater than 4.0 using waveform data recorded from October 18th to December 15th, 2019. The aftershock focal mechanisms were determined using the Bayesian full-waveform inversion code ISOLA-ObsPy, an approach that provides the uncertainty of the CMT model parameters. From the ten CMT solutions inferred in three seismic clusters, we found that the majority of events have a strike-slip mechanism. Four events located south of the N-S trend have a dextral strike-slip fault type, reflecting the rupture of the mainshock's fault plane. Three events in the Ambon Island cluster are dextral strike-slip, confirming the presence of fault reactivation. Meanwhile, three CMT solutions in the north show dextral strike-slip faulting and may belong to the mainshock's main fault, connected with the cluster in the south.
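
A hedged sketch of the data-gathering step with ObsPy's FDSN client (the CMT inversion itself is performed by ISOLA-ObsPy, whose API is not shown here); the network, station, and channel codes are placeholders, not the paper's:

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# Pull broadband records for part of the aftershock window near Ambon
# from a public FDSN service.
client = Client("IRIS")
t0 = UTCDateTime("2019-10-18T00:00:00")
st = client.get_waveforms(network="GE", station="*", location="*",
                          channel="BH?", starttime=t0, endtime=t0 + 3600)

# Basic preprocessing before a long-period moment tensor inversion.
st.detrend("demean")
st.filter("bandpass", freqmin=0.02, freqmax=0.1)  # a typical CMT band
print(st)
```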


2019 ◽  
Vol 11 (1) ◽  
Author(s):  
Sebastian Felix Wirtz ◽  
Stefan Bach ◽  
Dirk Söffker

Recently, acoustic emission-based damage classification schemes gained attention for health monitoring of composites. Here, the reliable detection of different micro-mechanical damage mechanisms is important because of the adverse effect on fatigue life. It is well known that classical parameters obtained from acoustic emission measurements in the time domain are strongly dependent on the propagation path and testing conditions. However, signal attenuation, which can be observed due to geometric spreading, material-related damping, and dispersion, is typically neglected. Therefore, it is generally assumed that frequency domain features are reliable descriptors of damage due to the invariance of peak frequencies to the propagation path. Based on this assumption, several data-driven approaches for damage detection are developed. However, in contrast to metallic materials, where low attenuation is observed, acoustic emission signals are strongly attenuated in polymer matrix composites due to the viscoelastic behavior of the matrix. For instance, it is reported in the literature that at high frequencies most of the acoustic emission signal energy is attenuated after a propagation distance of 250 mm. Therefore, new experimental results of acoustic emission attenuation in composites are presented in this paper. Particular focus is placed on the frequency dependence of acoustic emission attenuation and the effect of different loading conditions. The specimens are manufactured from aerospace material. Carbon fiber reinforced polymer plates are used as a typical specimen geometry. Different acoustic emission sources are considered and the related attenuation coefficients are determined. Furthermore, full waveform data are analyzed in the time and time-frequency domains using the wavelet transform. From the experimental results it can be concluded that consideration of wave propagation-related signal attenuation is important for the interpretation of acoustic emission measurements for health monitoring of composites. Consequently, the impact on the detectability of different physical damage mechanisms using data-driven classification approaches has to be considered.
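
A minimal sketch of how an attenuation coefficient can be estimated from peak amplitudes at two propagation distances; this is a generic textbook formulation and the numbers are illustrative, not the paper's procedure:

```python
import numpy as np

def attenuation_coefficient(a1, a2, d1, d2):
    """Attenuation in dB per unit distance from peak amplitudes a1, a2
    measured at propagation distances d1 < d2. Geometric spreading,
    damping, and dispersion are lumped together here; the paper
    distinguishes these contributions."""
    return 20.0 * np.log10(a1 / a2) / (d2 - d1)

# Example: the amplitude halving over 250 mm corresponds to ~24 dB/m.
print(attenuation_coefficient(1.0, 0.5, 0.0, 0.25))
```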


Author(s):  
Wenhuan Kuang ◽  
Congcong Yuan ◽  
Jie Zhang

Abstract The stability and robustness of determining earthquake magnitude are of great significance in earthquake monitoring and seismic hazard assessment. The routine workflow of determining earthquake local magnitude, such as the widely used Richter magnitude, may result in an unreliable measurement of earthquake magnitude because it relies on individual amplitude measurement of a single station, which is prone to be influenced by natural impulsive noise or anthropogenic noise. In this study, we present an automated estimation of earthquake magnitude by applying a deep-learning algorithm named magnitude neural network (MagNet) based on the full-waveform recordings from a network of seismic stations at the China Seismic Experimental Site (CSES). The MagNet consists of a compression component that extracts the global features of waveform data and an expansion component that yields a Gaussian probability distribution representing the magnitude estimation. The MagNet is trained with an augmented data set, which includes 21,700 training samples with evenly distributed magnitudes. From the prediction results on the test data set, the mean errors and standard deviations are −0.017 and 0.21, respectively, for 600 moderate earthquakes with magnitudes ranging from 3 to 5.9, and −0.011 and 0.14, respectively, for 70 small earthquakes with magnitudes ranging from 2.3 to 3.5. However, it remains challenging for large earthquakes (magnitude > 6.5), due to the lack of sufficient historical large earthquakes as training data. In addition, testing results show that the new method is capable of minimizing the impact of abnormal noises in the data. These results demonstrate the validity and merits of the proposed deep-learning method in predicting earthquake magnitude automatically.
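
A hedged PyTorch sketch of the described compression-expansion design with a Gaussian output head; all layer sizes and input shapes are illustrative assumptions, not the paper's MagNet:

```python
import torch
import torch.nn as nn

class MagNetSketch(nn.Module):
    """Sketch of a MagNet-style architecture: a convolutional compression
    stage over multi-component waveforms followed by an expansion head
    that outputs the mean and std of a Gaussian over magnitude."""

    def __init__(self, n_channels=3):
        super().__init__()
        self.compress = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global features of the waveform
        )
        self.expand = nn.Linear(32, 2)  # -> (mean, log_std)

    def forward(self, x):
        z = self.compress(x).squeeze(-1)
        mean, log_std = self.expand(z).unbind(-1)
        return mean, log_std.exp()

# Training would minimize the Gaussian negative log-likelihood, e.g.
# loss = 0.5 * ((m_true - mean) / std) ** 2 + torch.log(std).
model = MagNetSketch()
mean, std = model(torch.randn(4, 3, 3000))  # a batch of 3-comp. waveforms
```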

