zero crossings
Recently Published Documents

TOTAL DOCUMENTS: 270 (five years: 30)
H-INDEX: 31 (five years: 3)

2022 ◽  
Vol 14 (2) ◽  
pp. 283
Author(s):  
Biao Qi ◽  
Longxu Jin ◽  
Guoning Li ◽  
Yu Zhang ◽  
Qiang Li ◽  
...  

This study combines, on the basis of the co-occurrence analysis shearlet transform (CAST), the latent low-rank representation (LatLRR) with a regularization term that counts zero crossings in differences to fuse heterogeneous images. First, the source images are decomposed by the CAST method into base-layer and detail-layer sub-images. Second, for the base-layer components, which carry larger-scale intensity variation, LatLRR is an effective way to extract salient information from the source images and is applied to generate a saliency map that adaptively weights the fusion of the base-layer images. Meanwhile, the count of zero crossings in differences, a classic device in optimization, is designed as the regularization term that drives the fusion of the detail-layer images. In this way, the gradient information concealed in the source images is extracted as fully as possible, and the fused image therefore retains more abundant edge information. Quantitative and qualitative analysis of experimental results on publicly available datasets demonstrates that the proposed method outperforms other state-of-the-art algorithms in enhancing contrast while keeping the fusion result close to the sources.
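The regularization quantity named above, the number of zero crossings in the differences of a signal, can be sketched in a few lines. This is an illustrative stand-in for the term described in the abstract, not the authors' implementation; the function name and test signals are invented:

```python
import numpy as np

def zero_crossings_in_differences(x):
    """Count sign changes in the first-order differences of a 1-D signal.

    More sign changes in the differences indicate more oscillatory detail,
    which is the kind of gradient information the regularizer measures.
    """
    d = np.diff(x)
    # Ignore exact zeros so a flat run is not counted as a crossing.
    s = np.sign(d[d != 0])
    return int(np.sum(s[:-1] * s[1:] < 0))

ramp = np.arange(10.0)                         # monotone: differences keep one sign
wave = np.sin(np.linspace(0, 4 * np.pi, 50))   # oscillatory: differences flip sign
print(zero_crossings_in_differences(ramp))     # 0
print(zero_crossings_in_differences(wave))
```

In a detail-layer fusion, such a count (or a relaxation of it) would enter the objective as a penalty or reward per candidate fused patch.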


2021 ◽  
Vol 11 (17) ◽  
pp. 8084
Author(s):  
Eric Ballestero ◽  
Brian Hamilton ◽  
Noé Jiménez ◽  
Vicent Romero-García ◽  
Jean-Philippe Groby ◽  
...  

Most simulations involving metamaterials require complex physics to be solved on refined meshing grids. However, it can prove challenging to embed the local physical conditions created by such metamaterials in much wider computational scenes because of the increased meshing load. We therefore present a framework for simulating complex structures with detailed geometries, such as metamaterials, in large finite-difference time-domain (FDTD) computing environments by reducing them to their equivalent surface impedance represented by a parallel-series RLC circuit. This reduction simplifies the physics involved and drastically reduces both the meshing load of the model and the resulting calculation time. Emphasis is placed on scattering comparisons between an acoustic metamaterial and its equivalent surface impedance through analytical and numerical methods. Additionally, the problem of fitting RLC parameters to complex impedance data obtained from transfer-matrix models is solved here using a novel approach based on zero crossings of admittance phase derivatives. Despite the simplification, the proposed framework achieves good overall agreement with the original acoustic scatterer while keeping simulation times relatively short over a vast range of frequencies.
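The numerical device at the heart of that fitting approach, locating the zero crossings of a sampled curve's derivative (i.e. the curve's extrema), can be sketched generically. The curve below is a toy stand-in for an admittance phase profile, not real transfer-matrix impedance data:

```python
import numpy as np

# Sample a toy "phase" curve on a grid that does not hit its extremum exactly.
x = np.linspace(0.0, 2 * np.pi, 10000)
phase = np.cos(x)                      # stand-in for an admittance phase curve

dphase = np.gradient(phase, x)         # numerical derivative d(phase)/dx
s = np.sign(dphase)
idx = np.where(s[:-1] * s[1:] < 0)[0]  # indices where the derivative flips sign
extrema = x[idx]                       # cos has one interior extremum, at pi
print(extrema)
```

In the paper's setting, the grid would be frequency and the located extrema of the admittance phase would anchor the RLC parameter fit.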


2021 ◽  
Vol 263 (5) ◽  
pp. 1794-1803
Author(s):  
Michal Luczynski ◽  
Stefan Brachmanski ◽  
Andrzej Dobrucki

This paper presents a method for identifying tonal signal parameters using zero-crossing detection. The signal parameters (frequency, amplitude, and phase) may change slowly in time. The described method achieves accurate detection from a small number of signal samples. The detection algorithm consists of three steps: frequency filtering, zero-crossing detection, and parameter reading. The input signal is filtered to obtain a signal consisting of a single tonal component. Zero-crossing detection then eliminates the multiple spurious zero crossings that do not occur in a pure sine wave. The frequency is derived from the rate of transitions through zero, the amplitude is the largest signal value in the analysed time interval, and the initial phase is derived from the instant at which a transition through zero occurs. The obtained parameters were used to synthesise a compensation signal in an active tonal-component reduction algorithm, and the results confirmed the high efficiency of the method.
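The parameter-reading step can be sketched for a clean sine, assuming the filtering stage has already isolated a single tonal component. All values below (sample rate, tone parameters) are made up for illustration, and the phase formula assumes the first crossing is downward, which holds here because the initial phase lies in (0, π):

```python
import numpy as np

fs = 48000.0                    # sample rate, Hz (assumed)
f0, amp, phi = 440.0, 0.5, 0.3  # hypothetical true tone parameters
t = np.arange(4096) / fs
x = amp * np.sin(2 * np.pi * f0 * t + phi)

# Zero crossings: pairs of consecutive samples with opposite sign.
idx = np.where(np.sign(x[:-1]) * np.sign(x[1:]) < 0)[0]
# Refine each crossing time by linear interpolation between the two samples.
tc = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

freq = 1.0 / (2.0 * np.mean(np.diff(tc)))   # half a period between crossings
amplitude = np.max(np.abs(x))               # largest value in the interval
phase = np.pi - 2 * np.pi * freq * tc[0]    # first (downward) crossing at phase pi
print(freq, amplitude, phase)
```

The recovered values land very close to the assumed 440 Hz, 0.5, and 0.3 rad, because linear interpolation is accurate near a sine's zeros (its inflection points).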


2021 ◽  
Vol 33 (8) ◽  
pp. 085121
Author(s):  
Zhanqi Tang ◽  
Ziye Fan ◽  
Letian Chen ◽  
Nan Jiang

2021 ◽  
Vol 15 (1) ◽  
pp. 45-57
Author(s):  
Abdolkarim Saeedi ◽  
Mohammad Karimi Moridani ◽  
Alireza Azizi

Cardiovascular disease is arguably the leading cause of death in the world. Heart function can be measured in various ways; heart sounds are commonly inspected because they can reveal a variety of heart-related diseases. This study addresses the lack of reliable models and the long training times reported on a publicly available dataset. The heart-sound set, provided by PhysioNet, consists of 3153 recordings, from which fixed five-second segments were used to evaluate the developed method. We propose a novel method based on a combined feature reduction using a genetic algorithm (GA) and principal component analysis (PCA). The authors identify eight dominant features for heart-sound classification: the mean duration of the systole interval, the standard deviation of the diastole interval, the absolute amplitude ratios of diastole to S2, S1 to systole, and S1 to diastole, zero crossings, centroid-to-centroid distance (CCdis), and mean power in the 95–295 Hz range. The reduced features are then fed to two straightforward classifiers: a weighted k-NN operating in the lower-dimensional feature space and a linear SVM that uses a linear combination of all features, yielding a robust model with up to 98.15% accuracy, the best reported on this widely used dataset. According to the experiments in this study, the developed method can be further explored for real-world heart-sound assessment.
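The PCA half of the feature-reduction pipeline (the GA half is omitted) can be sketched with a synthetic stand-in for the feature matrix. Everything here is illustrative: the data are random draws, the classifier is a bare nearest-centroid rule rather than the paper's weighted k-NN or linear SVM:

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 8))   # "normal" class, 8 raw features
X1 = rng.normal(3.0, 1.0, size=(50, 8))   # "abnormal" class, shifted mean
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

Xc = X - X.mean(axis=0)                   # center features before PCA
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                         # keep the top 2 principal components

# Nearest-centroid classification in the reduced 2-D space.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
print(accuracy)
```

The point of the reduction is that a classifier operating in the 2-D projected space can match performance in the full 8-D space at lower cost, which is the motivation the abstract gives for combining GA and PCA.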


2021 ◽  
Author(s):  
Irene Brox Nilsen ◽  
Inger Hanssen-Bauer ◽  
Ole Einar Tveito ◽  
Wai Kwok Wong

<p>This presentation describes projected changes in the number of days with zero-crossings (DZCs) for Norway, that is, days on which the maximum temperature exceeds 0 °C and the minimum temperature drops below 0 °C, as an example of how the Norwegian Centre for Climate Services disseminates climate information to various user groups. Changes in DZCs have been requested by several user groups in Norway, for instance in agriculture and the transport sector. <br>A cold bias was detected in the regional climate model ensemble for Norway (here: EURO-CORDEX), which highlighted the need to bias-adjust temperature fields before analysis. This matters for any index that depends on a fixed temperature threshold, not only for DZCs.<br>Gridded projections of changes in DZCs were produced for the period 2071–2100 relative to 1971–2000 under RCP4.5 and RCP8.5, at 1 × 1 km resolution. The projections have been made publicly available on the Norwegian Centre for Climate Services' website https://klimaservicesenter.no. In regions and seasons that are mild, the number of DZCs is projected to decrease; this decrease was found for lowland regions in spring and coastal regions in winter. In regions and seasons that are cold, the number of DZCs is projected to increase, meaning more frequent crossings of the 0 °C threshold; this increase was found for inland regions in winter and for the northernmost county, Finnmark, in spring. More frequent icing of the snowpack is therefore expected in Finnmark. This information can be used by the transport sector (e.g. winter road maintenance) and agriculture (e.g. reindeer herders) in the relevant regions. The Norwegian Centre for Climate Services disseminates information through fact sheets, web-based maps and downloadable files.</p>
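The index defined above translates directly into code: a day counts as a zero-crossing day when its maximum temperature is above 0 °C and its minimum is below. The temperatures below are invented for illustration:

```python
import numpy as np

# Daily maximum and minimum temperatures (degC) for five hypothetical days.
tmax = np.array([ 2.1, -1.0, 5.0,  0.5, -3.0])
tmin = np.array([-4.0, -6.0, 1.0, -0.2, -8.0])

# A DZC day straddles the 0 degC threshold: tmax above it, tmin below it.
dzc = int(np.sum((tmax > 0.0) & (tmin < 0.0)))
print(dzc)   # two of the five days cross zero
```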


2021 ◽  
Vol 502 (3) ◽  
pp. 4405-4425
Author(s):  
H T J Bevins ◽  
W J Handley ◽  
A Fialkov ◽  
E de Lera Acedo ◽  
L J Greenhill ◽  
...  

ABSTRACT Maximally Smooth Functions (MSFs) are a class of constrained functions whose high-order derivatives contain no inflection points or zero crossings. Consequently, they have applications to signal recovery in experiments where signals of interest are expected to be non-smooth features masked by larger smooth signals or foregrounds. They can also act as a powerful tool for diagnosing the presence of systematics. The constrained nature of MSFs makes fitting these functions a non-trivial task. We introduce maxsmooth, an open-source package that uses quadratic programming to rapidly fit MSFs. We demonstrate the efficiency and reliability of maxsmooth by comparison to commonly used fitting routines and show that we can reduce the fitting time by approximately two orders of magnitude. We also introduce, and implement in maxsmooth, Partially Smooth Functions, which are useful for describing elements of non-smooth structure in foregrounds. This work has been motivated by the problem of foreground modelling in 21-cm cosmology. We discuss applications of maxsmooth to 21-cm cosmology and highlight this with examples using data from the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) and the Large-aperture Experiment to Detect the Dark Ages (LEDA). We demonstrate the presence of a sinusoidal systematic in the EDGES data with a log-evidence difference of 86.19 ± 0.12 when compared to a pure foreground fit. MSFs are applied to data from LEDA for the first time in this paper, and we identify the presence of sinusoidal systematics. maxsmooth is pip installable and available for download at https://github.com/htjb/maxsmooth.
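The MSF condition (no zero crossings in derivatives of order two and above) can be checked numerically with repeated finite differencing. This is a rough illustration of the defining constraint, not the maxsmooth quadratic-programming fit itself; the power-law foreground and the ripple amplitude are invented:

```python
import numpy as np

def high_order_sign_changes(y, max_order=4):
    """Count sign changes in finite-difference derivatives of orders 2..max_order.

    Repeated np.diff approximates successively higher derivatives (up to a
    positive factor h**n, which does not affect signs).
    """
    changes = {}
    d = np.asarray(y, dtype=float)
    for order in range(1, max_order + 1):
        d = np.diff(d)
        if order >= 2:
            s = np.sign(d[d != 0])
            changes[order] = int(np.sum(s[:-1] * s[1:] < 0))
    return changes

x = np.linspace(1.0, 2.0, 200)
smooth = x ** -2.5                        # power law: every derivative keeps one sign
ripple = x ** -2.5 + 1e-3 * np.sin(40 * x)  # small sinusoid hiding in the foreground
print(high_order_sign_changes(smooth))
print(high_order_sign_changes(ripple))
```

A tiny additive sinusoid, invisible in the raw curve, produces sign changes in the fourth-order differences, which is why the abstract pitches MSFs as a diagnostic for masked non-smooth features and systematics.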


Author(s):  
Hamid Bentarzi ◽  
Abderrahmane Ouadi

Many models of phasor measurement units (PMUs) have been implemented; however, few dynamic models cope with changing power-system parameters. A method is therefore needed that can estimate the frequency and correct the phasors. The conventional way to determine frequency is to count zero crossings per unit time, but this method has drawbacks such as high cost and low accuracy. Moreover, after the frequency is determined, the phasor should be corrected by suitably modifying the algorithm without omitting any data. This chapter presents estimation techniques such as the discrete Fourier transform (DFT) and the smart discrete Fourier transform (SDFT) that may be used to estimate the phasors. These estimates would be incorrect if the input signals are at an off-nominal frequency, and the phase angles would drift away from their true values. To correct this, the off-nominal frequency is first estimated using techniques such as least error squares and the change in phasor measurement angle, and it is then used to correct the phasors.
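The conventional estimator criticized above can be sketched in a few lines: count zero crossings over a window and divide by twice the window length (two crossings per cycle). The sample rate and off-nominal tone are made up; note how the one-second window quantizes the estimate, illustrating the "low accuracy" the text mentions:

```python
import numpy as np

fs = 5000.0                          # sample rate, Hz (assumed)
t = np.arange(int(fs)) / fs          # roughly one second of samples
x = np.sin(2 * np.pi * 50.2 * t)     # off-nominal 50.2 Hz power-system tone

# Count sign changes between consecutive samples.
crossings = int(np.sum(np.sign(x[:-1]) * np.sign(x[1:]) < 0))
freq = crossings / (2.0 * t[-1])     # two crossings per cycle
print(freq)                          # close to, but not exactly, 50.2
```

The estimate can only change in steps of one crossing per window, so a DFT/SDFT-based phasor approach resolves off-nominal frequency far more finely.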


Author(s):  
Hamzeh Sadeghisorkhani ◽  
Ólafur Gudmundsson ◽  
Ka Lok Li ◽  
Ari Tryggvason ◽  
Björn Lund ◽  
...  

Summary Rayleigh-wave phase-velocity tomography of southern Sweden is presented using ambient seismic noise at 36 stations (630 station pairs) of the Swedish National Seismic Network (SNSN). We analyze one year (2012) of continuous recordings to obtain the first crustal image of the area based on the ambient-noise method. Time-domain cross-correlations of the vertical component between the stations are computed. Phase-velocity dispersion curves are measured in the frequency domain by matching the zero crossings of the real spectra of the cross-correlations to the zero crossings of the zeroth-order Bessel function of the first kind. We analyze the effect of uneven source distributions on the phase-velocity dispersion curves and correct for the estimated velocity bias before tomography. To estimate the azimuthal source distribution used to determine the bias, we invert amplitudes of cross-correlation envelopes in a number of period ranges. We then invert the measured, bias-corrected dispersion curves for phase-velocity maps at periods between 3 and 30 s. In addition, we investigate the effects of the phase-velocity bias corrections on the inverted tomographic maps. The difference between bias-corrected and uncorrected phase-velocity maps is small ($< 1.2 \%$), but the correction significantly reduces the residual data variance at long periods, where the bias is largest. To obtain a shear-velocity model, we invert for a one-dimensional velocity profile at each geographical node. The results show some correlation with surface geology, regional seismicity and gravity anomalies in the upper crust. Below the upper crust, the results agree well with those from other seismological methods.
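The zero-crossing matching step rests on the fact that the real cross-spectrum of ambient noise between two stations behaves like $J_0(2\pi f r / c)$, so each observed crossing at frequency $f_n$ paired with the $n$-th zero $z_n$ of $J_0$ yields a phase velocity $c(f_n) = 2\pi f_n r / z_n$. A sketch of that mapping, with an invented station separation and invented crossing frequencies (the $J_0$ zeros are standard tabulated constants):

```python
import numpy as np

r = 100e3                                  # station separation, m (assumed)
j0_zeros = np.array([2.4048256, 5.5200781, 8.6537279])  # first zeros of J0
f_n = np.array([0.0134, 0.0307, 0.0482])   # observed crossing frequencies, Hz

# One phase-velocity sample per matched crossing: c = 2*pi*f_n*r / z_n.
c = 2 * np.pi * f_n * r / j0_zeros
print(np.round(c))                         # ~3.5 km/s, a plausible crustal value
```

Matching crossings to the wrong Bessel zeros (an off-by-one in $n$) shifts the whole dispersion curve, which is why uneven source distributions, which displace the crossings, bias the measured velocities.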


Geophysics ◽  
2020 ◽  
pp. 1-52
Author(s):  
Shangsheng Yan ◽  
Xinming Wu

Horizon picking is a fundamental and crucial step in seismic interpretation, but it remains time-consuming. Although various automatic methods have been developed to extract horizons from seismic images, most of them may fail to track horizons across discontinuities such as faults and noise. To obtain more accurate horizons, we propose a dynamic programming algorithm that efficiently refines manually or automatically extracted horizons so that they track reflectors across discontinuities more accurately, follow consistent phases, and reveal more geologic detail. In this method, we first compute an initial horizon using an automatic method, manual picking, or interpolation from several control points. The initial horizon need not be accurate; it only needs to follow the general trend of the target horizon. We then extract a sub-volume of amplitudes centered at the initial horizon and simultaneously flatten the sub-volume according to it. Finally, we use dynamic programming to efficiently pick the globally optimal path through the maximum or minimum amplitudes in the sub-volume. As a result, the initial horizon is refined into a more accurate horizon that follows consistent amplitude peaks, troughs, or zero crossings. Because the method does not depend strictly on the initial horizon, we prefer to interpolate the initial horizon directly from a limited number of control points, which is computationally cheaper than picking one automatically or manually. The method also lends itself to interactive use, updating the horizon as control points are edited or moved. More importantly, the control points need not be placed exactly on the target horizon, which makes the human interaction highly convenient and efficient. We demonstrate the method on multiple 2D and 3D field examples complicated by noise, faults, and salt bodies.
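The dynamic-programming pick can be illustrated on a toy flattened amplitude panel (rows are depth samples, columns are traces). This is a minimal sketch of the general technique, not the authors' code: the path accumulates amplitude column by column, moving at most one sample up or down between neighbouring traces, and backtracking recovers the globally optimal row per trace:

```python
import numpy as np

def dp_max_amplitude_path(panel):
    """Row index per column of the maximum-amplitude connected path."""
    n_rows, n_cols = panel.shape
    cost = panel.copy()                       # best cumulative amplitude per cell
    back = np.zeros((n_rows, n_cols), dtype=int)
    for j in range(1, n_cols):
        for i in range(n_rows):
            lo, hi = max(0, i - 1), min(n_rows, i + 2)   # |row step| <= 1
            k = lo + int(np.argmax(cost[lo:hi, j - 1]))
            cost[i, j] += cost[k, j - 1]
            back[i, j] = k
    path = [int(np.argmax(cost[:, -1]))]      # best end point, then backtrack
    for j in range(n_cols - 1, 0, -1):
        path.append(back[path[-1], j])
    return path[::-1]

panel = np.full((5, 6), -1.0)                 # weak background amplitudes
rows = [2, 2, 3, 3, 2, 2]                     # bright reflector offset by a small "fault"
panel[rows, np.arange(6)] = 5.0
print(dp_max_amplitude_path(panel))
```

The path follows the bright samples across the one-sample offset, which is the behaviour the refinement step relies on to carry a horizon across faults and noise.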

