Design and Optimization of ECG Modeling for Generating Different Cardiac Dysrhythmias

Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1638
Author(s):  
Md. Abdul Awal ◽  
Sheikh Shanawaz Mostafa ◽  
Mohiuddin Ahmad ◽  
Mohammad Ashik Alahe ◽  
Mohd Abdur Rashid ◽  
...  

The electrocardiogram (ECG) has significant clinical importance for analyzing most cardiovascular diseases. ECG beat morphologies, beat durations, and amplitudes vary from subject to subject and from disease to disease. Therefore, ECG morphology-based modeling has attracted long-standing research interest. This work aims to develop a simplified ECG model, based on a minimum number of parameters, that can correctly represent ECG morphology in different cardiac dysrhythmias. A simple mathematical model based on the sum of two Gaussian functions is proposed. However, fitting more than one Gaussian function deterministically suffers from accuracy and localization problems. To solve these fitting problems, two hybrid optimization methods have been developed to select the optimal ECG model parameters: the first combines an approximation technique with a global search (ApproxiGlo), and the second combines an approximation technique with a multi-start search (ApproxiMul). The proposed model and optimization methods have been applied to real ECGs exhibiting different cardiac dysrhythmias, and model performance was measured in the time, frequency, and time-frequency domains. The model fit different types of ECG beats representing different cardiac dysrhythmias with high correlation coefficients (>0.98). Compared to a nonlinear fitting method, ApproxiGlo and ApproxiMul are 3.32 and 7.88 times better in terms of root mean square error (RMSE), respectively. Regarding optimization, ApproxiMul outperforms ApproxiGlo on many metrics. Several uses of this model are possible: a synthetic ECG generator with a graphical user interface has been developed and tested, and the model can also serve as a lossy compressor with a variable compression rate. A compression ratio of 20:1 can be achieved at a 1 kHz sampling frequency and 75 beats per minute. These optimization methods can be applied in other engineering fields where sums of Gaussians are used.
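
To make the sum-of-two-Gaussians idea concrete, the sketch below fits such a model to a synthetic beat with a plain nonlinear least-squares routine. The synthetic beat, initial guesses, and parameter names are illustrative assumptions; the snippet does not reproduce the paper's ApproxiGlo or ApproxiMul hybrids, which address exactly the local-minimum problems a deterministic fit like this can run into.

# Minimal sketch: fit a sum of two Gaussians to one ECG beat with
# scipy.optimize.curve_fit. Data and initial guesses are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian functions of time t."""
    return (a1 * np.exp(-((t - mu1) ** 2) / (2 * s1 ** 2))
            + a2 * np.exp(-((t - mu2) ** 2) / (2 * s2 ** 2)))

# Synthetic "beat": an R-like peak plus a T-like hump, with noise.
t = np.linspace(0.0, 0.8, 800)          # 0.8 s beat sampled at ~1 kHz
beat = two_gaussians(t, 1.0, 0.20, 0.02, 0.3, 0.45, 0.06)
beat += 0.01 * np.random.default_rng(0).standard_normal(t.size)

p0 = [0.8, 0.25, 0.03, 0.2, 0.50, 0.05]  # rough initial guess
popt, _ = curve_fit(two_gaussians, t, beat, p0=p0)

rmse = np.sqrt(np.mean((two_gaussians(t, *popt) - beat) ** 2))
print("fitted parameters:", np.round(popt, 3), "RMSE:", round(rmse, 4))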

Author(s):  
M. van der Schaar ◽  
E. Delory ◽  
J. van der Weide ◽  
C. Kamminga ◽  
J.C. Goold ◽  
...  

We tried to find discriminating features for sperm whale clicks in order to distinguish between clicks from different whales, or to enable unique identification. We examined two different methods to obtain suitable characteristics. First, a model based on the Gabor function was used to describe the dominant frequencies in a click, and the model parameters were then used as classification features. The Gabor function model was selected because it has been used to model dolphin sonar pulses with great precision. Additionally, it has the interesting property of optimal time–frequency resolution; as such, it can indicate optimal usage of the sonar by sperm whales. Second, the clicks were expressed in a wavelet packet table, from which a local discriminant basis was subsequently created. A wavelet packet basis has the advantage of offering a highly redundant number of coefficients, which allows signals to be represented in many different ways. From the redundant signal description, a representation can be selected that emphasizes the differences between classes. This local discriminant basis is more flexible than the Gabor function, which can make it more suitable for classification, but it is also more complex. Class vectors were created with both models, and classification was based on the distance of a click to these vectors. We show that the Gabor function could not model the sperm whale clicks very well, owing to the variability of the click characteristics. Best performance was reached when three subsequent clicks were averaged to smooth out the variability: around 70% of the clicks were classified correctly in both the training and validation sets. The wavelet packet table adapted better to the changing characteristics and gave better classification. Here, also using a 3-click moving average, around 95% of the training sets and 78% of the validation sets were classified correctly. These numbers dropped by only a few per cent when single clicks, instead of a moving average, were classified. This indicates that, while the features may show too much variability to enable unique identification of individual whales on a click-by-click basis, the wavelet approach may be capable of distinguishing between a small group of whales.
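
As an illustration of the first feature-extraction method, the sketch below fits a Gabor function (a Gaussian-windowed sinusoid) to a synthetic click and recovers its parameters, which would then serve as classification features. The sampling rate, click shape, and initial guesses are assumptions, not the study's recordings.

# Minimal sketch: a Gabor function fitted to a transient click with
# nonlinear least squares. Synthetic data for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def gabor(t, a, t0, sigma, f, phi):
    """Gaussian envelope times a sinusoidal carrier."""
    return a * np.exp(-((t - t0) ** 2) / (2 * sigma ** 2)) * np.cos(
        2 * np.pi * f * (t - t0) + phi)

fs = 48_000.0                              # assumed sampling rate (Hz)
t = np.arange(0, 0.005, 1 / fs)            # 5 ms analysis window
click = gabor(t, 1.0, 0.0025, 0.0004, 4000.0, 0.3)
click += 0.02 * np.random.default_rng(1).standard_normal(t.size)

p0 = [0.9, 0.0024, 0.0005, 3900.0, 0.0]    # rough initial guess
popt, _ = curve_fit(gabor, t, click, p0=p0)
print("amplitude, center (s), width (s), freq (Hz), phase:", np.round(popt, 5))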


Hydrology ◽  
2021 ◽  
Vol 8 (3) ◽  
pp. 102
Author(s):  
Frauke Kachholz ◽  
Jens Tränckner

Land use changes influence the water balance and often increase surface runoff. The resulting impacts on river flow, water level, and flooding should be identified in advance, during the spatial planning phase. In two consecutive papers, we develop a model-based decision support system for quantifying the hydrological and stream-hydraulic impacts of land use changes. Part 1 presents the semi-automatic set-up of physically based hydrological and hydraulic models for the current state, on the basis of geodata analysis. Appropriate hydrological model parameters for ungauged catchments are derived by transfer from a calibrated model. In the lowland river basins considered, parameters of surface and groundwater inflow turned out to be particularly important. While the calibration delivers very good to good model results for flow (Evol = 2.4%, R = 0.84, NSE = 0.84), the model performance is good to satisfactory (Evol = −9.6%, R = 0.88, NSE = 0.59) in a different river system parametrized with the transfer procedure. After transferring the concept to a larger area with various small rivers, the current state is analyzed by running simulations based on statistical rainfall scenarios. Results include watercourse-section-specific capacities and excess volumes in the case of flooding. The developed approach can relatively quickly generate physically reliable and spatially high-resolution results. Part 2 builds on the data generated in Part 1 and presents the subsequent approach to assess the hydrologic/hydrodynamic impacts of potential land use changes.
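
For reference, the quoted goodness-of-fit measures (volume error Evol, correlation coefficient R, Nash–Sutcliffe efficiency NSE) can be computed as in the sketch below; the discharge series are illustrative placeholders.

# Minimal sketch: Evol, R, and NSE for simulated vs. observed discharge.
import numpy as np

def evol(sim, obs):
    """Relative volume error in percent."""
    return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.2, 1.5, 2.8, 4.1, 3.0, 2.2, 1.7])  # observed flow (m^3/s)
sim = np.array([1.1, 1.6, 2.5, 4.4, 3.2, 2.0, 1.6])  # simulated flow

r = np.corrcoef(sim, obs)[0, 1]
print(f"Evol = {evol(sim, obs):+.1f}%, R = {r:.2f}, NSE = {nse(sim, obs):.2f}")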


2021 ◽  
Vol 11 (9) ◽  
pp. 3827
Author(s):  
Blazej Nycz ◽  
Lukasz Malinski ◽  
Roman Przylucki

The article presents the results of multivariate calculations for a levitation metal melting system. The research had two main goals. The first goal of the multivariate calculations was to find the relationship between the basic electrical and geometric parameters of the selected calculation model and both the maximum electromagnetic buoyancy force and the maximum power dissipated in the charge. The second goal was to find quasi-optimal conditions for levitation. The choice of the model with the highest melting efficiency is very important because electromagnetic levitation is essentially a low-efficiency process. Despite its low efficiency, the method is worth pursuing because it is one of the few that allow melting and obtaining alloys of refractory reactive metals. The research was limited to the analysis of the electromagnetic field, modeled three-dimensionally. From among the 245 variants considered in the article, the most promising one, characterized by the highest efficiency, was selected. This variant will be a starting point for further work using optimization methods.
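
The variant sweep can be pictured as a grid search over electrical and geometric parameters, as in the sketch below. The parameter grids and the efficiency function are placeholders; in the study, each variant's figures of merit come from a three-dimensional electromagnetic field simulation.

# Minimal sketch of the variant-sweep pattern: enumerate parameter
# combinations, evaluate each variant, keep the most efficient one.
from itertools import product

currents = [200, 400, 600, 800, 1000]        # coil current (A), assumed grid
frequencies = [10e3, 50e3, 100e3, 200e3]     # supply frequency (Hz)
coil_radii = [0.02, 0.03, 0.04]              # coil radius (m)

def efficiency(i, f, r):
    """Placeholder objective standing in for the field simulation."""
    return (i * f ** 0.5 * r) / (1.0 + (i * r) ** 2)  # toy expression only

best = max(product(currents, frequencies, coil_radii),
           key=lambda v: efficiency(*v))
print("most promising variant (I, f, r):", best)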


2021 ◽  
Vol 13 (12) ◽  
pp. 2405
Author(s):  
Fengyang Long ◽  
Chengfa Gao ◽  
Yuxiang Yan ◽  
Jinling Wang

Precise modeling of weighted mean temperature (Tm) is critical for real-time conversion from zenith wet delay (ZWD) to precipitable water vapor (PWV) in Global Navigation Satellite System (GNSS) meteorology applications. Empirical Tm models developed with neural network techniques have been shown to perform better on the global scale; they also have fewer model parameters and are thus easy to operate. This paper aims to deepen research on Tm modeling with neural networks, expand the application scope of Tm models, and provide global users with more options for the real-time acquisition of Tm. An enhanced neural network Tm model (ENNTm) has been developed with globally distributed radiosonde data. Compared with other empirical models, the ENNTm has several advanced features in both model design and model performance. Firstly, the data used for modeling cover the whole troposphere rather than just the region near the Earth's surface. Secondly, ensemble learning was employed to weaken the impact of sample disturbance on model performance, and elaborate data preprocessing, including up-sampling and down-sampling, was adopted to achieve better model performance on the global scale. Furthermore, the ENNTm was designed to meet the requirements of three different application conditions by providing three sets of model parameters, i.e., Tm estimation without measured meteorological elements, Tm estimation with only measured temperature, and Tm estimation with both measured temperature and water vapor pressure. Validation was carried out using globally distributed radiosonde data; the results show that the ENNTm performs better than other competing models from different perspectives under the same application conditions. The proposed model expands the application scope of Tm estimation and provides global users with more choices in real-time GNSS-PWV retrieval.
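
As a toy illustration of the third application condition (Tm from measured temperature and water vapor pressure), the sketch below trains a small neural network on synthetic data. The network shape and the Bevis-style synthetic relation are assumptions, not the ENNTm architecture or its published coefficients.

# Minimal sketch: regress Tm on surface temperature and water vapor
# pressure with a small MLP. Synthetic data for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n = 2000
temp = rng.uniform(230.0, 310.0, n)          # surface temperature (K)
e = rng.uniform(0.1, 35.0, n)                # water vapor pressure (hPa)

# Toy "truth": a Bevis-style linear relation plus noise, illustration only.
tm_true = 70.2 + 0.72 * temp + 0.1 * np.log1p(e) + rng.normal(0, 1.5, n)

X = np.column_stack([temp, e])
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                     random_state=0).fit(X, tm_true)

rmse = np.sqrt(np.mean((model.predict(X) - tm_true) ** 2))
print(f"training RMSE: {rmse:.2f} K")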


Author(s):  
Fabio Sabetta ◽  
Antonio Pugliese ◽  
Gabriele Fiorentino ◽  
Giovanni Lanzano ◽  
Lucia Luzi

Abstract This work presents an up-to-date model for the simulation of non-stationary ground motions, including several novelties compared to the original study of Sabetta and Pugliese (Bull Seism Soc Am 86:337–352, 1996). The selection of the input motion in the framework of earthquake engineering has become progressively more important with the growing use of nonlinear dynamic analyses. Despite the increasing availability of large strong-motion databases, ground motion records are not always available for a given earthquake scenario and site condition, requiring the adoption of simulated time series. Among the different techniques for the generation of ground motion records, we focused on methods based on stochastic simulations, considering the time-frequency decomposition of the seismic ground motion. We updated the non-stationary stochastic model initially developed in Sabetta and Pugliese (Bull Seism Soc Am 86:337–352, 1996) and later modified by Pousse et al. (Bull Seism Soc Am 96:2103–2117, 2006) and Laurendeau et al. (Nonstationary stochastic simulation of strong ground-motion time histories: application to the Japanese database. 15 WCEE Lisbon, 2012). The model is based on the S-transform, which implicitly considers both amplitude and frequency modulation. The four model parameters required for the simulation are Arias intensity, significant duration, central frequency, and frequency bandwidth. They were obtained from an empirical ground motion model calibrated on the accelerometric records included in the updated Italian strong-motion database ITACA. The simulated accelerograms show a good match with the ground motion model prediction of several amplitude and frequency measures, such as Arias intensity, peak acceleration, peak velocity, Fourier spectra, and response spectra.
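
The sketch below shows a heavily simplified stand-in for this kind of stochastic simulation: band-limited noise shaped by an amplitude envelope, driven by the four parameters named above. Unlike the paper's S-transform model, it keeps the frequency content constant in time; all numerical values are assumptions.

# Minimal sketch: amplitude-modulated, band-limited noise scaled to a
# target Arias intensity. A stand-in, not the paper's S-transform model.
import numpy as np
from scipy.signal import butter, lfilter

fs, dur = 100.0, 40.0                    # sampling rate (Hz), record length (s)
t = np.arange(0, dur, 1 / fs)

arias = 0.5          # target Arias intensity (m/s), assumed
t5, t95 = 5.0, 20.0  # significant-duration window (s), assumed
fc, bw = 3.0, 2.0    # central frequency and bandwidth (Hz), assumed

# Band-limited noise around the central frequency.
noise = np.random.default_rng(3).standard_normal(t.size)
b, a = butter(4, [(fc - bw / 2) / (fs / 2), (fc + bw / 2) / (fs / 2)],
              btype="bandpass")
x = lfilter(b, a, noise)

# Gamma-shaped amplitude envelope peaking inside the strong-motion window.
env = (t / t5) ** 2 * np.exp(-2.0 * t / (t95 - t5))
acc = env * x

# Scale so the Arias intensity, pi/(2g) * integral(acc^2 dt), hits the target.
g = 9.81
ia = np.pi / (2 * g) * np.sum(acc ** 2) / fs
acc *= np.sqrt(arias / ia)
print(f"simulated record: {acc.size} samples, Arias intensity = {arias} m/s")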


Author(s):  
Stephen A Solovitz

Abstract Following volcanic eruptions, forecasters need accurate estimates of mass eruption rate (MER) to appropriately predict the downstream effects. Most analyses use simple correlations or models based on large eruptions at steady conditions, even though many volcanoes feature significant unsteadiness. To address this, a superposition model is developed based on a technique used for spray injection applications, which predicts plume height as a function of the time-varying exit velocity. This model can be inverted, providing estimates of MER from field observations of a plume. The model parameters are optimized using laboratory data for plumes with physically relevant exit profiles and Reynolds numbers, resulting in predictions that agree to within 10% of measured exit velocities. The model performance is examined using a historic eruption from Stromboli with well-documented unsteadiness, again providing MER estimates of the correct order of magnitude. This method can provide a rapid alternative for real-time forecasting of small, unsteady eruptions.
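
The inversion idea can be illustrated with the classic steady-state power law relating plume height to eruption rate (a Mastin-type scaling), shown below as a stand-in; the paper's superposition model instead accounts for time-varying exit velocity.

# Minimal sketch: invert a steady plume-height scaling to estimate an
# eruption rate. Mastin-type constants; a stand-in, not the paper's model.
def flow_from_height(h_km, a=2.0, b=0.241):
    """Invert H = a * Q**b (H in km, Q = DRE volume flux in m^3/s)."""
    return (h_km / a) ** (1.0 / b)

rho_dre = 2500.0  # assumed dense-rock density (kg/m^3)
for h in (2.0, 6.0, 12.0):
    q = flow_from_height(h)
    print(f"H = {h:4.1f} km -> Q ~ {q:.2e} m^3/s -> MER ~ {q * rho_dre:.2e} kg/s")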


Author(s):  
Shunki Nishii ◽  
Yudai Yamasaki

Abstract To achieve high thermal efficiency and low emissions in automobile engines, advanced combustion technologies using compression autoignition of premixtures have been studied, and model-based control has attracted attention for their practical application. Although simplified physical models have been developed for model-based control, appropriate values for their model parameters vary depending on the operating conditions, the engine driving environment, and engine aging. Herein, we studied an onboard method for adapting the model parameters of a heat release rate (HRR) model. This method adapts the model parameters using neural networks (NNs) that take the operating conditions into account, and it can respond to the driving environment and engine aging by training the NNs onboard. Detailed studies were conducted on the training methods. Furthermore, the effectiveness of this adaptation method was confirmed by evaluating the prediction accuracy of the HRR model and through model-based control experiments.
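
As a rough illustration, the sketch below evaluates a Wiebe-function HRR, a common simplified heat-release model, with shape parameters that would in practice be supplied by the onboard-trained NNs; the function form, parameter values, and units are assumptions rather than the authors' model.

# Minimal sketch: a Wiebe-function heat release rate whose parameters are
# treated as outputs of an operating-condition model. Illustrative only.
import numpy as np

def wiebe_hrr(theta, theta0, dtheta, a=5.0, m=2.0, q_total=1000.0):
    """Heat release rate (J/deg) over crank angle theta (deg)."""
    x = np.clip((theta - theta0) / dtheta, 0.0, 1.0)
    burn = 1.0 - np.exp(-a * x ** (m + 1))              # mass fraction burned
    return q_total * np.gradient(burn, theta)

theta = np.linspace(-20.0, 60.0, 400)
# Pretend NN output for this operating point: start of combustion, duration.
theta0, dtheta = -5.0, 40.0
hrr = wiebe_hrr(theta, theta0, dtheta)
print(f"peak HRR = {hrr.max():.1f} J/deg at {theta[hrr.argmax()]:.1f} deg")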


2018 ◽  
Vol 22 (8) ◽  
pp. 4565-4581 ◽  
Author(s):  
Florian U. Jehn ◽  
Lutz Breuer ◽  
Tobias Houska ◽  
Konrad Bestian ◽  
Philipp Kraft

Abstract. The ambiguous representation of hydrological processes has led to the formulation of the multiple hypotheses approach in hydrological modeling, which requires new ways of model construction. However, most recent studies focus only on the comparison of predefined model structures or on building a model step by step. This study tackles the problem the other way around: we start with one complex model structure, which includes all processes deemed to be important for the catchment. Next, we create 13 additional simplified models, in which some of the processes from the starting structure are disabled. The performance of those models is evaluated using three objective functions (logarithmic Nash–Sutcliffe efficiency; percentage bias, PBIAS; and the ratio between the root mean square error and the standard deviation of the measured data, RSR). Through this incremental breakdown, we identify the most important processes and detect the restraining ones. This procedure allows us to construct a more streamlined 15th model with improved model performance, less uncertainty, and higher model efficiency. We benchmark the original Model 1 and the final Model 15 against HBV Light. The final model is not able to outperform HBV Light, but we find that the incremental model breakdown leads to a structure with good model performance, fewer but more relevant processes, and fewer model parameters.
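
The three objective functions can be written compactly as in the sketch below (logarithmic Nash–Sutcliffe, PBIAS, and RSR); the discharge series are illustrative placeholders.

# Minimal sketch: the three objective functions named above.
import numpy as np

def log_nse(sim, obs, eps=1e-6):
    """Nash-Sutcliffe efficiency on log-transformed flows."""
    ls, lo = np.log(sim + eps), np.log(obs + eps)
    return 1.0 - np.sum((lo - ls) ** 2) / np.sum((lo - lo.mean()) ** 2)

def pbias(sim, obs):
    """Percentage bias (Moriasi convention)."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(sim, obs):
    """RMSE divided by the standard deviation of the observations."""
    return np.sqrt(np.mean((obs - sim) ** 2)) / np.std(obs)

obs = np.array([0.4, 0.9, 2.6, 5.1, 3.3, 1.8, 0.7])  # observed discharge
sim = np.array([0.5, 1.0, 2.2, 4.6, 3.6, 1.6, 0.8])  # simulated discharge
print(f"logNSE = {log_nse(sim, obs):.2f}, PBIAS = {pbias(sim, obs):+.1f}%, "
      f"RSR = {rsr(sim, obs):.2f}")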


2012 ◽  
Vol 16 (9) ◽  
pp. 3083-3099 ◽  
Author(s):  
H. Xie ◽  
L. Longuevergne ◽  
C. Ringler ◽  
B. R. Scanlon

Abstract. Irrigation development is rapidly expanding in mostly rainfed Sub-Saharan Africa. This expansion underscores the need for a more comprehensive understanding of water resources beyond surface water. Gravity Recovery and Climate Experiment (GRACE) satellites provide valuable information on spatio-temporal variability in water storage. The objective of this study was to calibrate and evaluate a semi-distributed regional-scale hydrologic model based on the Soil and Water Assessment Tool (SWAT) code for basins in Sub-Saharan Africa, using seven years (July 2002–April 2009) of 10-day GRACE data and multi-site river discharge data. The analysis was conducted in a multi-criteria framework. In spite of the uncertainty arising from the tradeoff in optimising model parameters with respect to two non-commensurable criteria defined for two fluxes, SWAT was found to perform well in simulating total water storage variability in most areas of Sub-Saharan Africa, which have semi-arid and sub-humid climates; among the various water storages represented in SWAT, water storage variations in soil, the vadose zone, and groundwater are dominant. The study also showed that the simulated total water storage variations tend to agree less with GRACE data in arid and equatorial humid regions, and that model-based partitioning of total water storage variations into different water storage compartments may be highly uncertain. Thus, future work will be needed for model enhancement in these areas with inferior model fit and for uncertainty reduction in the component-wise estimation of water storage variations.


2012 ◽  
Vol 12 (12) ◽  
pp. 3719-3732 ◽  
Author(s):  
L. Mediero ◽  
L. Garrote ◽  
A. Chavez-Jimenez

Abstract. High-performance computing offers significant promise for enhancing the performance of real-time flood forecasting systems. In this paper, a real-time framework for probabilistic flood forecasting through data assimilation is presented. The distributed rainfall-runoff real-time interactive basin simulator (RIBS) model is selected to simulate the hydrological process in the basin. Although the RIBS model is deterministic, it is run in a probabilistic way using the results of a calibration developed in previous work by the authors, which identified the probability distribution functions that best characterise the most relevant model parameters. Adaptive techniques improve flood forecasts because the model can be adapted to observations in real time as new information becomes available. The new adaptive forecast model, based on genetic programming as a data assimilation technique, is compared with the previously developed flood forecast model based on the calibration results. Both models are probabilistic in that they generate an ensemble of hydrographs, taking into account the different uncertainties inherent in any forecast process. The Manzanares River basin was selected as a case study; the process is computationally intensive, as it requires the simulation of many replicas of the ensemble in real time.
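
The ensemble mechanism can be sketched as follows: parameters are drawn from their fitted probability distributions and one model run is made per draw, producing an ensemble of hydrographs. The toy linear-reservoir model and the distributions below are illustrative, not the RIBS model or its calibrated PDFs.

# Minimal sketch: sample parameters from assumed distributions and run a
# toy rainfall-runoff model per draw to build a probabilistic ensemble.
import numpy as np

rng = np.random.default_rng(7)
rain = np.array([0, 5, 20, 35, 15, 5, 0, 0, 0, 0], dtype=float)  # mm per step

def linear_reservoir(rain, k, runoff_coef):
    """Toy rainfall-runoff model: storage drains at rate k per step."""
    storage, flow = 0.0, []
    for p in rain:
        storage += runoff_coef * p
        q = k * storage
        storage -= q
        flow.append(q)
    return np.array(flow)

# Sample parameters from assumed calibrated distributions.
ks = rng.uniform(0.2, 0.6, size=100)
cs = rng.beta(4.0, 6.0, size=100)
ensemble = np.array([linear_reservoir(rain, k, c) for k, c in zip(ks, cs)])

p10, p50, p90 = np.percentile(ensemble, [10, 50, 90], axis=0)
print("peak-flow 10/50/90th percentiles:",
      p10.max().round(2), p50.max().round(2), p90.max().round(2))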

