A Study of Power Spectral Density Models of Earthquake Ground Motion

2011 ◽  
Vol 90-93 ◽  
pp. 1503-1510
Author(s):  
Fu Jun Liu ◽  
Yu Hua Zhu ◽  
Xiao Hui Ma

In this paper, a modified random process model of earthquake ground motion, based on the model proposed by JinPing Ou, is presented. All parameters in the model except the scale factor S0 are determined by applying the least-squares method to the power spectral densities of 361 earthquake records; a method for determining S0 is then proposed. The good performance of the proposed model in representing earthquake ground motion on firm ground is demonstrated by comparison with other random process models.
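
A minimal sketch of the parameter-fitting step, assuming a Clough-Penzien-type PSD shape (the abstract does not give the exact form of Ou's modified model); the function name `cp_psd`, the parameter names, and the synthetic data are illustrative placeholders, not the paper's:

```python
import numpy as np
from scipy.optimize import least_squares

def cp_psd(omega, s0, wg, zg, wf, zf):
    """Clough-Penzien-type PSD: Kanai-Tajimi filter times a high-pass term."""
    kt = (wg**4 + (2 * zg * wg * omega)**2) / \
         ((wg**2 - omega**2)**2 + (2 * zg * wg * omega)**2)
    hp = omega**4 / ((wf**2 - omega**2)**2 + (2 * zf * wf * omega)**2)
    return s0 * kt * hp

# omega_obs, psd_obs: frequency grid and averaged PSD estimated from the
# records (synthetic placeholder data here, not the 361-record dataset).
rng = np.random.default_rng(0)
omega_obs = np.linspace(0.5, 50.0, 200)
psd_obs = cp_psd(omega_obs, 1.0, 15.0, 0.6, 1.5, 0.6) \
          * (1 + 0.05 * rng.standard_normal(200))

# Normalizing by the area makes the fit independent of the scale factor S0,
# mirroring the paper's split: shape parameters first, S0 separately.
psd_norm = psd_obs / np.trapz(psd_obs, omega_obs)

def residuals(theta):
    model = cp_psd(omega_obs, 1.0, *theta)
    return model / np.trapz(model, omega_obs) - psd_norm

fit = least_squares(residuals, x0=[10.0, 0.5, 2.0, 0.5], bounds=(1e-3, np.inf))
wg, zg, wf, zf = fit.x  # fitted filter frequencies and damping ratios
```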

Author(s):  
Kuo Liu ◽  
Haibo Liu ◽  
Te Li ◽  
Yongqing Wang ◽  
Mingjia Sun ◽  
...  

The concept of the comprehensive thermal error of servo axes is introduced. The thermal characteristics of a preloaded ball screw on a gantry milling machine are investigated, and error and temperature data are obtained. The comprehensive thermal error is divided into two parts: the thermal expansion error (TEE) within the stroke range and the thermal drift error (TDE) of the origin. The thermal mechanism and thermal error variation of the preloaded ball screw are expounded. Based on the theory of heat generation, conduction, and convection, thermal field models of the screw, driven by friction in the screw-nut pairs and bearing blocks, are derived. A prediction method for TEE is presented based on the thermal fields of these multiple heat sources. The factors influencing TDE are then analyzed, and a TDE model is established using the least-squares method. The predicted thermal field of the screw is analyzed. Simulation and experimental results indicate that the proposed model achieves stable, high prediction accuracy, even when the moving state of the servo axis changes randomly, the screw is preloaded, and the thermal deformation process is complex. The strong robustness of the model is thereby verified.
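
A minimal sketch of the TDE modeling step: ordinary least squares regressing the origin drift on temperature rises at assumed heat sources. The regressors (`dT_bearing`, `dT_nut`) and the data are illustrative assumptions, not the paper's measured quantities:

```python
import numpy as np

# Temperature rises (K) at two hypothetical heat sources and the
# measured thermal drift error of the origin (um), placeholders.
dT_bearing = np.array([0.5, 1.2, 2.1, 3.0, 3.8, 4.5])
dT_nut     = np.array([0.3, 0.9, 1.8, 2.6, 3.5, 4.1])
tde        = np.array([1.1, 2.9, 5.2, 7.4, 9.8, 11.6])

# Design matrix with an intercept column; solve the normal equations
# in the least-squares sense.
X = np.column_stack([np.ones_like(dT_bearing), dT_bearing, dT_nut])
coef, *_ = np.linalg.lstsq(X, tde, rcond=None)

predicted_tde = X @ coef  # drift estimate usable for compensation
```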


Author(s):  
Frederik Ahlemann ◽  
Heike Gastl

This chapter stresses the importance of integrating empirical evidence in the construction process of reference models. With reference to the authors’ underly-ing epistemological beliefs, requirements for an empirically grounded process model are derived. Based on a literature review of existing process models and experience gained from three research projects, an advanced process model is proposed in order to provide concrete instructions that show how these require-ments can be met. Real-life examples from completed and ongoing research pro-jects are continuously integrated so as to contribute to the practicability of the proposed model for the reader.


Author(s):  
Hongyi Xu ◽  
Zhen Jiang ◽  
Daniel W. Apley ◽  
Wei Chen

Data-driven random process models have become increasingly important for uncertainty quantification (UQ) in science and engineering applications, due to their merit of capturing both the marginal distributions and the correlations of high-dimensional responses. However, the choice of a random process model is neither unique nor straightforward. To quantitatively validate the accuracy of random process UQ models, new metrics are needed to measure their capability in capturing the statistical information of high-dimensional data collected from simulations or experimental tests. In this work, two goodness-of-fit (GOF) metrics, namely, a statistical moment-based metric (SMM) and an M-margin U-pooling metric (MUPM), are proposed for comparing different stochastic models, taking into account their capabilities of capturing the marginal distributions and the correlations in spatial/temporal domains. This work demonstrates the effectiveness of the two proposed metrics by comparing the accuracies of four random process models (Gaussian process (GP), Gaussian copula, Hermite polynomial chaos expansion (PCE), and Karhunen–Loève (K–L) expansion) in multiple numerical examples and an engineering example of stochastic analysis of microstructural materials properties. In addition to the new metrics, this paper provides insights into the pros and cons of various data-driven random process models in UQ.
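
A rough sketch in the spirit of a statistical moment-based comparison; the actual SMM and MUPM definitions are given in the paper, so this illustrative stand-in merely contrasts marginal moments and spatial correlations of model realizations against data:

```python
import numpy as np
from scipy import stats

def moment_discrepancy(data, model_samples):
    """Aggregate discrepancy in mean, std, skewness, kurtosis, and
    spatial correlation between data and model realizations.
    Rows are realizations; columns are spatial/temporal points."""
    def summary(x):
        return np.array([x.mean(axis=0).mean(),
                         x.std(axis=0).mean(),
                         stats.skew(x, axis=0).mean(),
                         stats.kurtosis(x, axis=0).mean()])
    d_mom = np.abs(summary(data) - summary(model_samples))
    d_cor = np.abs(np.corrcoef(data, rowvar=False)
                   - np.corrcoef(model_samples, rowvar=False)).mean()
    return d_mom.sum() + d_cor

# Example: score a Gaussian-process model against 'observed' fields.
rng = np.random.default_rng(0)
data = rng.multivariate_normal(np.zeros(20), np.eye(20), size=500)
gp_samples = rng.multivariate_normal(np.zeros(20), np.eye(20), size=500)
print(moment_discrepancy(data, gp_samples))  # near 0 for a good model
```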


1971 ◽  
Vol 93 (3) ◽  
pp. 398-407 ◽  
Author(s):  
P. Ranganath Nayak

Rough surfaces are modeled as two-dimensional, isotropic, Gaussian random processes, and analyzed with the techniques of random process theory. Such surface statistics as the distribution of summit heights, the density of summits, the mean surface gradient, and the mean curvature of summits are related to the power spectral density of a profile of the surface. A detailed comparison is made of the statistics of the surface and those of the profile, and serious differences are found in the distributions of heights of maxima and in the mean gradients. Techniques for analyzing profiles of random surfaces to obtain the parameters necessary for the analysis of the surface are discussed. Extensions of the theory to nonisotropic Gaussian surfaces are indicated.
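
A minimal sketch of the profile-analysis step, assuming a sampled profile: the spectral moments m0, m2, m4 are estimated with a rough periodogram (the scaling is approximate), and the quoted surface statistics follow the standard isotropic-Gaussian relations of this framework; verify the formulas against the text:

```python
import numpy as np

def spectral_moments(profile, dx):
    """m_n = integral of omega^n * PSD(omega) d omega, for n = 0, 2, 4."""
    n = len(profile)
    freq = 2 * np.pi * np.fft.rfftfreq(n, d=dx)       # rad per unit length
    psd = (np.abs(np.fft.rfft(profile))**2) * dx / n  # rough one-sided estimate
    m0 = np.trapz(psd, freq)
    m2 = np.trapz(freq**2 * psd, freq)
    m4 = np.trapz(freq**4 * psd, freq)
    return m0, m2, m4

z = np.random.default_rng(1).standard_normal(4096)  # placeholder profile
m0, m2, m4 = spectral_moments(z, dx=1e-6)

alpha = m0 * m4 / m2**2                         # bandwidth parameter
summit_density = (m4 / m2) / (6 * np.pi * np.sqrt(3))  # isotropic Gaussian surface
rms_gradient = np.sqrt(2 * m2)                  # surface gradient: twice profile m2
```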


2012 ◽  
Vol 466-467 ◽  
pp. 961-965 ◽  
Author(s):  
Chun Li Lei ◽  
Zhi Yuan Rui ◽  
Jun Liu ◽  
Li Na Ren

To improve the manufacturing accuracy of NC machine tools, a thermal error model based on the multivariate autoregressive (MVAR) method is developed for a motorized high-speed spindle. The proposed model takes into account the influence of previous temperature rise and thermal deformation (input variables) on the thermal error (output variable). Linear trends in the observed series are eliminated by first differencing. The order of the MVAR model is selected using the Akaike information criterion, and its coefficients are determined by the least-squares method. The established MVAR model is then used to forecast the thermal error, and the experimental results demonstrate the validity and robustness of the model.
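
A minimal sketch of this pipeline using statsmodels: first differencing, AIC-based order selection, least-squares fitting, and forecasting. The series here are synthetic placeholders, not the spindle measurements of the paper:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic stand-ins for measured temperature rise, thermal
# deformation, and thermal error sampled over time.
rng = np.random.default_rng(0)
t = np.arange(200.0)
data = pd.DataFrame({
    "temp_rise": 0.05 * t + rng.normal(0, 0.1, 200),
    "deformation": 0.08 * t + rng.normal(0, 0.2, 200),
    "thermal_error": 0.07 * t + rng.normal(0, 0.2, 200),
})

diffed = data.diff().dropna()                # first difference removes linear trends
fit = VAR(diffed).fit(maxlags=10, ic="aic")  # AIC order selection, OLS coefficients

steps = 5
fc = fit.forecast(diffed.values[-fit.k_ar:], steps=steps)
# Undo the differencing to recover the thermal error forecast.
error_forecast = data["thermal_error"].iloc[-1] + np.cumsum(fc[:, 2])
```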


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Binod Adhikari ◽  
Subodh Dahal ◽  
Monika Karki ◽  
Roshan Kumar Mishra ◽  
Ranjan Kumar Dahal ◽  
...  

In this paper, we estimate the seismogenic energy during the Nepal Earthquake (25 April 2015) and study the time-frequency characteristics of the ground motion in the Kathmandu valley. The time-frequency analysis of the seismogenic energy signal is based on the wavelet transform, which has been used as a powerful signal-analysis tool in fields such as compression, time-frequency analysis, earthquake parameter determination, and climate studies. The technique is particularly suitable for non-stationary signals, and earthquake ground motion is well recognized to be a non-stationary random process; characterizing such a process rigorously would, in the mathematical sense, require an impractically large number of samples. The wavelet transformation procedure followed here supports random analyses of linear and non-linear structural systems subjected to earthquake ground motion. The behavior of the seismic ground motion is characterized through the wavelet coefficients associated with these signals. Both the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT) are applied to the horizontal and vertical ground motion in the Kathmandu Valley; these techniques help to identify the long-period ground motion together with the site response. We found that the long-period ground motions carry enough power to cause structural damage. Comparing the horizontal and vertical motion, we observed that most of the high-amplitude signals are associated with the vertical motion: the high energy is released in that direction. The seismic energy is damped soon after the main event, although the damping period differs between events. This can be seen in the DWT curves, where the squared wavelet coefficients are high at the time of an aftershock and decrease with time; in other words, the energy is mostly associated with the arrival of Rayleigh waves. We conclude that long-period ground motions should be studied by earthquake engineers in order to avoid structural damage during earthquakes. The wavelet technique can thus help to specify the vulnerability of a seismically active region and its local topographical features.
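
A minimal sketch of the two wavelet analyses using PyWavelets on a placeholder accelerogram; the wavelet choices (Morlet for the CWT, db4 for the DWT) and the sampling rate are illustrative assumptions, not the paper's settings:

```python
import numpy as np
import pywt

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)
accel = np.random.default_rng(2).standard_normal(t.size)  # placeholder record

# Continuous wavelet transform: time-frequency map of the motion;
# high power at low frequencies flags long-period ground motion.
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(accel, scales, "morl", sampling_period=1 / fs)
power = np.abs(coeffs)**2

# Discrete wavelet transform: squared coefficients per level trace
# how quickly the seismic energy decays after the main shock.
details = pywt.wavedec(accel, "db4", level=5)
energy_per_level = [np.sum(d**2) for d in details]
```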


2018 ◽  
Vol 30 (2) ◽  
Author(s):  
Ronald George Leppan ◽  
Reinhardt A Botha ◽  
Johan F Van Niekerk

Higher education institutions seem to take a haphazard approach to harnessing the ubiquitous data that learners generate on online educational platforms, despite the promising opportunities this data offers. Several learning analytics process models have been proposed to optimise the learning environment based on this learner data. The model proposed in this paper addresses deficiencies in existing learning analytics models, which frequently emphasise only the technical aspects of data collection, analysis and intervention, yet remain silent on the ethical issues inherent in collecting and analysing student data and on pedagogy-based approaches to the interventions. The proposed model describes how differentiated instruction can be provided on the basis of a dynamic learner profile built through an ethical learning analytics process. Differentiated instruction optimises online learning by recommending learning objects tailored to the learner attributes stored in a learner profile. The proposed model provides a systematic and comprehensive abstraction of a differentiated learning design process informed by learning analytics. The model emerged from synthesising the steps of a tried-and-tested web analytics process with educational theory, an ethical learning analytics code of practice, principles of adaptive education systems, and a layered abstraction of online learning design.
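
A purely illustrative sketch of the differentiated-instruction step: rank learning objects by how well their metadata matches a dynamic learner profile. The attribute names and the scoring rule are hypothetical, not those of the proposed model:

```python
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    title: str
    tags: set = field(default_factory=set)   # e.g. {"visual", "novice"}

def recommend(profile: dict, objects: list, top_n: int = 3) -> list:
    """Score each object by tag overlap with the learner's attributes."""
    attrs = set(profile.get("style", [])) | set(profile.get("level", []))
    ranked = sorted(objects,
                    key=lambda o: len(o.tags & attrs),
                    reverse=True)
    return ranked[:top_n]

profile = {"style": ["visual"], "level": ["novice"]}
objects = [LearningObject("Intro video", {"visual", "novice"}),
           LearningObject("Advanced text", {"textual", "expert"})]
print([o.title for o in recommend(profile, objects)])
```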


Energies ◽  
2020 ◽  
Vol 13 (21) ◽  
pp. 5592
Author(s):  
Waqar Muhammad Ashraf ◽  
Ghulam Moeen Uddin ◽  
Syed Muhammad Arafat ◽  
Sher Afghan ◽  
Ahmad Hassan Kamal ◽  
...  

This paper presents a comprehensive step-wise methodology for implementing Industry 4.0 in a functional coal power plant. The overall efficiency of a 660 MWe supercritical coal-fired plant is considered in the study, using real operational data. Conventional and advanced AI-based techniques are used to present comprehensive data visualizations. Monte Carlo experimentation on artificial neural network (ANN) and least-squares support vector machine (LSSVM) process models, together with interval adjoint significance analysis (IASA), is performed to eliminate insignificant control variables. Effective, validated ANN and LSSVM process models are developed and comprehensively compared. The ANN process model proves significantly more effective, especially in its capacity to be deployed as a robust and reliable AI model for industrial data analysis and decision making. A detailed investigation of efficient power generation is presented at 50%, 75%, and 100% power plant unit load. Savings in heat input of up to 7.20%, 6.85%, and 8.60% are identified at 50%, 75%, and 100% unit load, respectively, without compromising the plant's overall thermal efficiency.
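
A minimal sketch of the ANN process-model step using scikit-learn; the feature names (coal flow, air flow, unit load) and the synthetic data are placeholders for the plant's control variables, not the paper's actual inputs or architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# Hypothetical control variables: coal flow, air flow, unit load (MWe).
X = rng.uniform([40, 400, 330], [80, 700, 660], size=(1000, 3))
y = 0.02 * X[:, 0] + 0.001 * X[:, 1] + 0.01 * X[:, 2] \
    + rng.normal(0, 0.1, 1000)          # stand-in for thermal efficiency

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8),
                                 max_iter=2000, random_state=0))
ann.fit(X_tr, y_tr)
print("R^2 on held-out data:", ann.score(X_te, y_te))
```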


Mathematics ◽  
2019 ◽  
Vol 7 (12) ◽  
pp. 1155
Author(s):  
Chen ◽  
Huang

Identifying the fuzzy measures of the Choquet integral model is an important component in resolving complicated multi-criteria decision-making (MCDM) problems. Previous papers have solved this problem using various mathematical programming models and regression-based methods. However, for complicated MCDM problems (e.g., ten criteria), the presence of too many parameters can result in unavailable or inconsistent solutions. While k-additive and p-symmetric measures reduce the number of fuzzy measures, they cannot avoid the problem of identifying the fuzzy measures in high-dimensional settings. Sugeno and his colleagues therefore proposed a hierarchical Choquet integral model to overcome the problem, but it requires partition information about the criteria, which usually cannot be obtained in practice. In this paper, we propose a GA-based heuristic least mean-squares algorithm (HLMS) to construct the hierarchical Choquet integral and overcome the above problems. The genetic algorithm (GA) automatically determines the input variables of the sub-Choquet integrals according to the mean square error (MSE) objective, and the fuzzy measures are calculated with the HLMS. The sub-Choquet integrals are then summed into the final Choquet integral for regression or classification. We tested the method on four datasets and compared the results with those of the conventional Choquet integral, the logit model, and a neural network. On the basis of these results, the proposed model is competitive with the other models.
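
A minimal sketch of the discrete Choquet integral that the sub-models aggregate; the fuzzy measure below is a toy example, whereas the paper identifies it with the GA-based HLMS algorithm:

```python
def choquet(x, mu):
    """Choquet integral of inputs x w.r.t. fuzzy measure mu, where mu
    maps frozensets of criterion indices to [0, 1]."""
    order = sorted(range(len(x)), key=lambda i: x[i])  # ascending values
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])   # criteria with value >= x[i]
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total

# Toy fuzzy measure on two criteria (monotone, mu(empty set) = 0).
mu = {frozenset(): 0.0, frozenset({0}): 0.4,
      frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
print(choquet([0.7, 0.3], mu))  # 0.3*1.0 + 0.4*0.4 = 0.46
```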

