Principal components of frequency domain electromyograms for muscular fatigue analysis

Author(s): Igor Ramathur Telles de Jesus, R. G. T. Mello, J. Nadal

Author(s):  
Marco Lippi

High-Dimensional Dynamic Factor Models have their origin in macroeconomics, specifically in empirical research on Business Cycles. The central idea, going back to the work of Burns and Mitchell in the 1940s, is that the fluctuations of all the macro and sectoral variables in the economy are driven by a “reference cycle,” that is, a one-dimensional latent cause of variation. After a fairly long process of generalization and formalization, the literature settled at the beginning of the 2000s on a model in which (1) both n, the number of variables in the dataset, and T, the number of observations for each variable, may be large, and (2) all the variables in the dataset depend dynamically on a fixed number, independent of n, of “common factors,” plus variable-specific components, usually called “idiosyncratic.” The structure of the model can be exemplified as follows:

$$x_{it} = \alpha_i u_t + \beta_i u_{t-1} + \xi_{it}, \qquad i = 1, \ldots, n, \quad t = 1, \ldots, T, \tag{$*$}$$

where the observable variables $x_{it}$ are driven by the white noise $u_t$, the common factor, which is common to all the variables, and by the idiosyncratic components $\xi_{it}$. The common factor $u_t$ is orthogonal to the idiosyncratic components $\xi_{it}$, and the idiosyncratic components are mutually orthogonal (or only weakly correlated). Lastly, variations of the common factor $u_t$ affect the variables $x_{it}$ dynamically, that is, through the lag polynomial $\alpha_i + \beta_i L$. Asymptotic results for High-Dimensional Factor Models, in particular consistency of the estimators of the common factors, are obtained as both n and T tend to infinity.

Model ($*$), generalized to allow for more than one common factor and a rich dynamic loading of the factors, has been studied in a fairly vast literature, with many applications based on macroeconomic datasets: (a) forecasting of inflation, industrial production, and unemployment; (b) structural macroeconomic analysis; and (c) construction of indicators of the Business Cycle. This literature can be broadly classified as belonging to the time-domain or the frequency-domain approach. The works based on the latter are the subject of the present chapter.

We start with a brief description of early work on Dynamic Factor Models. Formal definitions and the main Representation Theorem follow; the latter determines the number of common factors in the model by means of the spectral density matrix of the vector $(x_{1t}\; x_{2t}\; \cdots\; x_{nt})$. Dynamic principal components, based on the spectral density of the x’s, are then used to construct estimators of the common factors. These results, obtained in the early 2000s, are compared with the literature based on the time-domain approach, in which the covariance matrix of the x’s and its (static) principal components are used instead of the spectral density and dynamic principal components. Dynamic principal components produce two-sided estimators, which perform well within the sample but are unfit for forecasting. The estimators based on the time-domain approach are simple and one-sided; however, they require the restriction that the space spanned by the factors be finite-dimensional. Recent papers have constructed one-sided estimators based on the frequency-domain method for the unrestricted model. These constructions exploit results on stochastic processes of dimension n that are driven by a q-dimensional white noise with $q < n$, that is, singular vector stochastic processes. The main features of this literature are described in some detail.

Lastly, we report and comment on the results of an empirical paper, the last in a long list, comparing predictions obtained with time- and frequency-domain methods. The paper uses a large monthly U.S. dataset covering the Great Moderation and the Great Recession.
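The contrast between static and dynamic principal components summarized above can be made concrete with a small simulation. The sketch below is not taken from the chapter: it simulates model ($*$) with one common white-noise factor loaded through $\alpha_i + \beta_i L$, then recovers the common components (i) by projecting on the leading static principal components of the sample covariance matrix (the time-domain route, with r = 2 static factors for the two lags) and (ii) by the first dynamic principal component obtained from a Bartlett lag-window estimate of the spectral density (the frequency-domain route). The bandwidth M, the loadings, and the error metric are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 50, 500                      # many variables, many observations

# --- Simulate model (*): x_it = alpha_i u_t + beta_i u_{t-1} + xi_it ---
u = rng.standard_normal(T + 1)      # common white-noise factor (one extra value for the lag)
alpha = rng.uniform(0.5, 1.5, n)
beta = rng.uniform(-1.0, 1.0, n)
chi = alpha[:, None] * u[1:] + beta[:, None] * u[:-1]    # common components, n x T
x = chi + rng.standard_normal((n, T))                    # add idiosyncratic components
x -= x.mean(axis=1, keepdims=True)

# --- Time-domain route: static principal components of the covariance matrix ---
# One factor loaded at two lags spans an r = 2 dimensional static factor space.
cov = x @ x.T / T
_, vecs = np.linalg.eigh(cov)                            # eigenvalues in ascending order
P = vecs[:, -2:]                                         # leading r = 2 eigenvectors
chi_static = P @ (P.T @ x)                               # projection on the static factor space

# --- Frequency-domain route: first dynamic principal component (q = 1) ---
M = 20                                                   # lag-window bandwidth
lags = np.arange(-M, M + 1)
weights = 1.0 - np.abs(lags) / (M + 1)                   # Bartlett window
freqs = 2 * np.pi * np.arange(2 * M + 1) / (2 * M + 1)   # frequency grid on [0, 2*pi)

def autocov(k):
    """Sample autocovariance matrix Gamma_k = E[x_t x_{t-k}']."""
    if k < 0:
        return autocov(-k).T
    return x[:, k:] @ x[:, :T - k].T / T

gammas = {k: autocov(k) for k in lags}

# Lag-window estimate of the spectral density matrix at each grid frequency
spec = np.zeros((len(freqs), n, n), dtype=complex)
for h, theta in enumerate(freqs):
    for k, w in zip(lags, weights):
        spec[h] += w * gammas[k] * np.exp(-1j * k * theta)
spec /= 2 * np.pi

# Projection on the leading dynamic eigenvector at each frequency,
# transformed back into a two-sided filter K_k, k = -M, ..., M
K = np.zeros((2 * M + 1, n, n), dtype=complex)
for h, theta in enumerate(freqs):
    _, eigvec = np.linalg.eigh(spec[h])
    p = eigvec[:, -1:]                                   # leading eigenvector (its arbitrary phase cancels below)
    proj = p @ p.conj().T
    for j, k in enumerate(lags):
        K[j] += proj * np.exp(1j * k * theta) / (2 * M + 1)

# Apply the filter: chi_t = sum_k K_k x_{t-k}, defined only for M <= t < T - M
chi_dyn = np.zeros((n, T))
for j, k in enumerate(lags):
    chi_dyn[:, M:T - M] += K[j].real @ x[:, M - k:T - M - k]

for name, est in [("static PC (r = 2)", chi_static), ("dynamic PC (q = 1)", chi_dyn)]:
    err = est[:, M:T - M] - chi[:, M:T - M]
    print(name, "relative MSE:", np.mean(err**2) / np.mean(chi[:, M:T - M]**2))
```

Both projections recover the common components; the dynamic one, however, is a two-sided filter (it uses leads as well as lags of the data), so it loses M observations at each end of the sample and cannot be used for forecasting, which is precisely the trade-off discussed above.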


2021
Author(s): Jiabei Yuan, Yucheng Hou, Zhimin Tan

Abstract Fatigue analysis of flexible risers is a demanding task in terms of time and computational resources. The traditional time-domain approach may take weeks of global simulation, local modelling, and post-processing of riser responses to produce fatigue results. Baker Hughes developed a fast hybrid approach based on a frequency-domain technique. The new approach was first implemented in the end-fitting region and then extended to all other regions of the riser. Studies showed that the hybrid approach achieved convenient and conservative results in a significantly shorter period of time. To improve accuracy and reduce the conservatism of the method, Baker Hughes has further optimized the analysis procedure to obtain results closer to the true solutions. Several methods were proposed and studied. The durations of representative cases and noncritical cases have been extended, and the steps used to predict the stress spectrum from transfer functions have been updated. In previous studies, only one transfer function was built for fatigue load cases with similar response spectra; this assumption linearizes the system response and produces a certain level of discrepancy against the true time-domain solution. In this study, multiple ways of predicting the spectrum are evaluated and compared. The paper summarizes several techniques for further optimizing the hybrid frequency-domain approach. The updated fatigue results are found to be more accurate, and the optimized approach therefore gives engineers more flexibility to approach the true solutions, which were originally acquired from full 3-hr time-domain simulations. The approach requires less analysis time and reduces iterations in pipe structure and riser configuration design, leading to faster project execution and potential cost reduction.
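The abstract does not disclose the formulas behind the hybrid approach, but the generic frequency-domain fatigue chain it builds on is straightforward: a stress transfer function maps a sea-state spectrum into a stress spectrum, whose spectral moments feed a cycle-rate estimate and Miner's rule through an S-N curve. The Python sketch below illustrates that generic chain with the narrow-band (Rayleigh) closed form; the transfer-function shape, the S-N parameters, and the sea state are placeholders chosen for illustration, not values from the paper, whose own procedure (per-load-case transfer functions and the updated spectrum-prediction steps) is more elaborate.

```python
import numpy as np
from math import gamma, pi, sqrt

# Angular frequency grid (rad/s)
omega = np.linspace(0.05, 3.0, 600)

def pm_spectrum(omega, hs, tp):
    """Pierson-Moskowitz / Bretschneider wave spectrum S_eta(omega), in m^2*s/rad."""
    wp = 2 * pi / tp
    return 0.3125 * hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega)**4)

def stress_rao(omega, wn=0.9, zeta=0.1, k=40.0):
    """HYPOTHETICAL stress transfer function |H(omega)|, MPa per metre of wave amplitude.
    A single-DOF amplification shape used only as a placeholder."""
    r = omega / wn
    return k / np.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

def spectral_moment(s, omega, order):
    """n-th spectral moment m_n = integral of omega^n * S(omega) d omega."""
    return np.trapz(omega**order * s, omega)

def narrowband_damage(s_stress, omega, duration_s, sn_m=3.0, sn_a=1.0e12):
    """Narrow-band (Rayleigh) fatigue damage over `duration_s` seconds, using Miner's
    rule with a placeholder S-N curve N = sn_a * S**(-sn_m), stress range S in MPa."""
    m0 = spectral_moment(s_stress, omega, 0)
    m2 = spectral_moment(s_stress, omega, 2)
    nu0 = sqrt(m2 / m0) / (2 * pi)                   # zero up-crossing rate, Hz
    cycles = nu0 * duration_s
    # Expected value of S^m for Rayleigh-distributed stress ranges
    e_sm = (2 * sqrt(2 * m0))**sn_m * gamma(1 + sn_m / 2)
    return cycles * e_sm / sn_a

# One sea state: Hs = 3 m, Tp = 9 s, over a 3-hour exposure (the duration used
# by the full time-domain simulations mentioned in the abstract)
s_eta = pm_spectrum(omega, hs=3.0, tp=9.0)
s_stress = stress_rao(omega)**2 * s_eta              # stress spectrum, MPa^2*s/rad
damage = narrowband_damage(s_stress, omega, duration_s=3 * 3600)
print(f"3-hour narrow-band damage for this sea state: {damage:.3e}")
# Summing such contributions over the scatter diagram (weighted by occurrence)
# gives the long-term damage; its reciprocal per year is the fatigue life.
```

In this framing, reusing a single linearized transfer function for a group of load cases is exactly the step that introduces the discrepancy against the full 3-hr time-domain solution discussed above; evaluating several spectrum-prediction schemes, as the paper does, refines that step.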

