Nonlinear Problems in Selectively Sensitive Identification of Structures

Author(s):  
Yakov Ben-Haim
Author(s):  
Po Ting Lin ◽  
Wei-Hao Lu ◽  
Shu-Ping Lin

In the past few years, researchers have begun to investigate the existence of arbitrary uncertainties in design optimization problems. Most traditional reliability-based design optimization (RBDO) methods transform the design space to the standard normal space for reliability analysis, but they may not work well when the random variables are arbitrarily distributed, because the transformation to the standard normal space cannot be determined or the distribution type is unknown. The methods of Ensemble of Gaussian-based Reliability Analyses (EoGRA) and Ensemble of Gradient-based Transformed Reliability Analyses (EGTRA) have been developed to estimate the joint probability density function using an ensemble of kernel functions. EoGRA performs a series of Gaussian-based kernel reliability analyses and merges them to compute the reliability of the design point. EGTRA transforms the design space to a single-variate design space along the constraint gradient, where the kernel reliability analyses become much less costly. In this paper, a series of comprehensive investigations was performed to study the similarities and differences between EoGRA and EGTRA. The results showed that EGTRA performs accurate and efficient reliability analyses for both linear and nonlinear problems. When the constraints are highly nonlinear, EGTRA may encounter some difficulty but can still be effective when started from deterministic optimal points. On the other hand, the sensitivity analyses of EoGRA may be ineffective when the random distribution lies completely inside the feasible or infeasible space. However, EoGRA can find acceptable design points when started from deterministic optimal points. Moreover, EoGRA delivers the estimated failure probability of each constraint during the optimization process, which may be convenient for some applications.
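The kernel-ensemble idea behind EoGRA can be illustrated in a few lines: fit a Gaussian kernel to every sample of an arbitrarily distributed variable and estimate the failure probability directly from the resulting mixture, with no transformation to the standard normal space. The bimodal data, bandwidth, and limit-state function below are hypothetical stand-ins, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical arbitrarily distributed data: a bimodal sample with no
# obvious transformation to the standard normal space.
data = np.concatenate([rng.normal(-1.0, 0.3, 500), rng.normal(1.5, 0.5, 500)])

def kde_failure_probability(samples, g, bandwidth=0.2, n_draws=20000, rng=rng):
    """Estimate P(g(X) < 0) from a Gaussian-kernel ensemble fitted to samples.

    Each sample point carries one Gaussian kernel; drawing from the mixture
    and evaluating the limit-state function g approximates the failure
    probability without assuming any parametric distribution type.
    """
    centers = rng.choice(samples, size=n_draws, replace=True)
    draws = centers + bandwidth * rng.standard_normal(n_draws)
    return np.mean(g(draws) < 0.0)

# Hypothetical limit-state function: failure when x exceeds 2.2.
g = lambda x: 2.2 - x
pf = kde_failure_probability(data, g)
print(f"estimated failure probability: {pf:.4f}")
```

The bandwidth plays the role of the kernel width that EoGRA tunes per dimension; here it is simply fixed for illustration.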


2021 ◽  
Vol 73 (1) ◽  
Author(s):  
Keitaro Ohno ◽  
Yusaku Ohta ◽  
Satoshi Kawamoto ◽  
Satoshi Abe ◽  
Ryota Hino ◽  
...  

Abstract: Rapid estimation of the coseismic fault model for medium-to-large earthquakes is key for disaster response. To estimate the coseismic fault model for large earthquakes, the Geospatial Information Authority of Japan and Tohoku University have jointly developed a real-time GEONET analysis system for rapid deformation monitoring (REGARD). REGARD can estimate a single rectangular fault model and the slip distribution along the assumed plate interface. The single rectangular fault model is useful as a first-order approximation of a medium-to-large earthquake. In its estimation, however, it is difficult to obtain accurate model parameters due to the strong effect of the initial values. To solve this problem, this study proposes a new method to estimate the coseismic fault model and its uncertainties in real time, based on a Bayesian inversion approach using the Markov chain Monte Carlo (MCMC) method. The MCMC approach is computationally expensive, and its hyperparameters must normally be defined in advance by trial and error. The sampling efficiency was improved using a parallel tempering method, and an automatic definition method for the hyperparameters was developed for real-time use. The calculation time was within 30 s for 1 × 10⁶ samples on a typical single Linux server, which enables real-time analysis within REGARD. The reliability of the developed method was evaluated using data from recent earthquakes (the 2016 Kumamoto and 2019 Yamagata-Oki earthquakes). Simulations of earthquakes in the Sea of Japan were also conducted exhaustively. The results showed an advantage over the maximum likelihood approach with a priori information, which suffers from initial value dependence in nonlinear problems. For application to data with a small signal-to-noise ratio, the results suggest the possibility of using several conjugate fault models. There is a tradeoff between the fault area and the slip amount, especially for offshore earthquakes, which means that quantifying this uncertainty enables the reliability of the fault model estimate to be evaluated in real time.
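A minimal sketch of the parallel tempering idea used here to improve MCMC sampling efficiency: several Metropolis chains run at different temperatures, and occasional state swaps let the target chain escape local modes, such as the multimodality produced by the fault-area/slip tradeoff or by conjugate fault planes. The toy bimodal posterior below is illustrative, not the REGARD fault-model likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bimodal log-density standing in for a fault-model posterior whose
# modes might correspond to, e.g., conjugate fault planes (illustrative only).
def log_post(x):
    return np.logaddexp(-0.5 * ((x + 3.0) / 0.5) ** 2,
                        -0.5 * ((x - 3.0) / 0.5) ** 2)

def parallel_tempering(log_post, temps=(1.0, 4.0, 16.0), n_steps=20000, step=1.0):
    """Metropolis sampling with several tempered chains and state swaps.

    Hot chains (temperature > 1) see a flattened posterior and cross between
    modes; swap moves hand those states down to the target chain (T = 1).
    """
    n = len(temps)
    x = np.zeros(n)
    samples = []
    for _ in range(n_steps):
        # one Metropolis update per chain, at its own temperature
        for i, T in enumerate(temps):
            prop = x[i] + step * rng.standard_normal()
            if np.log(rng.random()) < (log_post(prop) - log_post(x[i])) / T:
                x[i] = prop
        # one neighbour-swap proposal per sweep
        i = rng.integers(n - 1)
        a = (1 / temps[i] - 1 / temps[i + 1]) * (log_post(x[i + 1]) - log_post(x[i]))
        if np.log(rng.random()) < a:
            x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[0])   # record only the target-temperature chain
    return np.array(samples)

chain = parallel_tempering(log_post)
print("fraction of samples in the right-hand mode:", np.mean(chain > 0))
```

A single Metropolis chain with this step size would typically stay trapped in one mode; the swap moves are what restore mixing, which is also why the temperature ladder is one of the hyperparameters the paper automates.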


2021 ◽  
Vol 87 (2) ◽  
Author(s):  
Konrad Simon ◽  
Jörn Behrens

Abstract: We introduce a new framework of numerical multiscale methods for advection-dominated problems motivated by the climate sciences. Current numerical multiscale methods (MsFEM) work well on stationary elliptic problems but have difficulties when the model involves dominant lower-order terms. Our idea to overcome the associated difficulties is a semi-Lagrangian-based reconstruction of subgrid variability into a multiscale basis by solving many local inverse problems. Globally, the method looks like an Eulerian method with a multiscale stabilized basis. We show example runs in one and two dimensions and a comparison to standard methods to support our ideas, and we discuss possible extensions to other types of Galerkin methods, higher dimensions, and nonlinear problems.
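The semi-Lagrangian ingredient of the reconstruction can be sketched in one dimension: each grid value is updated by tracing the characteristic backward and interpolating the old field at the departure point. This is only the advection building block under simplifying assumptions (constant velocity, periodic domain, linear interpolation), not the multiscale basis solver itself.

```python
import numpy as np

def semi_lagrangian_step(u, velocity, dx, dt):
    """One semi-Lagrangian update of a 1D periodic advected field.

    For each grid point, trace the characteristic back to its departure
    point and interpolate the previous field there; this backward tracing
    is the same ingredient used to carry subgrid variability along the
    flow in the multiscale basis reconstruction (sketch, not full MsFEM).
    """
    n = u.size
    x = np.arange(n) * dx
    x_dep = (x - velocity * dt) % (n * dx)     # departure points
    j = np.floor(x_dep / dx).astype(int)       # left neighbour index
    w = x_dep / dx - j                         # linear interpolation weight
    return (1.0 - w) * u[j % n] + w * u[(j + 1) % n]

# Advect a Gaussian pulse once around a periodic unit domain.
n, velocity = 200, 1.0
dx = 1.0 / n
x = np.arange(n) * dx
u0 = np.exp(-((x - 0.5) / 0.1) ** 2)
u = u0.copy()
dt = 1.0 / 100            # CFL = 2 > 1 is fine for semi-Lagrangian schemes
for _ in range(100):      # total time 1.0 = one full revolution
    u = semi_lagrangian_step(u, velocity, dx, dt)
print("max deviation after one revolution:", float(np.abs(u - u0).max()))
```

Unconditional stability with respect to the CFL number is the property that makes the semi-Lagrangian viewpoint attractive for advection-dominated problems.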


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 610
Author(s):  
Chunbao Li ◽  
Hui Cao ◽  
Mengxin Han ◽  
Pengju Qin ◽  
Xiaohui Liu

The marine derrick sometimes operates under extreme weather conditions, especially high winds; therefore, buckling analysis of the components in the derrick is one of the critical subjects of engineering safety research. This paper aimed to study the local stability of the marine derrick and to propose an analytical method for geometrically nonlinear problems. A rod in the derrick is simplified as a compression rod with simply supported ends that is subjected to a transverse uniform load. Considering the second-order effect, differential equations were used to establish the deflection, rotation angle, and bending moment equations of the derrick rod under the lateral uniform load. This method is defined as a geometrically nonlinear analytical method. Moreover, the deflection and stability of the derrick members were analyzed, and a practical calculation formula was obtained. The Ansys analysis results were compared with the calculated results of this paper.
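For context, the classical second-order solution for this configuration (a simply supported compression rod under a transverse uniform load) has a closed form; the sketch below evaluates it and its amplification over the first-order deflection. The numbers are illustrative, and the formula is the standard textbook beam-column result, not necessarily the paper's final practical formula.

```python
import math

def midspan_deflection(q, P, E, I, L):
    """Midspan deflection of a simply supported rod under transverse
    uniform load q with axial compression P (second-order theory).

    Classical beam-column result: with u = (L/2) * sqrt(P / (E*I)),
        delta = (q*E*I / P**2) * (1/cos(u) - 1) - q*L**2 / (8*P),
    which reduces to the first-order value 5*q*L**4 / (384*E*I) as P -> 0
    and diverges as P approaches the Euler load (cos(u) -> 0).
    """
    u = 0.5 * L * math.sqrt(P / (E * I))
    return (q * E * I / P**2) * (1.0 / math.cos(u) - 1.0) - q * L**2 / (8.0 * P)

# Illustrative numbers (not from the paper): a slender steel member.
E, I, L, q = 2.1e11, 2.0e-6, 6.0, 1.0e3      # Pa, m^4, m, N/m
P_euler = math.pi**2 * E * I / L**2          # Euler buckling load
delta_linear = 5 * q * L**4 / (384 * E * I)  # first-order deflection
for frac in (0.01, 0.3, 0.6):
    amp = midspan_deflection(q, frac * P_euler, E, I, L) / delta_linear
    print(f"P = {frac:.2f} * P_Euler  ->  amplification {amp:.3f}")
```

The growth of the amplification factor as P approaches the Euler load is exactly the second-order effect the abstract refers to.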


2021 ◽  
Vol 11 (7) ◽  
pp. 3059
Author(s):  
Myeong-Hun Jeong ◽  
Tae-Young Lee ◽  
Seung-Bae Jeon ◽  
Minkyo Youm

Movement analytics and mobility insights play a crucial role in urban planning and transportation management. The plethora of mobility data sources, such as GPS trajectories, poses new challenges and opportunities for understanding and predicting movement patterns. In this study, we predict highway speed using a gated recurrent unit (GRU) neural network. Previous approaches, based on statistical models, suffer from inherent features of traffic data such as nonlinearity. The proposed method predicts highway speed with the GRU after training on digital tachograph (DTG) data. The DTG data were recorded over one month, yielding approximately 300 million records that include the velocities and locations of vehicles on the highway. Experimental results demonstrate that the GRU-based deep learning approach outperformed the state-of-the-art alternatives, the autoregressive integrated moving average (ARIMA) model and the long short-term memory (LSTM) neural network, in terms of prediction accuracy. Further, the computational cost of the GRU model was lower than that of the LSTM. The proposed method can be applied to traffic prediction and intelligent transportation systems.
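A GRU step can be written out directly to show why it is cheaper than an LSTM: two gates and one candidate state instead of three gates plus a separate cell state. The weights below are random and untrained, and the input series is a synthetic stand-in for DTG speeds, so this is a structural sketch rather than the study's trained model.

```python
import numpy as np

rng = np.random.default_rng(2)

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """Single GRU step: gated update of hidden state h given input x.

    z (update gate) blends the old state with the candidate h_tilde;
    r (reset gate) controls how much history enters the candidate.
    Fewer parameters than an LSTM cell, which is the structural source
    of the lower computational cost reported in the abstract.
    """
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)
    r = sig(Wr @ x + Ur @ h)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1.0 - z) * h + z * h_tilde

# Toy run: 1-D speed input, hidden size 4, untrained random weights.
d_in, d_h = 1, 4
W = [rng.normal(0, 0.5, (d_h, d_in)) for _ in range(3)]
U = [rng.normal(0, 0.5, (d_h, d_h)) for _ in range(3)]
h = np.zeros(d_h)
speeds = np.sin(np.linspace(0, 3, 30))   # synthetic stand-in for a DTG speed series
for x in speeds:
    h = gru_cell(np.array([x]), h, W[0], U[0], W[1], U[1], W[2], U[2])
print("final hidden state:", h)
```

In a trained model a linear readout of the final hidden state would produce the speed forecast; here the point is only the gate structure.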


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1590
Author(s):  
Arnak Poghosyan ◽  
Ashot Harutyunyan ◽  
Naira Grigoryan ◽  
Clement Pang ◽  
George Oganesyan ◽  
...  

The main purpose of application performance monitoring/management (APM) software is to ensure the highest availability, efficiency, and security of applications. APM software accomplishes these goals through automation, measurement, analysis, and diagnostics. Gartner specifies three crucial capabilities of APM software. The first is end-user experience monitoring, which reveals the interactions of users with application and infrastructure components. The second is application discovery, diagnostics, and tracing. The third is machine learning (ML)- and artificial intelligence (AI)-powered data analytics for prediction, anomaly detection, event correlation, and root cause analysis. Time series metrics, logs, and traces are the three pillars of observability and a valuable source of information for IT operations. Accurate, scalable, and robust time series forecasting and anomaly detection are required capabilities of such analytics. Approaches based on neural networks (NNs) and deep learning are gaining popularity due to their flexibility and ability to tackle complex nonlinear problems. However, some disadvantages of NN-based models for distributed cloud applications temper expectations and require specific approaches. We demonstrate how NN models, pretrained on a global time series database, can be applied to customer-specific data using transfer learning. In general, NN models operate adequately only on stationary time series. Application to nonstationary time series requires multilayer data processing, including hypothesis testing for data categorization, category-specific transformations into stationary data, forecasting, and backward transformation. We present the mathematical background of this approach and discuss experimental results based on an implementation for Wavefront by VMware (an APM product) while monitoring real customer cloud environments.
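The forward/backward transformation pipeline for nonstationary series can be sketched with the simplest category-specific transformation, first differencing: fit the model on the stationary increments, then invert the transform by cumulative summation. The trend-plus-noise series and the mean-increment "model" below are illustrative assumptions, not the actual Wavefront pipeline or its hypothesis tests.

```python
import numpy as np

rng = np.random.default_rng(3)

def to_stationary(y):
    """Forward transform: first-difference a trending series.
    Keep y[0] so the transform can be inverted later."""
    return np.diff(y), y[0]

def from_stationary(dy, y0):
    """Backward transform: cumulative sum restores the original scale."""
    return np.concatenate([[y0], y0 + np.cumsum(dy)])

# Synthetic nonstationary metric: linear trend + noise (a stand-in for a
# cloud time series; a real pipeline would first run hypothesis tests to
# categorize the series and pick the matching transformation).
t = np.arange(200)
y = 0.5 * t + rng.normal(0, 1.0, t.size)

dy, y0 = to_stationary(y)
assert np.allclose(from_stationary(dy, y0), y)   # round trip is exact

# Naive "model" on the stationary series: forecast the mean increment,
# then map the horizon back to the original scale.
h = 10
dy_forecast = np.full(h, dy.mean())
y_forecast = y[-1] + np.cumsum(dy_forecast)
print("10-step-ahead forecast:", y_forecast[-1])
```

In the APM setting the mean-increment model would be replaced by the pretrained NN forecaster, but the forward-transform / forecast / backward-transform structure is the same.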

