On the Use of Cloud Analysis for Structural Glass Members under Seismic Events

2021 ◽  
Vol 13 (16) ◽  
pp. 9291
Author(s):  
Silvana Mattei ◽  
Marco Fasan ◽  
Chiara Bedon

Current standards for seismic-resistant buildings provide recommendations for various structural systems, but no specific provisions are given for structural glass. As such, the seismic design of glass joints and members can result in improper sizing and inefficient solutions or calculation procedures. An open issue is the lack of reliable and generalized performance limit indicators (or "engineering demand parameters", EDPs) for glass structures, which represent the basic input for seismic analyses and q-factor estimates. In this paper, special care is given to the q-factor assessment of glass frames under in-plane seismic loads. Efficient finite element (FE) numerical simulations are used to support the local/global analysis of mechanical behavior. Based on extensive non-linear dynamic parametric calculations, numerical outcomes are discussed for three different approaches that are well consolidated for ordinary structural systems. Among them, cloud analysis is characterized by high computational efficiency, but it requires the definition of specific EDPs as well as the choice of reliable input seismic signals. In this regard, a comparative parametric study is carried out with the support of the incremental dynamic analysis (IDA) approach for the herein-called "dynamic" (M1) and "mixed" (M2) procedures, alongside the linear regression of cloud analysis data (M3). The potential and limits of the selected calculation methods are then discussed, with a focus on sample size, computational cost, captured mechanical phenomena, and predicted q-factor values for a case-study glass frame.
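The cloud-method regression (M3) mentioned above is conventionally a linear fit in log-log space of demand (EDP) against intensity (IM) from unscaled records. A minimal sketch of that generic procedure, with purely illustrative (IM, EDP) values that are not the paper's data:

```python
import numpy as np

# Hypothetical cloud of (IM, EDP) pairs, one per unscaled ground-motion
# record: IM = spectral acceleration [g], EDP = a peak drift-type demand
# (illustrative values only).
im  = np.array([0.10, 0.15, 0.22, 0.30, 0.45, 0.60, 0.80, 1.10])
edp = np.array([0.0008, 0.0013, 0.0020, 0.0026, 0.0041, 0.0055, 0.0078, 0.0105])

# Cloud method: fit the power law EDP = a * IM^b by linear regression
# in log-log space, ln(EDP) = ln(a) + b * ln(IM).
b, ln_a = np.polyfit(np.log(im), np.log(edp), 1)
resid = np.log(edp) - (ln_a + b * np.log(im))
beta = np.sqrt((resid ** 2).sum() / (len(im) - 2))   # lognormal dispersion

# Median demand predicted at a target intensity, e.g. IM = 0.5 g.
edp_median = np.exp(ln_a) * 0.5 ** b
print(b, beta, edp_median)
```

The fitted slope b, dispersion beta, and median demand are the basic outputs a cloud analysis feeds into fragility or q-factor estimates.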

2017 ◽  
Vol 2017 ◽  
pp. 1-15 ◽  
Author(s):  
Jimmy Chi Hung Fung ◽  
Guangze Gao

The ability of numerical simulations to predict typhoons has improved in recent decades. Although track prediction is satisfactory, intensity prediction is still far from adequate. Vortex initialization is an efficient method to improve the estimation of the initial conditions for typhoon forecasting. In this paper, a new vortex initialization scheme is developed and evaluated. The scheme requires only observations of the radius of maximum wind and the maximum wind speed, in addition to the global analysis data. It also satisfies the vortex boundary conditions, meaning that the vortex merges continuously into the background environment. The scheme has a low computational cost and the flexibility to adjust the vortex structure. It was evaluated with three metrics: track, central sea-level pressure (CSLP), and maximum surface wind speed (MWSP). Simulations were conducted using the WRF-ARW numerical weather prediction model. Super and severe typhoon cases with insufficiently strong initial MWSP were simulated with and without the vortex initialization scheme, and the results were compared with 6-hourly observational data from the Hong Kong Observatory (HKO). The vortex initialization scheme improved the intensity (CSLP and MWSP) predictions. The scheme was also compared with other initialization methods and schemes.
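To illustrate the two ingredients the abstract emphasizes (a vortex specified only by the radius of maximum wind and maximum wind speed, and a boundary condition that merges it continuously into the background), here is a toy wind profile. The modified-Rankine decay and cosine taper are illustrative choices, not the paper's actual formulation:

```python
import numpy as np

def vortex_wind(r, r_max, v_max, r_out, alpha=0.6):
    """Tangential wind [m/s] of an idealized axisymmetric vortex built only
    from the radius of maximum wind r_max and the maximum wind v_max.
    Beyond r_max a modified-Rankine decay is cosine-tapered to zero at the
    vortex boundary r_out, so the vortex merges continuously into the
    background field. (Illustrative profile, not the paper's scheme.)"""
    r = np.asarray(r, dtype=float)
    core  = v_max * r / r_max                                # solid-body core
    decay = v_max * (r_max / np.maximum(r, r_max)) ** alpha  # outer decay
    s = np.clip((r - r_max) / (r_out - r_max), 0.0, 1.0)
    taper = 0.5 * (1.0 + np.cos(np.pi * s))                  # 1 at r_max, 0 at r_out
    return np.where(r <= r_max, core, decay * taper)

r = np.array([0.0, 50e3, 100e3, 300e3, 600e3])   # radius [m]
v = vortex_wind(r, r_max=50e3, v_max=55.0, r_out=600e3)
print(v.round(1))
```

The wind is zero at the center, peaks at exactly v_max at r_max, and vanishes at the vortex boundary, which is the continuity property a merging scheme must enforce.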


Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1771
Author(s):  
Ferdinando Di Martino ◽  
Irina Perfilieva ◽  
Salvatore Sessa

The fuzzy transform is a technique for approximating a function of one or more variables that researchers have applied to various image- and data-analysis tasks. In this work we present a survey of fuzzy transform methods proposed in recent years across data mining disciplines, such as the detection of relationships between features, the extraction of association rules, time series analysis, and data classification. After giving the definition of the fuzzy transform in one or more dimensions, including the constraint of sufficient data density with respect to the fuzzy partition, we analyze the data analysis approaches recently proposed in the literature that are based on the fuzzy transform. In particular, we examine the strategies these approaches adopt to manage the sufficient-data-density constraint and the performance results they obtain, compared with those measured for other methods in the literature. The last section is dedicated to final considerations and future scenarios for using the fuzzy transform in the analysis of massive, high-dimensional data.
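The one-dimensional fuzzy transform and its data-density constraint can be sketched concretely: each component is a weighted mean of the data under one basic function of a uniform triangular (Ruspini) partition, and the constraint requires every basic function to cover at least one data point. The function and partition size below are illustrative:

```python
import numpy as np

def triangular_partition(a, b, n):
    """n uniform triangular basic functions A_0..A_{n-1} on [a, b]
    (a Ruspini partition: at every x they sum to 1)."""
    nodes = np.linspace(a, b, n)
    h = nodes[1] - nodes[0]
    def A(k, x):
        return np.maximum(0.0, 1.0 - np.abs(x - nodes[k]) / h)
    return A

def f_transform(x, y, A, n):
    """Direct F-transform: each component is a weighted mean of the data
    under one basic function. Sufficient data density means every basic
    function must cover at least one data point."""
    comps = np.empty(n)
    for k in range(n):
        w = A(k, x)
        assert w.sum() > 0.0, f"data density violated for basic function {k}"
        comps[k] = (w * y).sum() / w.sum()
    return comps

def inverse_f_transform(xq, comps, A, n):
    """Inverse F-transform: a smooth approximate reconstruction."""
    return sum(comps[k] * A(k, xq) for k in range(n))

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)
n = 12
A = triangular_partition(0.0, 2.0 * np.pi, n)
comps = f_transform(x, y, A, n)
approx = inverse_f_transform(x, comps, A, n)
err = np.max(np.abs(approx - y))   # largest near the interval ends
print(err)
```

Refining the partition (larger n) sharpens the approximation, but tightens the data-density constraint, which is exactly the trade-off the surveyed approaches manage.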


2013 ◽  
Vol 3 (2) ◽  
pp. 120-137 ◽  
Author(s):  
Jan Brandts ◽  
Ricardo R. da Silva

Given two n × n matrices A and A0 and a sequence of nested subspaces {0} = V0 ⊂ V1 ⊂ … ⊂ Vn = ℂ^n with dim Vk = k, the k-th subspace-projected approximated matrix Ak is defined as Ak = A + Πk(A0 − A)Πk, where Πk is the orthogonal projection onto the orthogonal complement of Vk. Consequently, Akν = Aν and ν*Ak = ν*A for all ν ∈ Vk. Thus (Ak)k=0,…,n is a sequence of matrices that gradually changes from A0 into An = A. In principle, the definition of Vk+1 may depend on properties of Ak, which can be exploited to try to force Ak+1 to be closer to A in some specific sense. By choosing A0 as a simple approximation of A, this turns the subspace-projected approximated matrices into interesting preconditioners for linear algebra problems involving A. In the context of eigenvalue problems, they appeared in this role in Shepard et al. (2001), resulting in their Subspace Projected Approximate Matrix method. In this article, we investigate their use in solving linear systems of equations Ax = b. In particular, we seek conditions under which the solutions xk of the approximate systems Akxk = b are computable at low computational cost, so that the efficiency of the corresponding method is competitive with existing methods such as the Conjugate Gradient and Minimal Residual methods. We also consider how well the sequence (xk)k≥0 approximates x by performing some illustrative numerical tests.
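The defining identity above can be verified numerically. A minimal numpy sketch, with A0 taken as the diagonal of A purely for illustration and V_k spanned by random orthonormal vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
A  = rng.standard_normal((n, n))
A0 = np.diag(np.diag(A))          # a simple approximation of A (illustrative)

# Orthonormal basis; the first k columns span the subspace V_k.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Vk = Q[:, :k]

# Pi_k: orthogonal projection onto the orthogonal complement of V_k.
Pik = np.eye(n) - Vk @ Vk.T
Ak = A + Pik @ (A0 - A) @ Pik

# A_k agrees with A on V_k: A_k v = A v and v* A_k = v* A for v in V_k.
v = Vk @ rng.standard_normal(k)
agrees = np.allclose(Ak @ v, A @ v) and np.allclose(v @ Ak, v @ A)
print(agrees)
```

Since Πk annihilates V_k, the correction term vanishes on that subspace; at k = 0 the projection is the identity (giving A0) and at k = n it is zero (giving A).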


2017 ◽  
Vol 866 ◽  
pp. 108-111
Author(s):  
Theerapan Saesong ◽  
Pakpoom Ratjiranukool ◽  
Sujittra Ratjiranukool

The Weather Research and Forecasting (WRF) model, a numerical weather model developed by the National Center for Atmospheric Research (NCAR), is adapted here as a regional climate model. The model is run to simulate the daily mean surface air temperatures over northern Thailand in 2010. Boundary conditions are provided by the National Centers for Environmental Prediction FNL (Final) Operational Global Analysis data, which are on a 1° × 1° grid. The temperatures simulated by WRF with four land-surface options, i.e., no land-surface scheme (option 0), thermal diffusion (option 1), Noah land surface (option 2), and RUC land surface (option 3), were compared against observational data from the Thai Meteorological Department (TMD). Preliminary analysis indicated that the WRF simulations with the Noah scheme reproduced the most reliable daily mean temperatures over northern Thailand.
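Ranking land-surface options against station data typically comes down to standard verification scores such as bias and RMSE. A sketch with made-up temperature values (not TMD data) comparing two hypothetical runs:

```python
import numpy as np

def bias_rmse(sim, obs):
    """Standard verification scores for comparing model output with
    station observations."""
    d = np.asarray(sim) - np.asarray(obs)
    return d.mean(), np.sqrt((d ** 2).mean())

# Hypothetical daily mean surface air temperatures [deg C]; illustrative
# values only.
obs  = np.array([24.1, 25.3, 26.0, 25.7, 24.8])
runs = {"Noah": np.array([24.4, 25.1, 26.3, 25.5, 25.0]),
        "RUC":  np.array([25.2, 26.4, 27.1, 26.8, 25.9])}

scores = {name: bias_rmse(sim, obs) for name, sim in runs.items()}
for name, (b, r) in scores.items():
    print(f"{name}: bias {b:+.2f}, RMSE {r:.2f}")
```

Here the "Noah" run has a smaller bias and RMSE and would be judged the more reliable option, mirroring the kind of comparison described above.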


Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 763 ◽  
Author(s):  
Alaa Sagheer ◽  
Mohammed Zidan ◽  
Mohammed M. Abdelsamea

Pattern classification represents a challenging problem in machine learning and data science research domains, especially when there is a limited availability of training samples. In recent years, artificial neural network (ANN) algorithms have demonstrated astonishing performance when compared to traditional generative and discriminative classification algorithms. However, due to the complexity of classical ANN architectures, ANNs are sometimes incapable of providing efficient solutions when addressing complex distribution problems. Motivated by the mathematical definition of a quantum bit (qubit), we propose a novel autonomous perceptron model (APM) that can solve the problem of the architecture complexity of traditional ANNs. APM is a nonlinear classification model that has a simple and fixed architecture inspired by the computational superposition power of the qubit. The proposed perceptron is able to construct the activation operators autonomously after a limited number of iterations. Several experiments using various datasets are conducted, where all the empirical results show the superiority of the proposed model as a classifier in terms of accuracy and computational time when it is compared with baseline classification models.


2017 ◽  
Vol 17 (01) ◽  
pp. 1750015 ◽  
Author(s):  
R. Emre Erkmen ◽  
Magdi Mohareb ◽  
Ashkan Afnani

Elevated pipelines are commonly encountered in petro-chemical and industrial applications. Within these applications, pipelines normally span hundreds of meters and are thus analyzed using one-dimensional (1D) beam-type finite elements when the global behavior of the pipeline is sought at a reasonably low computational cost. Standard beam-type elements, while computationally economical, are based on the assumption of a rigid cross-section and are thus unable to capture the effects of localized cross-sectional deformations. Such effects can be captured through shell-type finite element models, but for long pipelines shell models become prohibitively expensive. Within this context, the present study formulates an efficient numerical model that combines the efficiency of beam-type solutions with the accuracy of shell-type solutions. An appealing feature of the model is that it separates the global analysis, based on simple beam-type elements, from the local analysis, based on shell-type elements. This is achieved through a domain-decomposition procedure within the framework of the bridging multi-scale method of analysis. Solutions based on the present model are compared to those based on full shell-type analysis; the comparison demonstrates the accuracy and efficiency of the proposed method.


Author(s):  
Marcos V. Rodrigues ◽  
Caroline Ferraz ◽  
Danilo Machado L. da Silva ◽  
Bruna Nabuco

With new discoveries in the Brazilian pre-salt area, the oil industry is facing huge challenges for exploration in ultra-deep waters. The riser system used to transport oil from the seabed to the production unit is one of them, and the definition of riser configurations for ultra-deep waters is a real challenge. Problems have been identified for flexible riser, hybrid riser, and steel catenary riser (SCR) configurations in complying with rule requirements and criteria in water depths of 2000 m. The objective of this work is to present a study of the fatigue behavior of a steel catenary riser in 1800 m of water depth. One of the main challenges for SCRs in ultra-deep waters is fatigue at the touch-down zone (TDZ) due to first-order platform motions. A case study is presented for a steel catenary riser connected to a semi-submersible platform. The influence of several design and analysis parameters is studied in order to evaluate their impact on the SCR fatigue life. The main parameters evaluated in this work are: the mesh refinement at the touch-down zone in the global analysis; the internal fluid density variation along the riser; and the first-order platform motions applied to the top of the riser. In addition to the results of this paper, some highlights are presented for SCR analysis in similar conditions.
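Fatigue life at a riser hot spot is commonly obtained from a stress-range histogram, an S-N curve, and Miner's linear damage accumulation. A generic sketch of that calculation; the S-N parameters and histogram below are placeholders (design codes such as DNV-type S-N curves prescribe the real coefficients per detail class), not values from this study:

```python
def miner_damage(stress_histogram, a_bar=1.0e12, m=3.0):
    """Annual fatigue damage from a histogram of stress ranges using a
    one-slope S-N curve N(S) = a_bar * S**(-m) and Miner's linear damage
    accumulation. a_bar and m are placeholder values."""
    damage = 0.0
    for s_mpa, n_cycles in stress_histogram:
        n_allowable = a_bar * s_mpa ** (-m)   # cycles to failure at range S
        damage += n_cycles / n_allowable
    return damage

# Hypothetical annual stress-range histogram at a TDZ hot spot:
# (stress range [MPa], cycles per year).
histogram = [(10.0, 2.0e6), (30.0, 2.0e5), (60.0, 1.0e4)]
damage = miner_damage(histogram)
life_years = 1.0 / damage
print(damage, life_years)
```

Because damage scales with S^m, the few large stress ranges near the TDZ can dominate the life estimate, which is why mesh refinement and motion inputs there matter so much.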


Author(s):  
Lei Cheng ◽  
Zhenzhou Lu ◽  
Luyi Li

For structural systems with both epistemic and aleatory uncertainties, in order to analyze the effects of different regions of the epistemic parameters on the failure probability, two regional importance measures (RIMs) are first proposed, i.e., the contribution to the mean of the failure probability (CMFP) and the contribution to the variance of the failure probability (CVFP), and their properties are analyzed and verified. Then, to analyze the effect of different regions of the epistemic parameters on their corresponding first-order variance (i.e., main effect) in Sobol's variance decomposition, another RIM is proposed, named the contribution to the variance of the conditional mean of the failure probability (CVCFP). CVCFP is then extended to define a further RIM, named the contribution to the mean of the conditional mean of the failure probability (CMCFP), which measures the contribution of regions of the epistemic parameters to the mean of the conditional mean of the failure probability. Because the computational cost of calculating the conditional mean of the failure probability may be prohibitive, the state dependent parameter (SDP) method is introduced to estimate CVCFP and CMCFP. Several examples demonstrate the effectiveness of the proposed RIMs, as well as the efficiency and accuracy of the SDP-based method.
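The idea behind a regional importance measure can be illustrated with a toy double-loop Monte Carlo: estimate the failure probability conditional on the epistemic parameter, then ask what share of the mean failure probability each region of that parameter contributes. The limit state, distributions, and region split below are invented for illustration, and the "share" computed is only a CMFP-like quantity, not the paper's exact definition:

```python
import numpy as np

rng = np.random.default_rng(1)

def conditional_pf(theta, n_inner=4000):
    """Failure probability conditional on the epistemic parameter theta,
    for the toy limit state g = 3 + theta - X with aleatory X ~ N(0, 1)."""
    x = rng.standard_normal(n_inner)
    return np.mean(3.0 + theta - x < 0.0)

# Epistemic parameter theta ~ N(0, 0.5); estimate Pf(theta) by Monte Carlo.
thetas = rng.normal(0.0, 0.5, 1000)
pf = np.array([conditional_pf(t) for t in thetas])
mean_pf = pf.mean()

# Share of the mean failure probability contributed by each quartile
# region of theta (a CMFP-like regional contribution).
edges = np.quantile(thetas, [0.0, 0.25, 0.5, 0.75, 1.0])
shares = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (thetas >= lo) & (thetas <= hi)
    shares.append(pf[mask].sum() / pf.sum())
print(mean_pf, [round(s, 2) for s in shares])
```

Low-theta regions reduce the safety margin in this toy model and thus contribute the largest share, which is the kind of regional ranking the proposed RIMs formalize; the SDP method in the paper replaces the costly inner loop.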


Author(s):  
Noriyasu Hirokawa ◽  
Kikuo Fujita

This paper proposes a mini-max formulation for strict robust design optimization under correlative variation, based on a design-variation hyper sphere and quadratic polynomial approximation. While various formulations and techniques have been developed for computational robust design, they face a compromise among the modeling of parameter variation, feasibility assessment, the definition of optimality (such as sensitivity), and computational cost. The formulation of this paper aims to ensure that all points within the distribution region are thoroughly optimized. For this purpose, the design space with correlative variation is diagonalized and isoparameterized into a hyper sphere, and the nominal constraints and the nominal objective are modeled as quadratic polynomials. This transformation and approximation enable the analytical discrimination between inner-type and boundary-type worst designs, together with their quantified values, at low computational cost under a certain condition, and they yield a procedural definition of the strictly robust optimality of a design as a maximization problem. The minimization of this formulation, that is, mini-max optimization, finds the robust design in the above sense. Its validity is ascertained through numerical examples.
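The inner maximization over the variation hyper sphere, and the inner-vs-boundary distinction for the worst design, can be sketched for a quadratic model. This is a generic illustration under assumed coefficients, not the paper's analytical procedure; the boundary case is approximated here by sampled directions rather than solved analytically:

```python
import numpy as np

def worst_case_quadratic(c, g, H, n_dirs=20000, seed=0):
    """Maximum of q(u) = c + g.u + 0.5 u^T H u over the unit ball
    ||u|| <= 1 (the isoparameterized variation hyper sphere), with a
    simple inner- vs boundary-type discrimination. Sketch only."""
    if np.linalg.eigvalsh(H).max() < 0.0:        # strictly concave quadratic
        u_star = -np.linalg.solve(H, g)          # unconstrained maximizer
        if np.linalg.norm(u_star) <= 1.0:        # inner-type worst case
            return c + g @ u_star + 0.5 * u_star @ H @ u_star, "inner"
    # Otherwise the maximum lies on the boundary ||u|| = 1: sampled search.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n_dirs, len(g)))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    vals = c + u @ g + 0.5 * np.einsum("ij,jk,ik->i", u, H, u)
    return vals.max(), "boundary"

# Concave example with a small gradient: the worst case is of inner type.
val, kind = worst_case_quadratic(1.0, np.array([0.2, -0.1]),
                                 np.diag([-2.0, -1.0]))
print(kind, val)
```

Embedding this inner maximization inside an outer minimization over the nominal design is what makes the overall scheme a mini-max optimization.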

