Macro-meso scale simulations of 3D woven composite reinforcements during the forming process

2021
Author(s): Jie Wang, Peng Wang, Nahiène Hamila, Philippe Boisse

During the forming stage of the RTM process, the deformations and orientations of yarns at the mesoscopic scale are essential for evaluating the mechanical behaviour of the final composite product and for calculating the permeability of the reinforcement. However, due to the high computational cost, it is very difficult to carry out a mesoscopic draping simulation for the entire reinforcement. In this paper, a macro-meso scale simulation of composite reinforcements is presented in order to predict mesoscopic deformations of the fabric in a reasonable calculation time. The proposed multi-scale method links the macroscopic simulation of the reinforcement with the mesoscopic modelling of the representative volume element (RVE) through a macro-meso embedded analysis. On the basis of macroscopic simulations using a hyperelastic constitutive law for the reinforcement, an embedded mesoscopic geometry is first deduced from the macroscopic draping simulation. To overcome the drawback of the macro-meso embedded solution, which leads to unrealistically large yarn extensions, local mesoscopic simulations based on the embedded analysis are carried out on a single RVE with specific boundary conditions. Finally, the multi-scale forming simulations are compared with experimental results, illustrating the efficiency of the proposed approach in terms of accuracy and CPU time.

Author(s): Muhammad S. Sarfaraz, Bojana V. Rosić, Hermann G. Matthies, Adnan Ibrahimbegović

Abstract Multi-scale processes governed on each scale by separate principles for evolution or equilibrium are coupled by matching the stored energy and dissipation, in line with the Hill-Mandel principle. We are interested in cementitious materials, and consider here the macro- and meso-scale behaviour of such a material. Accurate representations of stored energy and dissipation are essential for the description of irreversible material behaviour, and here a Bayesian approach is used to match these quantities on different scales. This is a probabilistic upscaling, and as such it makes it possible to capture, among other things, the loss of resolution due to scale coarsening, possible model errors, localisation effects, and the geometric and material randomness of the meso-scale constituents. On the coarser (macro) scale, optimal material parameters are estimated probabilistically for certain possible behaviours from the class of generalised standard material models by employing a nonlinear approximation of Bayes’s rule. To reduce the overall computational cost, a model reduction of the meso-scale simulation is achieved by combining unsupervised learning techniques, based on Bayesian copula variational inference, with functional approximation forms.


2012
Vol 2 (1)
pp. 7-9
Author(s): Satinderjit Singh

Median filtering is a commonly used technique in image processing. The main problem of the median filter is its high computational cost: sorting N pixels has time complexity O(N·log N), even with the most efficient sorting algorithms. When the median filter must be carried out in real time, a software implementation on general-purpose processors does not usually give good results. This paper presents an efficient algorithm for median filtering with a 3×3 kernel that requires only about 9 comparisons per pixel, exploiting spatial coherence between neighboring filter computations. The basic algorithm calculates two medians in one step and reuses sorted slices of three vertically neighboring pixels. An extension of this algorithm to 2D spatial coherence is also examined, which calculates four medians per step.
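The sorted-column idea behind the low comparison count can be sketched in a few lines of Python. This is a minimal illustration of the classic three-sorted-columns network for a single 3×3 median, not the authors' implementation (all names are illustrative):

```python
def sort3(a, b, c):
    """Return (lo, mid, hi) of three values using at most 3 comparisons."""
    lo, hi = (a, b) if a < b else (b, a)
    if c < lo:
        return c, lo, hi
    if c > hi:
        return lo, hi, c
    return lo, c, hi

def med3(a, b, c):
    """Median of three values via min/max operations."""
    return max(min(a, b), min(max(a, b), c))

def median9(window):
    """Median of a 3x3 window computed from its three sorted columns."""
    cols = [sort3(window[0][j], window[1][j], window[2][j]) for j in range(3)]
    lo = max(c[0] for c in cols)                     # largest of the column minima
    mid = med3(cols[0][1], cols[1][1], cols[2][1])   # median of the column medians
    hi = min(c[2] for c in cols)                     # smallest of the column maxima
    return med3(lo, mid, hi)
```

Sliding the kernel one pixel to the right only requires sorting the one newly entered column; the two columns shared with the previous window are reused, which is where the roughly 9-comparisons-per-pixel amortized cost comes from.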


1995
Vol 32 (2)
pp. 95-103
Author(s): José A. Revilla, Kalin N. Koev, Rafael Díaz, César Álvarez, Antonio Roldán

One factor in determining the transport capacity of coastal interceptors in Combined Sewer Systems (CSS) is the reduction of Dissolved Oxygen (DO) in coastal waters caused by the overflows. The study of the evolution of DO in coastal zones is complex, and the high computational cost of mathematical models makes the required probabilistic analysis impractical. Alternative methods, based on a limited number of runs of such mathematical models, are therefore needed. In this paper two alternative methods are presented for the study of the oxygen deficit resulting from CSS overflows. The first focuses the statistical analysis on the causes of the deficit (the volume discharged); the second concentrates on the effects (the concentrations of oxygen in the sea). Both methods have been applied in a study of the coastal interceptor at Pasajes Estuary (Guipúzcoa, Spain), with similar results.


Mathematics
2021
Vol 9 (8)
pp. 891
Author(s): Aurea Grané, Alpha A. Sow-Barry

This work provides a procedure with which to construct and visualize profiles, i.e., groups of individuals with similar characteristics, for weighted and mixed data by combining two classical multivariate techniques: multidimensional scaling (MDS) and the k-prototypes clustering algorithm. The well-known drawback of classical MDS on large datasets is circumvented by selecting a small random sample of the dataset, whose individuals are clustered by means of an adapted version of the k-prototypes algorithm and mapped via classical MDS. Gower’s interpolation formula is then used to project the remaining individuals onto the resulting configuration. Throughout the process, Gower’s distance is used to measure the proximity between individuals. The methodology is illustrated on a real dataset obtained from the Survey of Health, Ageing and Retirement in Europe (SHARE), which was carried out in 19 countries and represents over 124 million aged individuals in Europe. The performance of the method was evaluated through a simulation study, whose results show that the new proposal overcomes the high computational cost of classical MDS with low error.
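The two numerical ingredients, classical MDS on a small sample and Gower’s interpolation formula for projecting the remaining individuals, can be sketched in NumPy. For simplicity this toy version uses squared Euclidean distances in place of Gower’s mixed-data distance, and the function names are illustrative, not the authors' code:

```python
import numpy as np

def classical_mds(D2, k=2):
    """Embed n points in k dimensions from an (n, n) matrix of squared distances."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    G = -0.5 * J @ D2 @ J                 # Gram (inner-product) matrix
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:k]         # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def gower_interpolate(X, d2_new):
    """Place a new point from its squared distances to the already-embedded sample."""
    q = np.sum(X ** 2, axis=1)            # squared norms of sample coordinates
    b = 0.5 * (q - d2_new)                # inner products implied by the distances
    return np.linalg.lstsq(X, b, rcond=None)[0]
```

Only the small sample pays the cubic cost of the eigendecomposition; each remaining individual is then placed with a single cheap least-squares solve against the sample coordinates.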


Author(s): Seyede Vahide Hashemi, Mahmoud Miri, Mohsen Rashki, Sadegh Etedali

This paper carries out sensitivity analyses to study the effect of each design variable on the performance of the self-centering buckling restrained brace (SC-BRB) and of the corresponding buckling restrained brace (BRB) without shape memory alloy (SMA) rods. Furthermore, reliability analyses of the BRB and SC-BRB are performed. Given the high computational cost of simulation methods, three meta-models, Kriging, radial basis functions (RBF), and the polynomial response surface method (PRSM), are used to construct surrogate models. To this end, nonlinear dynamic analyses are conducted on both the BRB and the SC-BRB using OpenSees software. The results showed that the SMA area, the SMA length ratio, and the BRB core area have the largest effect on the failure probability of the SC-BRB. It is concluded that Kriging-based Monte Carlo Simulation (MCS) performs best at estimating the limit state function (LSF) of the BRB and SC-BRB in the reliability analysis procedures. Considering the effect of changing the maximum cyclic loading on the failure probability and comparing the failure probabilities for different LSFs, it is also found that the reliability indices of the SC-BRB were always higher than those of the BRB, which confirms the superior performance of the SC-BRB.
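The surrogate-plus-MCS workflow can be illustrated with the simplest of the three meta-models, a polynomial response surface. The limit state function below is a cheap hypothetical stand-in for an expensive OpenSees analysis, and all names and numbers are invented for illustration:

```python
import numpy as np

def true_lsf(x):
    # Hypothetical limit state g(x); failure is defined as g(x) < 0.
    # In the paper this role is played by expensive nonlinear dynamic analyses.
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

def fit_prs(X, g):
    # Quadratic polynomial response surface without cross terms:
    # g(x) ~ c0 + sum_i (a_i x_i + b_i x_i^2), fitted by least squares.
    A = np.column_stack([np.ones(len(X)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    return coef

def predict_prs(coef, X):
    return np.column_stack([np.ones(len(X)), X, X ** 2]) @ coef

rng = np.random.default_rng(1)
X_doe = rng.normal(size=(50, 2))              # small design of experiments
coef = fit_prs(X_doe, true_lsf(X_doe))        # surrogate from 50 "expensive" runs

X_mc = rng.normal(size=(200_000, 2))          # cheap Monte Carlo on the surrogate
pf = np.mean(predict_prs(coef, X_mc) < 0.0)   # failure probability estimate
```

A Kriging surrogate would replace fit_prs/predict_prs with a Gaussian-process model but leave the MCS loop unchanged; the meta-model choice only affects how accurately the LSF is approximated, not the reliability procedure itself.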


Vibration
2020
Vol 4 (1)
pp. 49-63
Author(s): Waad Subber, Sayan Ghosh, Piyush Pandita, Yiming Zhang, Liping Wang

Industrial dynamical systems often exhibit multi-scale responses due to material heterogeneity and complex operating conditions. The smallest length-scale of the system's dynamics controls the numerical resolution required to resolve the embedded physics. In practice, however, high numerical resolution is only required in a confined region of the domain where fast dynamics or localized material variability is exhibited, whereas a coarser discretization can be sufficient in the remainder of the domain. Partitioning the complex dynamical system into smaller, easier-to-solve problems based on the localized dynamics and material variability can reduce the overall computational cost. The region of interest can be specified based on the localized features of the solution, user interest, and the correlation length of the material properties. For problems where a region of interest is not evident, Bayesian inference can provide a feasible solution. In this work, we employ a Bayesian framework to update the prior knowledge of the localized region of interest using measurements of the system response. Once the region of interest is identified, the localized uncertainty is propagated forward through the computational domain. We demonstrate our framework using numerical experiments on a three-dimensional elastodynamic problem.
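The Bayesian update at the heart of such a framework can be illustrated with a toy discrete example: three candidate subdomains, a hypothetical forward model per candidate, and one noisy measurement of the response (all numbers here are invented for illustration, not taken from the paper):

```python
import numpy as np

# Candidate regions of interest and a uniform prior over them.
regions = ["left", "center", "right"]
prior = np.full(3, 1.0 / 3.0)

# Hypothetical predicted sensor response if the localized feature
# sits in each region, with Gaussian measurement noise of std sigma.
predictions = np.array([0.2, 1.0, 0.4])
sigma = 0.1
measurement = 0.95                      # observed system response

# Bayes' rule: posterior is proportional to prior times Gaussian likelihood.
likelihood = np.exp(-0.5 * ((measurement - predictions) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()
```

After the update, nearly all posterior mass sits on the region whose predicted response matches the measurement; that region then receives the fine discretization while the rest of the domain keeps the coarse one.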


Author(s): Yuki Takashima, Toru Nakashika, Tetsuya Takiguchi, Yasuo Ariki

Abstract Voice conversion (VC) is a technique for converting only the speaker-specific information in the source speech while preserving the associated phonemic information. Non-negative matrix factorization (NMF)-based VC has been widely researched because of the natural-sounding voice it achieves compared with conventional Gaussian mixture model-based VC. In conventional NMF-VC, models are trained using parallel data, which means the speech data require elaborate pre-processing to generate that parallel data. NMF-VC also tends to produce a large model, as the method stores many parallel exemplars in the dictionary matrix, leading to a high computational cost. In this study, an innovative parallel-dictionary-learning method using non-negative Tucker decomposition (NTD) is proposed. The proposed method uses tensor decomposition and decomposes an input observation into a set of mode matrices and one core tensor. The proposed NTD-based dictionary-learning method estimates the dictionary matrix for NMF-VC without using parallel data. The experimental results show that the proposed method outperforms other methods in both parallel and non-parallel settings.
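The NMF factorization at the core of conventional NMF-VC can be sketched with the standard Lee-Seung multiplicative updates; this toy NumPy version illustrates only the V ≈ WH decomposition, not the proposed NTD method (dimensions and names are illustrative):

```python
import numpy as np

def nmf(V, r, iters=300, seed=0):
    """Factor a non-negative (F, T) matrix V as W (F, r) @ H (r, T).

    Multiplicative updates keep W and H non-negative; in exemplar-based
    VC, W plays the role of a speaker dictionary and H the activations.
    """
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, r)) + 1e-3
    H = rng.random((r, T)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update dictionary
    return W, H
```

In conversion, activations H estimated on source speech against a source dictionary are applied to a paired target dictionary; the NTD approach in the paper removes the need for that parallel pairing.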


2021
pp. 105678952110339
Author(s): Hongyong Jiang, Yiru Ren, Qiduo Jin

A novel synergistic multi-scale modeling framework coupling the micro- and meso-scales is proposed to predict the damage behavior of 2D triaxially braided composite (2DTBC). Based on the Bridge model, the internal stress and micro-scale damage of the constituent materials are coupled with the stress and damage of the tow. The initial effective elastic properties of the tow (IEEP), used as predefined data, are estimated by micro-mechanics models. Due to in-situ effects, a stress concentration factor (SCF) is considered in the micro-scale matrix, exhibiting progressive damage accumulation. Comparisons of IEEP and strengths between the Bridge model and Chamis' theory are conducted to validate the values of IEEP and SCF. Based on the representative volume element (RVE), the predicted macro-scale properties and damage modes of 2DTBC are consistent with available experiments and meso-scale simulations. Both the axial and transverse damage mechanisms of 2DTBC under tensile or compressive load are revealed. Micro-scale fiber and matrix damage accumulation has a significant effect on the meso-scale axial and transverse damage of the tows due to multi-scale coupling effects. Unlike existing meso-/multi-scale models, the proposed multi-scale model can capture a crucial phenomenon: the transverse damage of the tow is vulnerable to micro-scale fiber fracture. The proposed multi-scale framework provides a robust tool for future systematic studies at the constituent-material level for larger-scale aeronautical materials.


2006
Vol 04 (03)
pp. 639-647
Author(s): Eleazar Eskin, Roded Sharan, Eran Halperin

The common approaches for haplotype inference from genotype data are targeted toward phasing short genomic regions. Longer regions are often tackled in a heuristic manner, due to the high computational cost. Here, we describe a novel approach for phasing genotypes over long regions, based on combining information from local predictions on short, overlapping regions. The phasing is done in a way that maximizes a natural maximum-likelihood criterion. Among other things, this criterion takes into account the physical distance between neighboring single nucleotide polymorphisms. The approach is very efficient, has been applied to several large-scale datasets, and has been shown to be successful in two recent benchmarking studies (Zaitlen et al., in press; Marchini et al., in preparation). Our method is publicly available via a webserver at .

