Application of polyhedral meshing strategy in indoor environment simulation: Model accuracy and computing time

2021 ◽  
pp. 1420326X2110276
Author(s):  
Haofu Chen ◽  
Xiaoqing Zhou ◽  
Zhuangbo Feng ◽  
Shi-Jie Cao

Computational fluid dynamics (CFD) has proven to be a versatile tool for indoor environment simulations. The discretization of the computational domain (mesh generation) determines both the reliability and the computational cost of a CFD simulation. For cases with complex or irregular geometries, the widely used tetrahedral meshes have critical limitations, including low accuracy, a considerable grid count and high computing cost. To overcome these disadvantages, the current research developed a polyhedral-mesh-based meshing strategy for indoor environment simulations. This study applied tetrahedral, hexahedral and polyhedral meshes to indoor environment cases developed from previous studies. The simulation accuracy, computing time and physical storage of the different mesh types were compared. The results show that polyhedral meshes could save almost 95% of computing time without sacrificing model accuracy, compared with the other two mesh types at approximately the same grid numbers. Because each polyhedral cell carries more connectivity information, the polyhedral meshes occupied the most physical memory. Overall, the polyhedral-mesh-based meshing strategy produced superior performance (model accuracy and computing time) for indoor environment simulations and shows great potential in engineering applications.
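As a back-of-the-envelope illustration of the comparison above, the time saving between mesh types can be computed from wall-clock times. The timing numbers below are hypothetical, chosen only to mirror the reported ~95% figure, and are not the study's measurements.

```python
# Hypothetical sketch: time saving when comparing mesh types at
# approximately equal grid counts. Wall-clock times are made up.

def time_saving_percent(t_reference, t_candidate):
    """Percentage of computing time saved relative to a reference mesh."""
    return 100.0 * (t_reference - t_candidate) / t_reference

# Illustrative wall-clock times (hours) for one indoor-airflow case.
times = {"tetrahedral": 40.0, "hexahedral": 38.0, "polyhedral": 2.0}

for mesh in ("tetrahedral", "hexahedral"):
    saving = time_saving_percent(times[mesh], times["polyhedral"])
    print(f"polyhedral vs {mesh}: {saving:.0f}% time saved")
```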

2022 ◽  
pp. 002199832110635
Author(s):  
Junhong Zhu ◽  
Tim Frerich ◽  
Adli Dimassi ◽  
Michael Koerdt ◽  
Axel S. Herrmann

Structural aerospace composite parts are commonly cured through autoclave processing. To optimize the autoclave process, manufacturing process simulations have increasingly been used to investigate the thermal behavior of the cure assembly. For such a simulation, a computational fluid dynamics (CFD) model coupled with a finite element method (FEM) model can be used to handle the conjugate heat transfer problem between the airflow and the solid regions inside the autoclave. A transient CFD simulation requires intensive computing resources. To avoid a long computing time, a quasi-transient coupling approach is adopted, allowing a significant acceleration of the simulation process. This approach was validated for a simple geometry in a previous study. This paper provides an experimental and numerical study of heat transfer in a medium-sized autoclave for a more complicated loading condition and a composite structure, a curved shell with three stringers, that mimics the fuselage structure of an aircraft. Two lumped mass calorimeters are used to measure the heat transfer coefficients (HTCs) during the predefined curing cycle. Owing to some uncertainty in the inlet flow velocity, a correction parameter and a calibration method are proposed to reduce the numerical error. The simulation results are compared with the experimental results, which consist of thermal measurements and temperature distributions of the composite shell, to validate the simulation model. This study shows the capability and potential of the quasi-transient coupling approach for modeling heat transfer in autoclave processing, with reduced computational cost and high correlation between the experimental and numerical results.
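The lumped mass calorimeter mentioned above rests on the lumped-capacitance energy balance, h = (m·cp/A)·(dTs/dt)/(T_air − Ts). A minimal sketch, with made-up values rather than the study's measurements:

```python
# Illustrative lumped-capacitance HTC estimate behind a lumped mass
# calorimeter. All numbers are hypothetical, not the study's data.

def htc_lumped(m, cp, A, dTs_dt, T_air, T_s):
    """Heat transfer coefficient from the lumped-capacitance balance:
    m*cp*dTs/dt = h*A*(T_air - T_s)."""
    return (m * cp / A) * dTs_dt / (T_air - T_s)

# Hypothetical aluminium calorimeter block during a heating ramp.
h = htc_lumped(m=0.5, cp=900.0, A=0.015, dTs_dt=0.05, T_air=180.0, T_s=150.0)
print(f"h ≈ {h:.1f} W/(m²·K)")
```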


Author(s):  
Chen Qi ◽  
Shibo Shen ◽  
Rongpeng Li ◽  
Zhifeng Zhao ◽  
Qing Liu ◽  
...  

Abstract Nowadays, deep neural networks (DNNs) have been rapidly deployed to realize a number of functionalities such as sensing, imaging, classification and recognition. However, the computationally intensive nature of DNNs makes them difficult to apply on resource-limited Internet of Things (IoT) devices. In this paper, we propose a novel pruning-based paradigm that aims to reduce the computational cost of DNNs by uncovering a more compact structure and learning the effective weights therein, without compromising the expressive capability of DNNs. In particular, our algorithm achieves efficient end-to-end training that directly transforms a redundant neural network into a compact one with a specifically targeted compression rate. We comprehensively evaluate our approach on various representative benchmark datasets and compare it with typical advanced convolutional neural network (CNN) architectures. The experimental results verify the superior performance and robust effectiveness of our scheme. For example, when pruning VGG on CIFAR-10, our proposed scheme significantly reduces its FLOPs (floating-point operations) and number of parameters, by 76.2% and 94.1%, respectively, while still maintaining satisfactory accuracy. To sum up, our scheme could facilitate the integration of DNNs into the common machine-learning-based IoT framework and enable distributed training of neural networks in both the cloud and at the edge.
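The paper's scheme learns the compact structure end to end; as a much simpler stand-in, a one-shot magnitude-pruning sketch shows what "pruning to a targeted compression rate" means in code. The layer shape and rate below are illustrative only.

```python
import numpy as np

# Hedged sketch: one-shot magnitude pruning to a target compression rate.
# This is NOT the paper's end-to-end algorithm, only an illustration of
# removing a targeted fraction of the smallest-magnitude weights.

def prune_to_rate(weights, compression_rate):
    """Zero out the smallest-magnitude weights so that roughly
    `compression_rate` of the parameters are removed."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * compression_rate)          # weights to drop
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))                      # a toy dense layer
w_pruned, mask = prune_to_rate(w, 0.941)           # the paper's VGG rate
print(f"kept {mask.mean():.1%} of parameters")
```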


Vibration ◽  
2020 ◽  
Vol 4 (1) ◽  
pp. 49-63
Author(s):  
Waad Subber ◽  
Sayan Ghosh ◽  
Piyush Pandita ◽  
Yiming Zhang ◽  
Liping Wang

Industrial dynamical systems often exhibit multi-scale responses due to material heterogeneity and complex operating conditions. The smallest length-scale of the system's dynamics controls the numerical resolution required to resolve the embedded physics. In practice, however, high numerical resolution is required only in a confined region of the domain where fast dynamics or localized material variability is exhibited, whereas a coarser discretization can be sufficient in the rest of the domain. Partitioning the complex dynamical system into smaller, easier-to-solve problems based on the localized dynamics and material variability can reduce the overall computational cost. The region of interest can be specified based on the localized features of the solution, user interest, and the correlation length of the material properties. For problems where a region of interest is not evident, Bayesian inference can provide a feasible solution. In this work, we employ a Bayesian framework to update the prior knowledge of the localized region of interest using measurements of the system response. Once the region of interest is identified, the localized uncertainty is propagated forward through the computational domain. We demonstrate our framework using numerical experiments on a three-dimensional elastodynamic problem.
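A minimal sketch of the Bayesian update idea, on a 1D grid of candidate region centers with a Gaussian likelihood. The grid, noise level and measurement values are illustrative assumptions, not the paper's 3D elastodynamic formulation.

```python
import numpy as np

# Hedged sketch: update a prior over the location of a localized feature
# (the region of interest) from noisy response measurements.

x = np.linspace(0.0, 1.0, 201)          # candidate centers of the region
prior = np.ones_like(x) / len(x)        # uninformative prior

def posterior(prior, measurements, sigma=0.05):
    """Sequential Bayesian update with a Gaussian measurement model."""
    post = prior.copy()
    for m in measurements:
        post *= np.exp(-0.5 * ((x - m) / sigma) ** 2)   # likelihood
        post /= post.sum()                              # renormalize
    return post

# Hypothetical sensor estimates of where the fast dynamics concentrate.
post = posterior(prior, measurements=[0.62, 0.58, 0.61])
print(f"MAP region center ≈ {x[np.argmax(post)]}")
```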


Author(s):  
Wei Zhang ◽  
Saad Ahmed ◽  
Jonathan Hong ◽  
Zoubeida Ounaies ◽  
Mary Frecker

Different types of active materials have been used to actuate origami-inspired self-folding structures. To model the highly nonlinear deformation and material responses, as well as the coupled field equations and boundary conditions of such structures, high-fidelity models such as finite element (FE) models are needed but are usually computationally expensive, which makes optimization intractable. In this paper, a computationally efficient two-stage optimization framework is developed as a systematic method for the multi-objective design of such multifield self-folding structures, where the deformations are concentrated in crease-like areas, active and passive materials are assumed to behave linearly, and low- and high-fidelity models of the structures can be developed. In Stage 1, low-fidelity models are used to determine the topology of the structure. At the end of Stage 1, a distance measure [Formula: see text] is applied as the metric to determine the best design, which then serves as the baseline design in Stage 2. In Stage 2, designs are further optimized from the baseline design with greatly reduced computing time compared to a full FEA-based topology optimization. The design framework is first described in a general formulation. To demonstrate its efficacy, the framework is implemented in two case studies, namely a three-finger soft gripper actuated using a PVDF-based terpolymer, and a 3D multifield example actuated using both the terpolymer and a magneto-active elastomer (MAE), where the key steps are elaborated in detail, including the variable filter, the metrics to select the best design, the determination of design domains, and the material conversion methods from low- to high-fidelity models. In this paper, analytical models and rigid-body dynamic models are developed as the low-fidelity models for the terpolymer- and MAE-based actuations, respectively, and the FE model of the MAE-based actuation is generalized from previous work.
Additional generalizable techniques to further reduce the computational cost are elaborated. As a result, designs with better overall performance than the baseline design were achieved at the end of Stage 2, with computing times of 15 days for the gripper and 9 days for the multifield example, compared with over 3 and 2 months, respectively, for full FEA-based optimizations. Tradeoffs between the competing design objectives were achieved. In both case studies, the efficacy and computational efficiency of the two-stage optimization framework are successfully demonstrated.
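One common way to reduce a set of multi-objective candidates to a single "best" design, as Stage 1 does with its distance measure, is the normalized distance to the utopia point. The exact metric used by the authors may differ; this is only an illustrative variant with made-up objective values.

```python
import numpy as np

# Hedged sketch: pick the design closest to the utopia point (the vector
# of per-objective minima) after normalizing each objective to [0, 1].

def utopia_distance(objectives):
    """objectives: (n_designs, n_objectives), all to be minimized.
    Returns (index of closest design, distances)."""
    obj = np.asarray(objectives, dtype=float)
    lo, hi = obj.min(axis=0), obj.max(axis=0)
    norm = (obj - lo) / np.where(hi > lo, hi - lo, 1.0)  # scale to [0, 1]
    d = np.linalg.norm(norm, axis=1)                     # distance to origin
    return int(np.argmin(d)), d

# Hypothetical designs: (negative actuation stroke, actuation energy).
designs = [(-3.0, 9.0), (-5.0, 5.0), (-8.0, 12.0)]
best, d = utopia_distance(designs)
print(f"best design index: {best}")
```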


2021 ◽  
Vol 11 (2) ◽  
pp. 23
Author(s):  
Duy-Anh Nguyen ◽  
Xuan-Tu Tran ◽  
Francesca Iacopi

Deep Learning (DL) has contributed to the success of many applications in recent years. These applications range from simple ones, such as recognizing tiny images or simple speech patterns, to highly complex ones, such as playing the game of Go. However, this superior performance comes at a high computational cost, which makes porting DL applications to conventional hardware platforms a challenging task. Many approaches have been investigated, and the Spiking Neural Network (SNN) is one of the promising candidates. SNNs are the third generation of Artificial Neural Networks (ANNs), in which each neuron in the network uses discrete spikes to communicate in an event-based manner. SNNs have the potential advantage of achieving better energy efficiency than their ANN counterparts. While SNN models generally incur a loss of accuracy, new algorithms have helped to close the accuracy gap. For hardware implementations, SNNs have attracted much attention in the neuromorphic hardware research community. In this work, we review the basic background of SNNs, the current state and challenges of training algorithms for SNNs, and the current implementations of SNNs on various hardware platforms.
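The event-based communication described above is commonly modeled with the leaky integrate-and-fire (LIF) neuron: the membrane potential integrates input current, leaks over time, and emits a discrete spike when it crosses a threshold. A minimal sketch with illustrative parameters:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, the standard building
# block of SNNs. Parameter values here are illustrative only.

def lif_simulate(current, dt=1.0, tau=10.0, v_th=1.0, v_reset=0.0):
    """Simulate one LIF neuron; returns the spike times (step indices)."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(current):
        v += dt / tau * (-v + i_in)   # leaky integration of input current
        if v >= v_th:                 # threshold crossing -> spike event
            spikes.append(t)
            v = v_reset               # hard reset after the spike
    return spikes

spikes = lif_simulate(np.full(100, 1.5))  # constant supra-threshold input
print(f"{len(spikes)} spikes; first at step {spikes[0]}")
```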


2016 ◽  
Vol 846 ◽  
pp. 85-90 ◽  
Author(s):  
Mostafa Odabaee ◽  
Emilie Sauret ◽  
Kamel Hooman

The present study explores the CFD analysis of a supercritical carbon dioxide (SCO2) radial-inflow turbine generating 100 kW from a concentrated solar resource at 560 °C with a pressure ratio of 2.2. Two methods of real-gas property estimation were used: a real-gas equation of state, and a real-gas property (RGP) file generating the required table from NIST REFPROP. Comparing the numerical results and time consumption of both methods showed that the equation of state can introduce a significant error into the thermodynamic property prediction. The RGP table method showed very good agreement with NIST REFPROP, at a slightly higher computational cost than the equation-of-state method.
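The RGP-file approach amounts to pre-tabulating properties over (T, p) and interpolating inside the solver rather than evaluating an equation of state. A minimal sketch with bilinear interpolation; the density values below are made-up placeholders, not REFPROP data.

```python
import numpy as np

# Hedged sketch of a property-table lookup. Axes and densities are
# illustrative placeholders for SCO2, not NIST REFPROP values.

T_axis = np.array([700.0, 750.0, 800.0, 850.0])     # K
p_axis = np.array([7.5e6, 10.0e6, 15.0e6])          # Pa
rho_table = np.array([[60.0, 85.0, 140.0],          # rho(T, p), kg/m^3
                      [55.0, 78.0, 128.0],
                      [51.0, 72.0, 118.0],
                      [48.0, 67.0, 110.0]])

def rho_lookup(T, p):
    """Bilinear interpolation of density in the (T, p) table."""
    i = int(np.clip(np.searchsorted(T_axis, T) - 1, 0, len(T_axis) - 2))
    j = int(np.clip(np.searchsorted(p_axis, p) - 1, 0, len(p_axis) - 2))
    tT = (T - T_axis[i]) / (T_axis[i + 1] - T_axis[i])
    tp = (p - p_axis[j]) / (p_axis[j + 1] - p_axis[j])
    r0 = (1 - tT) * rho_table[i, j] + tT * rho_table[i + 1, j]
    r1 = (1 - tT) * rho_table[i, j + 1] + tT * rho_table[i + 1, j + 1]
    return (1 - tp) * r0 + tp * r1

print(f"rho(725 K, 8.75 MPa) ≈ {rho_lookup(725.0, 8.75e6):.1f} kg/m³")
```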


Author(s):  
C. Klein ◽  
S. Reitenbach ◽  
D. Schoenweitz ◽  
F. Wolters

Due to a high degree of complexity and computational effort, overall system simulations of jet engines are typically performed as 0-dimensional thermodynamic performance analyses. Within these simulations, and especially in the early cycle design phase, the use of generic component characteristics is common practice. These characteristics often cannot account for true engine component geometries and operating characteristics, which may cause serious deviations between simulated and actual component and overall system performance. This leads to the approach of multi-fidelity simulation, often referred to as zooming, where single components of the thermodynamic cycle model are replaced by higher-order procedures. This enables the consideration of actual component geometries and performance in an overall system context, so that global optimization goals may be considered in the engine design process. The purpose of this study is to present a fully automated approach for the integration of a 3D-CFD component simulation into a thermodynamic overall system simulation. As a use case, a 0D performance model of the IAE-V2527 engine is combined with a CFD model of the corresponding fan component. The methodology is based on the DLR in-house performance synthesis and preliminary design environment GTlab, combined with the DLR in-house CFD solver TRACE. Both the performance calculation and the CFD simulation are part of a fully automated process chain within the GTlab environment. The exchange of boundary conditions between the different fidelity levels is accomplished by operating both simulation procedures on a central data model, which is one of the essential parts of GTlab. Furthermore, iteration management, progress monitoring and error handling are part of the GTlab process control environment.
Based on the CFD results, comprising fan efficiency, pressure ratio and mass flow, a map scaling methodology as commonly used for engine condition monitoring purposes is applied within the performance simulation. In this way, the operating behavior of the CFD fan model can easily be transferred into the overall system simulation, which consequently leads to a modified operating characteristic of the fan module. For this reason, all other engine components will see a shift in their operating conditions, even in the case of otherwise constant boundary conditions. The described simulation procedure is carried out for characteristic operating conditions of the engine.
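The map scaling step can be sketched as taking the ratio of the CFD result to the generic map at a matched operating point and applying it as a scaler across the map. All numbers below are hypothetical, not the study's data.

```python
# Hedged sketch of component-map scaling: scaling factors are the ratio
# of CFD results to the generic map at the same operating point.

def map_scaling_factors(cfd, map_point):
    """Per-quantity scalers: CFD value / generic-map value."""
    return {k: cfd[k] / map_point[k] for k in cfd}

# Hypothetical generic fan map point vs. the 3D-CFD result at the same
# corrected speed line.
map_point = {"mass_flow": 355.0, "pressure_ratio": 1.68, "efficiency": 0.905}
cfd_result = {"mass_flow": 362.1, "pressure_ratio": 1.71, "efficiency": 0.915}

scalers = map_scaling_factors(cfd_result, map_point)
# Applying the scalers to the map reproduces the CFD values at this point;
# elsewhere on the map they shift the generic characteristic accordingly.
scaled_point = {k: map_point[k] * scalers[k] for k in map_point}
print(scalers)
```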


2014 ◽  
Vol 989-994 ◽  
pp. 2232-2236 ◽  
Author(s):  
Jia Zhi Dong ◽  
Yu Wen Wang ◽  
Feng Wei ◽  
Jiang Yu

Currently, there is an urgent need for indoor positioning technology. Considering the complexity of indoor environments, this paper proposes a new positioning algorithm (N-CHAN) based on an analysis of the time-of-arrival (TOA) positioning error and the channels of the S-V model. It overcomes an obvious shortcoming of the traditional CHAN algorithm, whose accuracy is degraded by non-line-of-sight (NLOS) conditions. Finally, through MATLAB simulation, we demonstrate N-CHAN's superior NLOS performance in the S-V channel model, with centimeter-level positioning accuracy, effectively eliminating the influence of NLOS error on positioning accuracy. Moreover, N-CHAN can effectively improve the positioning accuracy of the system, especially under larger NLOS errors.
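The CHAN algorithm itself is a closed-form two-step weighted least-squares TDOA solution; as a simpler geometric illustration (not N-CHAN), here is a linearized least-squares TOA trilateration with made-up anchor positions.

```python
import numpy as np

# Hedged sketch: TOA positioning by linearized least squares. Anchors and
# the true tag position are illustrative; real systems must also handle
# NLOS bias, which is the problem N-CHAN addresses.

def toa_least_squares(anchors, ranges):
    """Linearize r_i^2 = |x - a_i|^2 against the first anchor and solve."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = r0**2 - ranges[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free TOA ranges
print(toa_least_squares(anchors, ranges))
```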


2021 ◽  
Author(s):  
Carlo Cristiano Stabile ◽  
Marco Barbiero ◽  
Giorgio Fighera ◽  
Laura Dovera

Abstract Optimizing well locations for a green field is critical to mitigating development risks. Performing such workflows with reservoir simulations is very challenging due to the huge computational cost. Proxy models can instead provide accurate estimates at a fraction of the computing time. This study presents an application of new-generation functional proxies to optimize the well locations in a real oil field with respect to the actualized oil production over all the different geological realizations. The proxies are built with Universal Trace Kriging and are functional in time, allowing oil flows to be actualized over the asset lifetime. The proxies are trained on reservoir simulations using randomly sampled well locations. Two proxies are created, for a pessimistic model (P10) and a mid-case model (P50), to capture the geological uncertainties. The optimization step uses the Non-dominated Sorting Genetic Algorithm, with the discounted oil productions of the two proxies as objective functions. An adaptive approach was employed: optimized points found in a first optimization were used to re-train the proxy models, and a second optimization run was performed. The methodology was applied to a real oil reservoir to optimize the locations of four vertical production wells and compared against reference locations. 111 geological realizations were available, in which one relevant uncertainty is the presence of possible compartments. The decision space, represented by the horizontal translation vectors of each well, was sampled using Plackett-Burman and Latin hypercube designs. A first application produced a proxy with poor predictive quality. Redrawing the areas to avoid overlaps and to confine the decision space of each well to one compartment improved the quality. This suggests that the proxy's predictive ability deteriorates in the presence of highly non-linear responses caused by sealing faults or by wells interchanging positions.
We then followed a two-step adaptive approach: a first optimization was performed and the resulting Pareto front was validated with reservoir simulations; to further improve the proxy quality in this region of the decision space, the validated Pareto-front points were added to the initial dataset to retrain the proxy, and the optimization was then rerun. The final well locations were validated on all 111 realizations with reservoir simulations and resulted in an overall increase of the discounted production of about 5% compared with the reference development strategy. The adaptive approach, combined with the functional proxy, proved successful in improving the workflow by purposefully augmenting the training set with data points able to enhance the effectiveness of the optimization step. Each optimization run relies on about 1 million proxy evaluations, which required negligible computational time. The same workflow carried out with standard reservoir simulations would have been practically unfeasible.

