Compact Modeling of a Telecommunication Cabinet

Author(s):  
Aalok Trivedi ◽  
Dereje Agonafer ◽  
Deepak Sivanandan ◽  
Mark Hendrix ◽  
Akbar Sahrapour

Computational Fluid Dynamics (CFD) is widely used in the telecommunication industry to validate experimental data and to obtain both qualitative and quantitative results during product development. A typical outdoor telecommunications cabinet requires modeling a large number of components in order to perform the required air flow and thermal design. Among these components, the heat exchanger is the most critical to thermal performance. The cabinet heat exchanger and the other thermal components make up a complex thermal system, which must be characterized and optimized in a short time frame to support time-to-market requirements. CFD techniques allow system thermal optimization to be completed long before product test data become available. However, the computational model of such a complex thermal system leads to a large mesh count and correspondingly long computation times. The objective of this paper is to present an overview of techniques to minimize the computational time for complex designs such as a heat exchanger used in telecommunication cabinets. The discussion presents the concepts that lead to a compact model of the heat exchanger, reducing the mesh count and thereby the computation time without compromising the acceptability of the results. The model can be simplified further by identifying the components that significantly affect the physics of the problem and eliminating those whose removal does not adversely affect either the fluid mechanics or the heat transfer; this further reduces the mesh density. Compact modeling, selective meshing, and replacing sub-components with simplified equivalent models all help reduce the overall model size. The model thus developed is compared to a benchmark case without the compact model. Because the validity of compact models cannot be generalized, the methodology is expected to address this particular class of problems in telecommunications systems. The CFD code FLOTHERM™ by Flomerics is used to carry out the analysis.
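
The abstract does not give the compact formulation itself; as a hedged illustration of the general idea, the Python sketch below replaces a detailed heat-exchanger core with a fitted quadratic flow resistance and a counterflow ε-NTU thermal model, which is the typical shape of such compact surrogates. The sample pressure-drop data, UA value, and flow rates are assumptions for the sketch, not values from the paper.

```python
import numpy as np

# Hypothetical pressure-drop samples from a detailed CFD run of the heat-exchanger
# core (face velocity in m/s, pressure drop in Pa); not data from the paper.
v_samples = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
dp_samples = np.array([4.0, 11.0, 21.0, 34.0, 50.0])

# Fit the quadratic loss law dP = a*v + b*v**2 that a lumped flow-resistance
# object typically uses in place of the detailed core geometry.
A = np.column_stack([v_samples, v_samples**2])
a, b = np.linalg.lstsq(A, dp_samples, rcond=None)[0]

def compact_pressure_drop(v):
    """Pressure drop (Pa) of the lumped heat-exchanger flow resistance."""
    return a * v + b * v**2

def effectiveness_ntu(UA, m_dot_hot, m_dot_cold, cp=1006.0):
    """Counterflow effectiveness-NTU model standing in for the detailed fin geometry."""
    C_hot, C_cold = m_dot_hot * cp, m_dot_cold * cp
    C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
    Cr, ntu = C_min / C_max, UA / C_min
    if np.isclose(Cr, 1.0):
        return ntu / (1.0 + ntu)
    return (1.0 - np.exp(-ntu * (1.0 - Cr))) / (1.0 - Cr * np.exp(-ntu * (1.0 - Cr)))

# assumed UA and mass flow rates, purely to exercise the compact model
print("dP at 1.2 m/s: %.1f Pa" % compact_pressure_drop(1.2))
print("effectiveness: %.2f" % effectiveness_ntu(UA=150.0, m_dot_hot=0.12, m_dot_cold=0.10))
```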

2018 ◽  
Author(s):  
Michael W. Hast ◽  
Brett G. Hanson ◽  
Josh R. Baxter

Abstract. Modeling joint contact is necessary to test many questions using simulation paradigms, but this portion of OpenSim is not well understood. The purpose of this study was to provide a guide for implementing a validated elastic foundation contact model in OpenSim. First, the load-displacement properties of a stainless steel ball bearing and an ultra-high-molecular-weight polyethylene (UHMWPE) slab were recorded during a controlled physical experiment. These geometries were imported into OpenSim and contact mechanics were modeled with the on-board elastic foundation algorithm. Particle swarm optimization was performed to determine the elastic foundation model stiffness (2.14×10¹¹ ± 6.81×10⁹ N/m) and dissipation constant (0.999 ± 0.003). Estimations of contact forces compared favorably with blinded experimental data (root mean square error: 87.58 ± 1.57 N). Last, total knee replacement geometry was used to perform a sensitivity analysis of material stiffness and mesh density with regard to penetration depth and computational time. These simulations demonstrated that material stiffnesses between 10¹¹ and 10¹² N/m resulted in realistic penetrations (< 0.15 mm) when subjected to 981 N loads. Material stiffnesses between 10¹³ and 10¹⁵ N/m increased computation time by factors of 12–23. This study shows the utility of performing a simple physical experiment to tune model parameters when physical components of orthopaedic implants are not available to the researcher. It also demonstrates the efficacy of employing the on-board elastic foundation algorithm to create realistic simulations of contact between orthopaedic implants.
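
The study tunes the elastic foundation stiffness and dissipation by particle swarm optimization against measured load-displacement data. The sketch below shows a minimal, self-contained PSO loop of that kind in Python; `simulate_contact_force` is a hypothetical stand-in for the forward OpenSim simulation, and the swarm hyperparameters and bounds are assumptions rather than the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_contact_force(stiffness, dissipation):
    """Hypothetical stand-in for the forward simulation (e.g., an OpenSim run with
    the elastic foundation model) that returns predicted contact forces for the
    experimental loading; a toy response keeps the sketch self-contained."""
    depth = np.linspace(0.0, 0.2e-3, 50)             # penetration depth (m)
    return stiffness * depth * (1.0 + dissipation)    # toy elastic + dissipative term

measured = simulate_contact_force(2.14e11, 0.999)     # stands in for experimental data

def rmse(params):
    k, d = params
    return np.sqrt(np.mean((simulate_contact_force(k, d) - measured) ** 2))

# Minimal particle swarm over (stiffness, dissipation); bounds are assumptions.
lo, hi = np.array([1e10, 0.0]), np.array([1e12, 1.0])
pos = rng.uniform(lo, hi, size=(30, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([rmse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = np.array([rmse(p) for p in pos])
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("tuned stiffness %.3e N/m, dissipation %.3f" % (gbest[0], gbest[1]))
```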


2021 ◽  
Vol 5 (3) ◽  
pp. 36
Author(s):  
Leilei Dong ◽  
Italo Mazzarino ◽  
Alessio Alexiadis

A comprehensive review is carried out of the models and correlations for solid/fluid reactions, which arise from a complex, multi-scale physicochemical process. Simulating this process with CFD requires various complicated submodels and significant computational time, which often makes it undesirable and impractical in industrial activities requiring a quick solution within a limited time frame, such as new product/process design, feasibility studies, and the evaluation or optimization of existing processes. In these circumstances, the models and correlations developed over the last few decades are of significant relevance and become a useful simulation tool. However, despite the increasing research interest in this area over the last thirty years, no comprehensive review is available. This paper is thus motivated to review the models developed so far and to provide guidance on selecting models and correlations for specific applications, helping engineers and researchers choose the most appropriate model for a feasible solution. The review is therefore also of practical relevance to professionals who need to perform engineering design or simulation work. The areas needing further development in solid–fluid reaction modelling are also identified and discussed.


2003 ◽  
Vol 125 (3) ◽  
pp. 319-324 ◽  
Author(s):  
C. B. Coetzer ◽  
J. A. Visser

This paper introduces a compact model to predict the interfin velocity and the resulting pressure drop across a longitudinal-fin heat sink with tip bypass. The compact model is based on results obtained from a comprehensive CFD study of the behavior of both laminar and turbulent flow in longitudinal-fin heat sinks with tip bypass. The new compact flow prediction model is critically compared with existing compact models as well as with the results of the CFD simulations. The results indicate that the new compact model improves the accuracy of the predicted pressure drop by at least 4.5% over the wide range of heat sink geometries and Reynolds numbers simulated. The improved accuracy of the velocity distribution between the fins also increases the accuracy of the heat transfer coefficients calculated for the heat sinks.
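
The paper's correlation is not reproduced in the abstract; the sketch below only illustrates the flow-balance idea such compact bypass models are typically built on: the interfin velocity is found by requiring equal pressure drop through the fin channels and the bypass region while conserving mass. The geometry, the laminar parallel-plate friction law, and the bypass loss coefficient are all assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed duct and heat-sink geometry (m) and air properties; not the cases of the paper.
rho, mu = 1.1614, 1.85e-5
W_duct, H_duct = 0.10, 0.05                     # duct cross-section
W_hs, H_fin, t_fin, n_fin, L = 0.06, 0.03, 0.001, 10, 0.05

A_duct = W_duct * H_duct
gap = (W_hs - n_fin * t_fin) / (n_fin - 1)      # interfin gap
A_fin = (n_fin - 1) * gap * H_fin               # interfin flow area
A_byp = A_duct - W_hs * H_fin                   # bypass flow area

def dp_channel(v):
    """Developed laminar pressure drop in a parallel-plate channel of width `gap`."""
    Dh = 2.0 * gap
    Re = rho * v * Dh / mu
    return (96.0 / Re) * (L / Dh) * 0.5 * rho * v**2

def dp_bypass(v):
    """Crude bypass loss, modelled as a fraction of the dynamic pressure."""
    return 0.2 * 0.5 * rho * v**2               # 0.2 is an assumed loss coefficient

def residual(v_fin, v_in):
    # mass conservation fixes the bypass velocity once the interfin velocity is chosen
    v_byp = (v_in * A_duct - v_fin * A_fin) / A_byp
    return dp_channel(v_fin) - dp_bypass(v_byp)  # equal-pressure-drop condition

v_in = 2.0                                       # approach velocity (m/s)
v_fin = brentq(residual, 1e-3, v_in, args=(v_in,))
print("interfin velocity %.2f m/s, pressure drop %.2f Pa" % (v_fin, dp_channel(v_fin)))
```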


2022 ◽  
Vol 16 (1) ◽  
pp. 0-0

A secure and efficient authentication mechanism is a major concern in cloud computing because data are shared between the cloud server and the user over the Internet. This paper proposes an efficient Hashing, Encryption, and Chebyshev (HEC)-based authentication scheme to secure data communication. Formal and informal security analyses demonstrate that the proposed HEC-based authentication approach provides data security in the cloud more efficiently. The proposed approach addresses the security issues and ensures the privacy and data security of the cloud user. Moreover, the HEC-based authentication approach makes the system more robust and secure, and it has been verified against multiple scenarios. The proposed approach also requires less computational time and memory than existing authentication techniques: for a data size of 100 KB, it requires 26 ms of computation time and 1878 bytes of memory.
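
The abstract does not spell out the HEC construction; as a hedged illustration of its Chebyshev ingredient, the sketch below shows the standard Chebyshev chaotic-map key agreement, which relies on the semigroup property T_a(T_b(x)) = T_ab(x) over Z_p, followed by hashing of the shared value into a session key. The modulus P, seed X, and use of SHA-256 are demo assumptions, not parameters from the paper.

```python
import hashlib
import secrets

# Assumed public parameters: a large prime modulus and a public seed value x
# for the extended Chebyshev map over Z_p (demo values only).
P = 2**127 - 1
X = 123456789

def _mat_mult(A, B, p):
    return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % p,
             (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % p],
            [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % p,
             (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % p]]

def chebyshev(n, x, p=P):
    """Extended Chebyshev polynomial T_n(x) mod p in O(log n) time, using the
    recurrence T_k = 2x*T_{k-1} - T_{k-2} written as a 2x2 matrix power."""
    if n == 0:
        return 1 % p
    result = [[1, 0], [0, 1]]
    M = [[(2 * x) % p, p - 1], [1, 0]]
    k = n - 1
    while k:
        if k & 1:
            result = _mat_mult(result, M, p)
        M = _mat_mult(M, M, p)
        k >>= 1
    return (result[0][0] * (x % p) + result[0][1]) % p

# Key agreement: the semigroup property T_a(T_b(x)) = T_ab(x) lets the user and
# the cloud server derive the same secret from exchanged public values.
a = secrets.randbelow(P - 2) + 2            # user's secret exponent
b = secrets.randbelow(P - 2) + 2            # server's secret exponent
Ta, Tb = chebyshev(a, X), chebyshev(b, X)   # public values exchanged over the network

k_user = chebyshev(a, Tb)                   # computed on the user side
k_server = chebyshev(b, Ta)                 # computed on the server side
assert k_user == k_server

session_key = hashlib.sha256(str(k_user).encode()).hexdigest()
print("shared session key:", session_key[:16], "...")
```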


2004 ◽  
Vol 126 (2) ◽  
pp. 247-255 ◽  
Author(s):  
Duckjong Kim ◽  
Sung Jin Kim

In the present work, a compact modeling method based on a volume-averaging technique is presented and applied to the analysis of fluid flow and heat transfer in straight-fin heat sinks. In this study, the straight-fin heat sink is modeled as a porous medium through which fluid flows. The volume-averaged momentum and energy equations for developing flow in these heat sinks are obtained using the local volume-averaging method. The permeability and the interstitial heat transfer coefficient required to solve these equations are determined analytically from forced convective flow between infinite parallel plates. To validate the compact model proposed in this paper, three aluminum straight-fin heat sinks having a base size of 101.43 mm × 101.43 mm are tested with inlet velocities ranging from 0.5 m/s to 2 m/s. In the experimental investigation, the heat sink is heated uniformly at the bottom. The resulting pressure drop across the heat sink and the temperature distribution at its bottom are measured and compared with those obtained through the porous medium approach. Upon comparison, the porous medium approach is shown to accurately predict the pressure drop and heat transfer characteristics of straight-fin heat sinks. In addition, the results indicate that the entrance effect should be considered in the thermal design of heat sinks when Re·Dh/L > ~O(10).
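
As a hedged numerical illustration of the porous-medium view, the sketch below replaces the fin array by a Darcy resistance whose permeability follows from fully developed flow between parallel plates, and evaluates the entrance-effect parameter Re·Dh/L mentioned above. The geometry, air properties, and the assumption of fully ducted flow are illustrative choices, not the paper's test cases.

```python
# Minimal porous-medium sketch of a straight-fin heat sink: the fin array is
# replaced by a Darcy resistance built from the parallel-plate channel solution.
rho, mu = 1.1614, 1.85e-5          # air density (kg/m^3) and viscosity (Pa*s)
w_c, t_fin, L = 2e-3, 1e-3, 0.1    # channel gap, fin thickness, flow length (m)
u_in = 1.0                          # approach (superficial) velocity, m/s; fully ducted

eps = w_c / (w_c + t_fin)           # porosity of the fin array
u_m = u_in / eps                    # mean interfin velocity
K = eps * w_c**2 / 12.0             # assumed parallel-plate permeability

dp_darcy = mu * u_in * L / K        # volume-averaged (Darcy) pressure drop, Pa

D_h = 2.0 * w_c                     # hydraulic diameter of a parallel-plate channel
Re = rho * u_m * D_h / mu
entrance_parameter = Re * D_h / L   # abstract: entrance effects matter when > ~O(10)

print(f"pressure drop ~ {dp_darcy:.1f} Pa, Re*Dh/L = {entrance_parameter:.1f}")
```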


2010 ◽  
Vol 3 (6) ◽  
pp. 1555-1568 ◽  
Author(s):  
B. Mijling ◽  
O. N. E. Tuinder ◽  
R. F. van Oss ◽  
R. J. van der A

Abstract. The Ozone Profile Algorithm (OPERA), developed at KNMI, retrieves the vertical ozone distribution from nadir spectral satellite measurements of backscattered sunlight in the ultraviolet and visible wavelength range. To produce consistent global datasets the algorithm needs good global performance, while a short computation time facilitates the use of the algorithm in near-real-time applications. To test the global performance of the algorithm we look at the convergence behaviour as a diagnostic tool of the ozone profile retrievals from the GOME instrument (on board ERS-2) for February and October 1998. In this way, we uncover different classes of retrieval problems, related to the South Atlantic Anomaly, low cloud fractions over deserts, desert dust outflow over the ocean, and the intertropical convergence zone. The influence of the first guess and of the external input data, including the ozone cross-sections and the ozone climatologies, on the retrieval performance is also investigated. By using a priori ozone profiles selected on the expected total ozone column, retrieval problems due to anomalous ozone distributions (such as in the ozone hole) can be avoided. By applying these algorithm adaptations the convergence statistics improve considerably, not only increasing the number of successful retrievals, but also reducing the average computation time owing to fewer iteration steps per retrieval. For February 1998, non-convergence was brought down from 10.7% to 2.1%, while the mean number of iteration steps (which dominates the computational time) dropped 26%, from 5.11 to 3.79.


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. V1-V9 ◽  
Author(s):  
Zhonghuan Chen ◽  
Sergey Fomel ◽  
Wenkai Lu

When plane-wave destruction (PWD) is implemented by implicit finite differences, the local slope is estimated by an iterative algorithm. We propose an analytical estimator of the local slope based on a convergence analysis of the iterative algorithm. Using the analytical estimator, we design a noniterative method to estimate slopes with a three-point PWD filter. Compared with the iterative estimation, the proposed method needs only one regularization step, which reduces computation time significantly. With directional decoupling of the plane-wave filter, the proposed algorithm is also applicable to 3D slope estimation. We present synthetic and field experiments to demonstrate that the proposed algorithm yields correct estimates in less computational time.
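
The analytical estimator itself is not given in the abstract; for orientation, the sketch below implements a generic noniterative local-slope estimate obtained by solving the plane-wave equation u_x + σ·u_t = 0 in a least-squares sense over a small window, with one smoothing pass playing the role of the single regularization step. It is not the three-point implicit PWD filter of the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def local_slope(d, dt=1.0, dx=1.0, window=5):
    """Noniterative local-slope estimate from the plane-wave equation
    u_x + sigma * u_t = 0, solved in a least-squares sense over a small
    window; a generic estimator for illustration only."""
    u_t = np.gradient(d, dt, axis=0)              # derivative along time samples
    u_x = np.gradient(d, dx, axis=1)              # derivative across traces
    num = -u_x * u_t
    den = u_t * u_t + 1e-12
    k = np.ones((window, window))
    # one local smoothing pass stands in for the single regularization step
    return convolve(num, k, mode="nearest") / convolve(den, k, mode="nearest")

# synthetic section with one dipping event: d(t, x) = ricker(t - slope * x)
nt, nx, slope_true = 200, 60, 0.4                 # slope in samples per trace
t = np.arange(nt)[:, None] - slope_true * np.arange(nx)[None, :]
d = (1.0 - 0.005 * (t - 100.0) ** 2) * np.exp(-0.0025 * (t - 100.0) ** 2)
slopes = local_slope(d)
print("estimated slope near the event: %.2f (true %.2f)" % (slopes[110, 25], slope_true))
```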


Author(s):  
Jérôme Limido ◽  
Mohamed Trabia ◽  
Shawoon Roy ◽  
Brendan O’Toole ◽  
Richard Jennings ◽  
...  

A series of experiments was performed at the University of Nevada, Las Vegas (UNLV) Center for Materials and Structures to study the plastic deformation of metallic plates under hypervelocity impact using a two-stage light gas gun. In these experiments, cylindrical Lexan projectiles were fired at A36 steel target plates at velocities of 4.5–6.0 km/s. The experiments were designed to produce a front-side impact crater and a permanent bulging deformation on the back surface of the target without complete perforation of the plates. Free-surface velocities on the back surface of the target plate were measured using the newly developed Multiplexed Photonic Doppler Velocimetry (MPDV) system. To simulate such experiments, a Lagrangian smoothed-particle hydrodynamics (SPH) approach is typically used to avoid the problems associated with mesh instability. Despite their intrinsic capability for simulating violent impacts, particle methods have a few drawbacks that may considerably affect their accuracy and performance, including lack of interpolation completeness, tensile instability, and spurious pressures. Moreover, computational time is a strong limitation that often necessitates the use of reduced 2D axisymmetric models. To address these shortcomings, the IMPETUS Afea Solver® implements a newly developed SPH formulation that resolves the problems of spurious pressures and tensile instability. The algorithm takes full advantage of GPU technology to parallelize the computation and opens the door to running large 3D models (20,000,000 particles). The combination of accurate algorithms and drastically reduced computation time now makes it possible to run a high-fidelity hypervelocity impact model.
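
For readers unfamiliar with particle methods, the sketch below shows the basic building blocks of a standard SPH discretization, the cubic-spline kernel and the summation density estimate, on a toy block of steel-like particles. It is a generic textbook formulation, not the solver's improved SPH algorithm; the particle spacing, smoothing length, and reference density are assumptions.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3-D cubic-spline SPH kernel W(r, h) with support radius 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    return sigma * np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q <= 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

def summation_density(positions, masses, h):
    """Basic SPH density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# toy block of steel-like particles; an interior particle should roughly recover
# the assumed bulk density (boundary particles read low because their kernel
# support is truncated at the free surface)
spacing, rho_ref = 1.0e-3, 7850.0                   # particle spacing (m), density
grid = np.mgrid[0:10, 0:10, 0:10].reshape(3, -1).T * spacing
masses = np.full(len(grid), rho_ref * spacing**3)   # equal particle masses
rho = summation_density(grid, masses, h=1.3 * spacing)
print("SPH density at an interior particle: %.0f kg/m^3" % rho[555])
```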


Jurnal INKOM ◽  
2014 ◽  
Vol 8 (1) ◽  
pp. 29 ◽  
Author(s):  
Arnida Lailatul Latifah ◽  
Adi Nurhadiyatna

This paper proposes parallel algorithms for the precipitation input of flood modelling, in particular for the spatial distribution of rainfall. As an important input to flood modelling, the spatial distribution of rainfall is always needed as a precondition of the model. Two interpolation methods, inverse distance weighting (IDW) and ordinary kriging (OK), are discussed. Both are implemented as parallel algorithms in order to reduce the computational time. To measure the computational efficiency, the performance of the parallel algorithms is compared with that of the serial algorithms for both methods. Findings indicate that: (1) the computation time of the OK algorithm is up to 23% longer than that of IDW; (2) the computation time of the OK and IDW algorithms increases linearly with the number of cells/points; (3) the computation time of the parallel algorithms for both methods decays exponentially with the number of processors, with a decay factor of 0.52 for the parallel IDW algorithm and 0.53 for OK; and (4) the parallel algorithms achieve near-ideal speed-up.
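
A minimal data-parallel IDW sketch is shown below: the output grid is split into chunks and each worker interpolates its own block, which is the same decomposition the kriging variant can use. The station coordinates, rainfall values, grid, and worker count are hypothetical; the speed-up figures quoted above come from the paper's own experiments, not from this sketch.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Hypothetical rain-gauge data: station coordinates (km) and rainfall (mm).
rng = np.random.default_rng(0)
stations = rng.uniform(0, 100, size=(50, 2))
rainfall = rng.uniform(0, 80, size=50)

def idw_block(grid_points, power=2.0, eps=1e-9):
    """Inverse-distance-weighted estimate for one block of grid cells."""
    d = np.linalg.norm(grid_points[:, None, :] - stations[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)
    return (w * rainfall[None, :]).sum(axis=1) / w.sum(axis=1)

def idw_parallel(grid_points, n_workers=4):
    """Data-parallel IDW: split the output grid into chunks, one per worker."""
    chunks = np.array_split(grid_points, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return np.concatenate(list(pool.map(idw_block, chunks)))

if __name__ == "__main__":
    gx, gy = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    field = idw_parallel(grid).reshape(gx.shape)
    print("interpolated rainfall range: %.1f - %.1f mm" % (field.min(), field.max()))
```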


2021 ◽  
Author(s):  
Brett W. Larsen ◽  
Shaul Druckmann

Abstract. Lateral and recurrent connections are ubiquitous in biological neural circuits. The strong computational abilities of feedforward networks have been extensively studied; on the other hand, while certain roles for lateral and recurrent connections in specific computations have been described, a more complete understanding of the role and advantages of recurrent computations that might explain their prevalence remains an important open challenge. Previous key studies by Minsky and later by Roelfsema argued that the sequential, parallel computations for which recurrent networks are well suited can be highly effective approaches to complex computational problems. Such “tag propagation” algorithms perform repeated, local propagation of information and were introduced in the context of detecting connectedness, a task that is challenging for feedforward networks. Here, we advance the understanding of the utility of lateral and recurrent computation by first performing a large-scale empirical study of neural architectures for the computation of connectedness to explore feedforward solutions more fully and establish robustly the importance of recurrent architectures. In addition, we highlight a tradeoff between computation time and performance and demonstrate hybrid feedforward/recurrent models that perform well even in the presence of varying computational time limitations. We then generalize tag propagation architectures to multiple, interacting propagating tags and demonstrate that these are efficient computational substrates for more general computations by introducing and solving an abstracted biologically inspired decision-making task. More generally, our work clarifies and expands the set of computational tasks that can be solved efficiently by recurrent computation, yielding hypotheses for structure in population activity that may be present in such tasks.

Author Summary. Lateral and recurrent connections are ubiquitous in biological neural circuits; intriguingly, this stands in contrast to the majority of current-day artificial neural network research, which primarily uses feedforward architectures except in the context of temporal sequences. This raises the possibility that part of the difference in computational capabilities between real neural circuits and artificial neural networks is accounted for by the role of recurrent connections, and as a result a more detailed understanding of the computational role played by such connections is of great importance. Making effective comparisons between architectures is a subtle challenge, however, and in this paper we leverage the computational capabilities of large-scale machine learning to robustly explore how differences in architectures affect a network’s ability to learn a task. We first focus on the task of determining whether two pixels are connected in an image, which has an elegant and efficient recurrent solution: propagate a connected label or tag along paths. Inspired by this solution, we show that it can be generalized in many ways, including propagating multiple tags at once and changing the computation performed on the result of the propagation. To illustrate these generalizations, we introduce an abstracted decision-making task related to foraging in which an animal must determine whether it can avoid predators in a random environment. Our results shed light on the set of computational tasks that can be solved efficiently by recurrent computation and how these solutions may appear in neural activity.
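
As a concrete, hedged illustration of the tag-propagation idea discussed above, the sketch below repeatedly copies a tag to neighbouring foreground pixels and reports whether it reaches a target pixel; it captures the repeated, local propagation of information, not the trained feedforward or recurrent networks of the study.

```python
import numpy as np

def connected(image, source, target, n_steps=None):
    """Tag propagation for the connectedness task: start a tag at `source`,
    repeatedly copy it to 4-neighbouring foreground pixels, and report whether
    it ever reaches `target`. A minimal sketch of the recurrent, local update."""
    fg = image.astype(bool)
    tag = np.zeros_like(fg)
    tag[source] = fg[source]
    n_steps = n_steps or image.size             # enough iterations to cover the image
    for _ in range(n_steps):
        spread = tag.copy()
        spread[1:, :] |= tag[:-1, :]            # propagate down
        spread[:-1, :] |= tag[1:, :]            # propagate up
        spread[:, 1:] |= tag[:, :-1]            # propagate right
        spread[:, :-1] |= tag[:, 1:]            # propagate left
        spread &= fg                            # tags live only on foreground pixels
        if spread[target]:
            return True
        if np.array_equal(spread, tag):         # converged without reaching target
            return False
        tag = spread
    return bool(tag[target])

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0],
                [0, 0, 1, 1]])
print(connected(img, (0, 0), (3, 3)))   # True: a path of 1s connects the two corners
print(connected(img, (0, 0), (0, 3)))   # False: the column of 0s blocks the path
```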

