An Efficient Implementation of Polymer Viscoelastic Behavior Through a Pseudo Viscoelastic Model

2011 ◽  
Vol 8 (2) ◽  
pp. 83-87
Author(s):  
Sathyanarayanan Raghavan ◽  
Raphael. I. Okereke ◽  
Suresh K. Sitaraman

Modeling the viscoelastic relaxation of polymer materials is important for understanding the thermo-mechanical behavior of organic microelectronic systems. However, incorporating viscoelastic behavior into numerical models makes them compute-intensive. This paper presents a technique for incorporating polymer viscoelastic behavior into numerical models such that computation time is not adversely affected and the accuracy of the results is not compromised. In the proposed “pseudo viscoelastic” modeling technique, the modulus of the viscoelastic material is computed as a function of the time and temperature loading history outside of the finite-element simulation, and is then input into the simulation as a thermo-elastic material that incorporates the viscoelastic relaxation of the material. This paper compares the warpage results obtained through the proposed technique against a complete viscoelastic simulation model and experimental data; the maximum warpage predicted using the proposed technique agrees to within 10% with the results obtained from a “full” viscoelastic model. It is also shown through some of our simulations that the proposed technique can reduce computation time by more than 50% and hard disk space usage by 65%.
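As a minimal sketch of the pre-computation step the abstract describes, the relaxation modulus could be evaluated from a Prony series outside the finite-element solver and tabulated over the loading history. The coefficients below are illustrative placeholders, not the paper's fitted values:

```python
import math

def relaxation_modulus(t, E_inf, prony):
    """Prony series: E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    return E_inf + sum(E_i * math.exp(-t / tau_i) for E_i, tau_i in prony)

# Illustrative coefficients (MPa, s); not the paper's fitted values.
E_inf = 500.0
prony = [(1500.0, 1.0), (800.0, 100.0)]

# Tabulated outside the FE run over the time/temperature loading history;
# each entry can then enter the simulation as an ordinary elastic modulus.
table = [(t, relaxation_modulus(t, E_inf, prony))
         for t in (0.0, 1.0, 10.0, 1000.0)]
```

Each tabulated (time, modulus) pair plays the role of a thermo-elastic material input, which is what removes the hereditary integral from the simulation itself.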

Water ◽  
2021 ◽  
Vol 13 (23) ◽  
pp. 3435
Author(s):  
Boram Kim ◽  
Kwang Seok Yoon ◽  
Hyung-Jun Kim

In this study, a CUDA Fortran-based GPU-accelerated Laplace equation model was developed and applied to several cases. The Laplace equation is one of the equations that can physically describe groundwater flow, and it admits analytical solutions. Such numerical models require a large amount of data to reproduce the flow with high accuracy, and therefore require substantial computation time. To shorten the computation time with CUDA technology, large-scale parallel computations were performed on the GPU, and the program was written to reduce the number of data transfers between the CPU and GPU. A GPU consists of many ALUs specialized for graphics processing and, using these ALUs, can perform more concurrent computations than a CPU. The computation results of the GPU-accelerated model were compared with the analytical solution of the Laplace equation to verify their accuracy, and they were in good agreement with it. As the number of grid cells increased, the computation time of the GPU-accelerated model gradually decreased relative to that of the CPU-based Laplace equation model. As a result, the computation time of the GPU-accelerated Laplace equation model was reduced by a factor of up to about 50.
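The paper's CUDA Fortran kernels are not shown in the abstract; as a CPU reference for the underlying numerics, a Jacobi sweep for the 2D Laplace equation updates every interior point independently, which is exactly what maps to one GPU thread per point. A sketch with an assumed Dirichlet test case whose analytical solution is u(x, y) = x:

```python
import numpy as np

def jacobi_laplace(u, tol=1e-8, max_iter=20000):
    """Solve the interior of the 2D Laplace equation by Jacobi iteration.
    Boundary values are held fixed; on a GPU, each interior point would be
    updated by one thread per sweep."""
    u = u.copy()
    for _ in range(max_iter):
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(new - u)) < tol:
            return new
        u = new
    return u

# Test case: u(x, y) = x is harmonic, so the solver should reproduce it.
n = 33
x = np.linspace(0.0, 1.0, n)
exact = np.tile(x, (n, 1))
u0 = np.zeros((n, n))
u0[0, :], u0[-1, :] = exact[0, :], exact[-1, :]
u0[:, 0], u0[:, -1] = exact[:, 0], exact[:, -1]
u = jacobi_laplace(u0)
```

Because every sweep reads only the previous iterate, no intermediate results need to travel between host and device; only the converged field does, which is the data-transfer saving the study emphasizes.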


Author(s):  
Yousof Azizi ◽  
Patricia Davies ◽  
Anil K. Bajaj

Flexible polyethylene foam is used in many engineering applications. It exhibits nonlinear and viscoelastic behavior, which makes it difficult to model. To date, several models have been developed to characterize the complex behavior of foams, ranging from computationally intensive microstructural models to continuum models that capture the macroscale behavior of foam materials. In this research, a nonlinear viscoelastic model, which extends previously developed models, is proposed, and its ability to capture foam response in uniaxial compression is investigated. It is hypothesized that the total stress can be decomposed into the sum of a nonlinear elastic component, modeled by a higher-order polynomial, and a nonlinear hereditary-type viscoelastic component. System identification procedures were developed to estimate the model parameters using uniaxial cyclic compression data from experiments conducted at six different rates. The model parameters estimated for the individual tests were used to develop a model whose parameters are functions of strain rate. The parameter estimation technique was then modified to develop a comprehensive model that captures the uniaxial behavior of all six tests. The performance of this model was compared to that of other nonlinear viscoelastic models.
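A minimal sketch of the hypothesized stress decomposition, with an assumed single-exponential hereditary kernel and illustrative polynomial coefficients (the paper's identified parameters are not reproduced here):

```python
import math

def foam_stress(strain, dt, poly, k0, tau):
    """Total stress = polynomial elastic part + hereditary viscoelastic part:
    sigma = sum_j poly[j]*eps**(j+1) + integral of k0*exp(-(t-s)/tau)*deps/ds.
    The convolution is accumulated recursively (illustrative kernel)."""
    stresses, h, prev = [], 0.0, strain[0]
    for eps in strain:
        h = h * math.exp(-dt / tau) + k0 * (eps - prev)   # running integral
        elastic = sum(c * eps ** (j + 1) for j, c in enumerate(poly))
        stresses.append(elastic + h)
        prev = eps
    return stresses

# Slow ramp to 10% compression; all parameter values are illustrative.
strain = [i * 0.001 for i in range(101)]
sigma = foam_stress(strain, dt=0.01, poly=[2.0, 0.0, 5.0], k0=1.5, tau=0.5)
```

System identification would then fit `poly`, `k0`, and `tau` to cyclic compression data at each strain rate, as the abstract describes.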


2021 ◽  
Author(s):  
Maha Mdini ◽  
Takemasa Miyoshi ◽  
Shigenori Otsuka

In the era of modern science, scientists have developed numerical models to predict and understand weather and ocean phenomena based on fluid dynamics. While these models have shown high accuracy at kilometer scales, they are operated on massive computer resources because of their computational complexity. In recent years, new approaches to solving these models based on machine learning have been put forward. The results suggest that it may be possible to reduce the computational complexity by using Neural Networks (NNs) instead of classical numerical simulations. In this project, we aim to shed light on different ways of accelerating physical models using NNs. We test two approaches, the Data-Driven Statistical Model (DDSM) and the Hybrid Physical-Statistical Model (HPSM), and compare their performance to the classical Process-Driven Physical Model (PDPM). The DDSM emulates the physical model by a NN. The HPSM, also known as super-resolution, uses a low-resolution version of the physical model and maps its outputs to the original high-resolution domain via a NN. To evaluate these two methods, we measured their accuracy and their computation time. Our results from idealized experiments with a quasi-geostrophic model [SO3] show that the HPSM reduces the computation time by a factor of 3 and is capable of predicting the output of the physical model with high accuracy up to 9.25 days ahead. The DDSM reduces the computation time by a factor of 4 but can predict the physical model output with acceptable accuracy only within 2 days. These first results are promising and imply the possibility of bringing complex physical models into real-time systems with lower-cost computer resources in the future.
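The two pipelines can be sketched in toy form, assuming a periodic diffusion step as a stand-in physical model and naive nearest-neighbour upscaling where the paper trains a NN; everything here is an illustrative placeholder, not the quasi-geostrophic setup:

```python
import numpy as np

def diffuse(u, steps=1):
    """Explicit periodic diffusion step: a stand-in for the physical model."""
    for _ in range(steps):
        u = u + 0.1 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    return u

def upscale(u, factor=2):
    """Stand-in for the trained NN mapping low-res output to high-res."""
    return np.kron(u, np.ones((factor, factor)))

# PDPM: run the model at full resolution.
hi = np.zeros((16, 16)); hi[8, 8] = 1.0
pdpm = diffuse(hi, steps=4)

# HPSM: run the model at half resolution, then map back up.
lo = hi[::2, ::2]
hpsm = upscale(diffuse(lo, steps=4), factor=2)
```

The speed-up comes from the low-resolution run touching a quarter of the grid points; the learned mapping is what recovers the fine-scale structure that this crude upscaling cannot.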


2022 ◽  
Vol 16 (1) ◽  
pp. 0-0

A secure and efficient authentication mechanism is a major concern in cloud computing, owing to the data shared between cloud server and user over the internet. This paper proposes an efficient Hashing, Encryption and Chebyshev (HEC)-based authentication scheme to secure data communication. Formal and informal security analyses demonstrate that the proposed HEC-based authentication approach provides data security in the cloud more efficiently. The proposed approach addresses the security issues and ensures the privacy and data security of the cloud user. Moreover, the proposed HEC-based authentication approach makes the system more robust and secure, and it has been verified in multiple scenarios. The proposed authentication approach also requires less computation time and memory than existing authentication techniques: for a data size of 100 Kb, its computation time and memory usage are measured as 26 ms and 1878 bytes, respectively.
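The abstract does not give the protocol details; Chebyshev-based authentication schemes generally rest on the semigroup property T_a(T_b(x)) = T_ab(x), which supports a Diffie-Hellman-style key agreement. A toy real-valued sketch (practical schemes use the extended Chebyshev map over a finite field, and all parameters here are illustrative):

```python
import math

def chebyshev(n, x):
    """Chebyshev polynomial T_n(x) = cos(n * arccos(x)) for x in [-1, 1]."""
    return math.cos(n * math.acos(x))

# Semigroup property T_a(T_b(x)) = T_{a*b}(x): each party applies its
# private index to the other's public value and lands on the same secret.
x = 0.53          # public seed
a, b = 7, 11      # private keys held by the two parties
pa, pb = chebyshev(a, x), chebyshev(b, x)    # values exchanged publicly
ka, kb = chebyshev(a, pb), chebyshev(b, pa)  # shared secret on each side
```

Security in real schemes comes from working modulo a large prime, where recovering the private index from T_n(x) is believed to be hard; the floating-point version above only illustrates the algebra.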


2018 ◽  
Vol 10 (12) ◽  
pp. 168781401881745 ◽  
Author(s):  
Ying Zhang ◽  
Zhanghua Lian ◽  
Mi Zhou ◽  
Tiejun Lin

In high- or extra-high-temperature natural gas oilfields, where premium connections are employed in casing, gas leakage in the wellbore is often detected after several years of gas production. Because the mechanical properties of a viscoelastic material change with time and temperature, relaxation of the contact pressure on the connection sealing surface is the main cause of gas leakage in high-temperature gas wells. In this article, tension-creep experiments were conducted, and a constitutive model of the casing material was established by the Prony series method. The Prony series shift factor was calculated to study the thermo-rheological behavior of the casing material over the range from 120°C to 300°C. A linear viscoelastic model was implemented in ABAQUS, and the simulation results were compared with our experimental data to validate the methodology. Finally, the viscoelastic finite element model was applied to predict the relaxation of contact pressure on the premium connection sealing surface versus time at different temperatures, and a ratio of the design contact pressure to the intended gas sealing pressure is recommended to avoid premium connection failure in high-temperature gas wells.
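A sketch of the time-temperature shift idea behind such a Prony-series fit, using the "universal" WLF constants as assumed placeholders rather than the shift factors fitted to the casing material in the paper:

```python
import math

def wlf_shift(T, Tref, C1=17.44, C2=51.6):
    """WLF time-temperature shift factor a_T.
    The C1, C2 defaults are the 'universal' WLF constants, used here only
    as placeholders; they are not the paper's fitted values."""
    return 10.0 ** (-C1 * (T - Tref) / (C2 + (T - Tref)))

def shifted_relaxation(t, T, Tref, g_inf, prony):
    """Normalized Prony series evaluated at the reduced time t / a_T,
    so one master curve covers all temperatures."""
    t_red = t / wlf_shift(T, Tref)
    return g_inf + sum(g * math.exp(-t_red / tau) for g, tau in prony)
```

Above the reference temperature a_T drops below 1, the reduced time grows, and the modulus (and hence the sealing contact pressure it drives) relaxes faster, which is the mechanism the article models.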


2010 ◽  
Vol 3 (6) ◽  
pp. 1555-1568 ◽  
Author(s):  
B. Mijling ◽  
O. N. E. Tuinder ◽  
R. F. van Oss ◽  
R. J. van der A

Abstract. The Ozone Profile Algorithm (OPERA), developed at KNMI, retrieves the vertical ozone distribution from nadir spectral satellite measurements of backscattered sunlight in the ultraviolet and visible wavelength range. To produce consistent global datasets the algorithm needs good global performance, while a short computation time facilitates its use in near-real-time applications. To test the global performance of the algorithm we use the convergence behaviour of the ozone profile retrievals from the GOME instrument (on board ERS-2) for February and October 1998 as a diagnostic tool. In this way, we uncover different classes of retrieval problems, related to the South Atlantic Anomaly, low cloud fractions over deserts, desert dust outflow over the ocean, and the intertropical convergence zone. The influence on retrieval performance of the first guess and of the external input data, including the ozone cross-sections and the ozone climatologies, is also investigated. By using a priori ozone profiles selected according to the expected total ozone column, retrieval problems due to anomalous ozone distributions (such as in the ozone hole) can be avoided. Applying these algorithm adaptations improves the convergence statistics considerably, not only increasing the number of successful retrievals, but also reducing the average computation time, owing to fewer iteration steps per retrieval. For February 1998, non-convergence was brought down from 10.7% to 2.1%, while the mean number of iteration steps (which dominates the computation time) dropped by 26%, from 5.11 to 3.79.


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. V1-V9 ◽  
Author(s):  
Zhonghuan Chen ◽  
Sergey Fomel ◽  
Wenkai Lu

When plane-wave destruction (PWD) is implemented by implicit finite differences, the local slope is estimated by an iterative algorithm. We propose an analytical estimator of the local slope that is based on a convergence analysis of the iterative algorithm. Using the analytical estimator, we design a noniterative method to estimate slopes with a three-point PWD filter. Compared with the iterative estimation, the proposed method needs only one regularization step, which reduces the computation time significantly. With directional decoupling of the plane-wave filter, the proposed algorithm is also applicable to 3D slope estimation. We present synthetic and field experiments to demonstrate that the proposed algorithm yields correct estimates in a shorter computation time.


Author(s):  
Jérôme Limido ◽  
Mohamed Trabia ◽  
Shawoon Roy ◽  
Brendan O’Toole ◽  
Richard Jennings ◽  
...  

A series of experiments was performed to study the plastic deformation of metallic plates under hypervelocity impact at the University of Nevada, Las Vegas (UNLV) Center for Materials and Structures using a two-stage light gas gun. In these experiments, cylindrical Lexan projectiles were fired at A36 steel target plates at velocities ranging from 4.5 to 6.0 km/s. The experiments were designed to produce a front-side impact crater and a permanent bulging deformation on the back surface of the target without complete perforation of the plates. Free-surface velocities of the back surface of the target plate were measured using the newly developed Multiplexed Photonic Doppler Velocimetry (MPDV) system. To simulate such experiments, a Lagrangian-based smoothed particle hydrodynamics (SPH) approach is typically used to avoid the problems associated with mesh instability. Despite their intrinsic capability for simulating violent impacts, particle methods have a few drawbacks that may considerably affect their accuracy and performance, including a lack of interpolation completeness, tensile instability, and spurious pressures. Moreover, computation time is a strong limitation that often necessitates the use of reduced 2D axisymmetric models. To address these shortcomings, the IMPETUS Afea Solver® implemented a newly developed SPH formulation that solves the problems of spurious pressures and tensile instability. The algorithm takes full advantage of GPU technology for parallelization of the computation and opens the door to running large 3D models (20,000,000 particles). The combination of accurate algorithms and drastically reduced computation time now makes it possible to run a high-fidelity hypervelocity impact model.


Jurnal INKOM ◽  
2014 ◽  
Vol 8 (1) ◽  
pp. 29 ◽  
Author(s):  
Arnida Lailatul Latifah ◽  
Adi Nurhadiyatna

This paper proposes parallel algorithms for the precipitation input to flood modelling, in particular for spatial rainfall distribution. As an important input to flood modelling, the spatial distribution of rainfall is always needed as a pre-condition of the model. In this paper two interpolation methods, Inverse distance weighting (IDW) and Ordinary kriging (OK), are discussed. Both are developed as parallel algorithms in order to reduce the computation time. To measure the computational efficiency, the performance of the parallel algorithms is compared to that of the serial algorithms for both methods. The findings indicate that: (1) the computation time of the OK algorithm is up to 23% longer than that of IDW; (2) the computation time of the OK and IDW algorithms increases linearly with the number of cells/points; (3) the computation time of the parallel algorithms for both methods decays exponentially with the number of processors, with a decay factor of 0.52 for IDW and 0.53 for OK; and (4) the parallel algorithms achieve near-ideal speed-up.
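The kriging system is more involved, but IDW is a short kernel whose grid cells are mutually independent, which is what makes the parallelization straightforward. A serial sketch with illustrative gauge data (the paper's datasets and parallel framework are not reproduced):

```python
def idw(x, y, stations, power=2.0):
    """Inverse distance weighting: z = sum(w_i*z_i)/sum(w_i), w_i = d_i**-power."""
    num = den = 0.0
    for sx, sy, sz in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return sz                      # exact hit on a gauge location
        w = d2 ** (-power / 2.0)
        num += w * sz
        den += w
    return num / den

# Illustrative rain gauges (x, y, rainfall in mm).
gauges = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]

# Every cell depends only on the gauges, not on other cells, so this
# double loop is exactly what gets distributed across processors.
grid = [[idw(i / 4.0, j / 4.0, gauges) for i in range(5)] for j in range(5)]
```

Because each interpolated value is a weighted average of the gauge values, cells can be assigned to processors in any order with no communication, consistent with the near-ideal speed-up reported.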


2021 ◽  
Author(s):  
Brett W. Larsen ◽  
Shaul Druckmann

Abstract. Lateral and recurrent connections are ubiquitous in biological neural circuits. The strong computational abilities of feedforward networks have been extensively studied; on the other hand, while certain roles for lateral and recurrent connections in specific computations have been described, a more complete understanding of the role and advantages of recurrent computations that might explain their prevalence remains an important open challenge. Previous key studies by Minsky and later by Roelfsema argued that the sequential, parallel computations for which recurrent networks are well suited can be highly effective approaches to complex computational problems. Such “tag propagation” algorithms perform repeated, local propagation of information and were introduced in the context of detecting connectedness, a task that is challenging for feedforward networks. Here, we advance the understanding of the utility of lateral and recurrent computation by first performing a large-scale empirical study of neural architectures for the computation of connectedness to explore feedforward solutions more fully and establish robustly the importance of recurrent architectures. In addition, we highlight a tradeoff between computation time and performance and demonstrate hybrid feedforward/recurrent models that perform well even in the presence of varying computational time limitations. We then generalize tag propagation architectures to multiple, interacting propagating tags and demonstrate that these are efficient computational substrates for more general computations by introducing and solving an abstracted biologically inspired decision-making task.
More generally, our work clarifies and expands the set of computational tasks that can be solved efficiently by recurrent computation, yielding hypotheses for structure in population activity that may be present in such tasks.

Author Summary. Lateral and recurrent connections are ubiquitous in biological neural circuits; intriguingly, this stands in contrast to the majority of current-day artificial neural network research, which primarily uses feedforward architectures except in the context of temporal sequences. This raises the possibility that part of the difference in computational capabilities between real neural circuits and artificial neural networks is accounted for by the role of recurrent connections, and as a result a more detailed understanding of the computational role played by such connections is of great importance. Making effective comparisons between architectures is a subtle challenge, however, and in this paper we leverage the computational capabilities of large-scale machine learning to robustly explore how differences in architectures affect a network’s ability to learn a task. We first focus on the task of determining whether two pixels are connected in an image, which has an elegant and efficient recurrent solution: propagate a connected label or tag along paths. Inspired by this solution, we show that it can be generalized in many ways, including propagating multiple tags at once and changing the computation performed on the result of the propagation. To illustrate these generalizations, we introduce an abstracted decision-making task related to foraging in which an animal must determine whether it can avoid predators in a random environment. Our results shed light on the set of computational tasks that can be solved efficiently by recurrent computation and how these solutions may appear in neural activity.
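The tag-propagation idea can be sketched as repeated, purely local updates on a binary image: a tag spreads from a source pixel to "on" 4-neighbours until nothing changes, and connectedness is read off at the target pixel. A plain-Python sketch of that update rule, not the trained networks studied in the paper:

```python
def connected(image, src, dst):
    """Decide whether two pixels lie in the same connected component by
    repeated local tag propagation: each sweep lets every 'on' pixel copy
    the tag from a tagged 4-neighbour, the kind of local, parallel update
    a recurrent network can implement."""
    rows, cols = len(image), len(image[0])
    tag = [[False] * cols for _ in range(rows)]
    tag[src[0]][src[1]] = image[src[0]][src[1]] == 1
    changed = True
    while changed:
        changed = False
        for r in range(rows):
            for c in range(cols):
                if image[r][c] == 1 and not tag[r][c]:
                    if any(0 <= rr < rows and 0 <= cc < cols and tag[rr][cc]
                           for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))):
                        tag[r][c] = True
                        changed = True
    return tag[dst[0]][dst[1]]

# Two components: the left path and the isolated right column.
img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 1, 0, 1]]
```

The number of sweeps grows with the path length between the pixels, which is the computation-time/performance tradeoff the paper examines for recurrent versus feedforward architectures.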

