Unsteady Buoyant Jet Simulations Using Dynamic Connection Scheme of Hydrostatic and Non-Hydrostatic Zone

Author(s):  
Masanobu Hasebe ◽  
Shigeru Tabeta

Most ocean models employ the hydrostatic approximation because the horizontal scale of oceanic phenomena is usually much larger than the vertical scale. Under the hydrostatic approximation, dynamic pressure is neglected and the momentum equation in the vertical direction need not be solved. However, for buoyant jets from the sea bottom, such as submarine groundwater discharge and hydrothermal plumes, the hydrodynamic pressure cannot be neglected and the vertical momentum equation must be taken into account. Non-hydrostatic analysis requires so much computation time that it is usually difficult to calculate the current field over a wide ocean area by this approach. On the other hand, analysis assuming the hydrostatic approximation needs less computational time and usually gives reasonable results for large-scale ocean phenomena such as tidal currents. In the present study, the authors developed a new type of ocean model for multi-scale analysis, which simultaneously conducts hydrostatic analysis for wide-area phenomena and non-hydrostatic analysis for the detailed flow around the buoyant jet. The application limit of the hydrostatic approximation for ocean models was investigated, and a dynamic method of connecting the hydrostatic zone with the non-hydrostatic zone was developed. Theoretical consideration employing the parameters δ and ε, which represent the ratio of grid size Δz to Δx and the ratio of vertical velocity to horizontal velocity, showed that the hydrostatic approximation can be applied if δε and ε² are small. To examine the developed method, simulations of a lock-exchange problem and of a vertical jet under an oscillating current were conducted. The results of the present model were similar to those of a fully non-hydrostatic model when the hydrostatic approximation was applied in the region where δε < 0.005 and ε² < 0.005.
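As a rough, hedged illustration (not code from the paper), the switching criterion above can be checked cell by cell: compute the grid aspect ratio δ = Δz/Δx and the velocity ratio ε = W/U and require both δε and ε² to stay below the quoted threshold of 0.005. The function name and the velocity scales below are assumptions introduced for illustration.

```python
def hydrostatic_ok(dz, dx, w_scale, u_scale, threshold=0.005):
    """Decide whether a grid cell may be treated hydrostatically.

    delta = dz / dx   (ratio of vertical to horizontal grid size)
    eps   = W / U     (ratio of vertical to horizontal velocity scales)
    The cell qualifies only if both delta*eps and eps**2 are below the
    threshold quoted in the abstract (0.005).
    """
    delta = dz / dx
    eps = w_scale / max(u_scale, 1e-12)  # guard against division by zero
    return (delta * eps < threshold) and (eps ** 2 < threshold)

# Wide, thin cell with weak vertical motion -> hydrostatic zone
print(hydrostatic_ok(dz=1.0, dx=100.0, w_scale=0.01, u_scale=0.5))  # True
# Strong vertical jet on a nearly isotropic grid -> non-hydrostatic zone
print(hydrostatic_ok(dz=1.0, dx=5.0, w_scale=0.3, u_scale=0.4))     # False
```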

Author(s):  
Tsuguki Kinoshita ◽  
Shigeru Tabeta ◽  
Masataka Fujino

Ohmura Bay is a typical enclosed estuary located in Kyushu, Japan. In summer, strong stratification forms, which produces an oxygen-deficient water mass in the bottom layer. For the purpose of restoring water quality in the bay, a field experiment of an artificial purification system was carried out. In the experiment, a diffusion pump was installed on the bottom of the bay. The instrument draws in surface water of lower density that is rich in oxygen, mixes it with bottom water of higher density that is poor in oxygen, and diffuses the mixed water upward. The mixed water is expected to spread along the isopycnal surface as a density current, which should dissipate the anoxic water in the bottom layer and promote the circulation of nutrients. However, the experiment cannot be said to have been successful, and detailed analysis by numerical simulation is necessary in order to design a more effective purification system. Most ocean models employ the hydrostatic approximation because the horizontal scale of oceanic phenomena is usually much larger than the vertical scale. Under the hydrostatic approximation, dynamic pressure is neglected and the vertical momentum equation need not be solved. In the present case, however, the hydrodynamic pressure around the purification system is not negligible and the vertical momentum equation must be solved (this approach is called FULL-3D here). A FULL-3D calculation takes much longer than a calculation using the hydrostatic approximation, so it is practically impossible to compute the flow of the whole of Ohmura Bay by the FULL-3D approach. The authors developed a new type of ocean model for multi-scale analysis, which simultaneously conducts hydrostatic analysis for wide-area phenomena and FULL-3D analysis for the detailed flow around the object of interest. In order to connect the hydrostatic region and the FULL-3D region, a nested grid system is employed. Using this combined system, the effect of the purification system on the whole bay can be investigated accurately.


2021 ◽  
Author(s):  
Brett W. Larsen ◽  
Shaul Druckmann

Abstract. Lateral and recurrent connections are ubiquitous in biological neural circuits. The strong computational abilities of feedforward networks have been extensively studied; on the other hand, while certain roles for lateral and recurrent connections in specific computations have been described, a more complete understanding of the role and advantages of recurrent computations that might explain their prevalence remains an important open challenge. Previous key studies by Minsky and later by Roelfsema argued that the sequential, parallel computations for which recurrent networks are well suited can be highly effective approaches to complex computational problems. Such "tag propagation" algorithms perform repeated, local propagation of information and were introduced in the context of detecting connectedness, a task that is challenging for feedforward networks. Here, we advance the understanding of the utility of lateral and recurrent computation by first performing a large-scale empirical study of neural architectures for the computation of connectedness to explore feedforward solutions more fully and establish robustly the importance of recurrent architectures. In addition, we highlight a tradeoff between computation time and performance and demonstrate hybrid feedforward/recurrent models that perform well even in the presence of varying computational time limitations. We then generalize tag propagation architectures to multiple, interacting propagating tags and demonstrate that these are efficient computational substrates for more general computations by introducing and solving an abstracted biologically inspired decision-making task. More generally, our work clarifies and expands the set of computational tasks that can be solved efficiently by recurrent computation, yielding hypotheses for structure in population activity that may be present in such tasks.

Author Summary. Lateral and recurrent connections are ubiquitous in biological neural circuits; intriguingly, this stands in contrast to the majority of current-day artificial neural network research, which primarily uses feedforward architectures except in the context of temporal sequences. This raises the possibility that part of the difference in computational capabilities between real neural circuits and artificial neural networks is accounted for by the role of recurrent connections, and as a result a more detailed understanding of the computational role played by such connections is of great importance. Making effective comparisons between architectures is a subtle challenge, however, and in this paper we leverage the computational capabilities of large-scale machine learning to robustly explore how differences in architectures affect a network's ability to learn a task. We first focus on the task of determining whether two pixels are connected in an image, which has an elegant and efficient recurrent solution: propagate a connected label or tag along paths. Inspired by this solution, we show that it can be generalized in many ways, including propagating multiple tags at once and changing the computation performed on the result of the propagation. To illustrate these generalizations, we introduce an abstracted decision-making task related to foraging in which an animal must determine whether it can avoid predators in a random environment. Our results shed light on the set of computational tasks that can be solved efficiently by recurrent computation and how these solutions may appear in neural activity.
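As an illustrative sketch only (not the authors' implementation), the tag-propagation idea for connectedness can be written as repeated local spreading of a binary tag from a source pixel until it stops growing; the grid layout, 4-neighbour connectivity, and all names below are assumptions.

```python
import numpy as np

def connected_by_tag_propagation(image, source, target):
    """Decide whether two pixels of a binary image are connected by
    repeatedly propagating a tag to 4-neighbouring foreground pixels."""
    tag = np.zeros(image.shape, dtype=bool)
    tag[source] = image[source] > 0        # seed the tag at the source pixel
    while True:
        spread = tag.copy()
        spread[1:, :]  |= tag[:-1, :]      # propagate downward
        spread[:-1, :] |= tag[1:, :]       # propagate upward
        spread[:, 1:]  |= tag[:, :-1]      # propagate rightward
        spread[:, :-1] |= tag[:, 1:]       # propagate leftward
        new_tag = spread & (image > 0)     # tags survive only on foreground
        if np.array_equal(new_tag, tag):   # converged: nothing new was tagged
            return bool(tag[target])
        tag = new_tag

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 1]])
print(connected_by_tag_propagation(img, (0, 0), (0, 3)))  # True: a path exists
```

Each sweep is a purely local update, which is why a recurrent network with lateral connections can implement it step by step.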


Water ◽  
2021 ◽  
Vol 13 (23) ◽  
pp. 3435
Author(s):  
Boram Kim ◽  
Kwang Seok Yoon ◽  
Hyung-Jun Kim

In this study, a CUDA Fortran-based GPU-accelerated Laplace equation model was developed and applied to several cases. The Laplace equation is one of the governing equations used to physically analyze groundwater flow, and it admits analytical solutions against which numerical results can be verified. Such a numerical model requires a large amount of data to reproduce the flow physically with high accuracy, and therefore requires considerable computational time. To shorten the computation time, CUDA technology was applied: large-scale parallel computations were performed on the GPU, and the program was written to minimize the number of data transfers between the CPU and GPU. A GPU consists of many ALUs specialized for graphics processing and, by using these ALUs, can perform far more concurrent computations than a CPU. The computation results of the GPU-accelerated model were compared with the analytical solution of the Laplace equation to verify their accuracy, and they were in good agreement with it. As the number of grid points increased, the computational time of the GPU-accelerated model decreased progressively relative to that of the CPU-based Laplace equation model; overall, the computational time of the GPU-accelerated Laplace equation model was reduced by a factor of up to about 50.
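A minimal sketch of the kind of solver described above (Jacobi iteration for the 2-D Laplace equation), written in NumPy rather than CUDA Fortran; the grid size, boundary values, and tolerance are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def solve_laplace_jacobi(n=64, tol=1e-6, max_iter=100_000):
    """Solve the 2-D Laplace equation on an n x n grid with Jacobi iteration.

    Dirichlet boundary conditions: u = 1 on the top edge, u = 0 elsewhere.
    Each sweep replaces every interior value by the average of its four
    neighbours -- exactly the per-cell operation a GPU kernel parallelizes.
    """
    u = np.zeros((n, n))
    u[0, :] = 1.0                              # top boundary held at 1
    for _ in range(max_iter):
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:    # converged
            return u_new
        u = u_new
    return u

u = solve_laplace_jacobi(n=64)
print(round(float(u[1, 32]), 4))               # value just below the heated edge
```

On a GPU, each interior point of `u_new` would be computed by its own thread, and keeping `u` resident in device memory avoids the CPU-GPU transfers the abstract mentions.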


2021 ◽  
Vol 14 (5) ◽  
pp. 2781-2799
Author(s):  
Pengfei Wang ◽  
Jinrong Jiang ◽  
Pengfei Lin ◽  
Mengrong Ding ◽  
Junlin Wei ◽  
...  

Abstract. A high-resolution (1/20°) global ocean general circulation model with graphics processing unit (GPU) code implementations is developed based on the LASG/IAP Climate System Ocean Model version 3 (LICOM3) under a heterogeneous-compute interface for portability (HIP) framework. The dynamic core and physics package of LICOM3 are both ported to the GPU, and three-dimensional parallelization (also partitioned in the vertical direction) is applied. The HIP version of LICOM3 (LICOM3-HIP) is 42 times faster than the same number of CPU cores when 384 AMD GPUs and CPU cores are used. LICOM3-HIP has excellent scalability; it can still obtain a speedup of more than 4 on 9216 GPUs compared to 384 GPUs. In this phase, we successfully performed a test of 1/20° LICOM3-HIP using 6550 nodes and 26 200 GPUs, and on a large scale, the model's speed was increased to approximately 2.72 simulated years per day (SYPD). By putting almost all the computation processes inside GPUs, the time cost of data transfer between CPUs and GPUs was reduced, resulting in high performance. Simultaneously, a 14-year spin-up integration following phase 2 of the Ocean Model Intercomparison Project (OMIP-2) protocol of surface forcing was performed, and preliminary results were evaluated. We found that the model results had little difference from the CPU version. Further comparison with observations and lower-resolution LICOM3 results suggests that the 1/20° LICOM3-HIP can reproduce the observations and produce many smaller-scale activities, such as submesoscale eddies and frontal-scale structures.


2012 ◽  
Vol 9 (2) ◽  
pp. 1599-1649 ◽  
Author(s):  
R. Inghilesi ◽  
L. Ottolenghi ◽  
A. Orasi ◽  
C. Pizzi ◽  
F. Bignami ◽  
...  

Abstract. The aim of this study was to determine the dispersion of passive pollutants associated with the Tiber discharge into the Tyrrhenian Sea using numerical marine dispersion models and satellite data. Numerical results obtained in the simulation of realistic discharge episodes were compared with the corresponding evolution of the spatial distributions of the MODIS diffuse light attenuation coefficient at 490 nm (K490), and the results were discussed with reference to the local climate and the seasonal sub-regional circulation regime. The numerical model used for the simulation of the sub-tidal circulation was a Mediterranean sub-regional scale implementation of the Princeton Ocean Model (POM), nested in the large-scale Mediterranean Forecasting System. The nesting method enabled the model to be applied to almost every area in the Mediterranean Sea and also to be used in seasons for which imposing climatological boundary conditions would have been questionable. Dynamical effects on coastal circulation and on water density due to the Tiber discharge were additionally accounted for in the oceanographic model by implementing the river estuary as a point source of a buoyant jet. A Lagrangian particle dispersion model fed with the POM current fields was then run in order to reproduce the effect of the turbulent transport of passive tracers mixed in the plume with the coastal flow. Two significant episodes of river discharge, in winter and summer conditions, were discussed in this paper. It was found that the winter regime was characterized by the presence of a strong coastal jet flowing with the ambient current. In summer the prevailing wind regime induced coastal downwelling conditions, which tended to confine the riverine waters close to the shore. In such conditions sudden wind reversals due to local weather perturbations, causing strong local upwelling, proved to be an effective way to disperse the tracers offshore, moving the plume from the coast and detaching large pools of freshwater.


Ocean Science ◽  
2012 ◽  
Vol 8 (5) ◽  
pp. 773-786 ◽  
Author(s):  
R. Inghilesi ◽  
L. Ottolenghi ◽  
A. Orasi ◽  
C. Pizzi ◽  
F. Bignami ◽  
...  

Abstract. The aim of this study was to determine the dispersion of passive pollutants associated with the Tiber discharge into the Tyrrhenian Sea using numerical marine dispersion models and satellite data. Numerical results obtained in the simulation of realistic discharge episodes were compared with the corresponding evolution of the spatial distributions of MODIS diffuse light attenuation coefficient at 490 nm (K490), and the results were discussed with reference to the local climate and the seasonal sub-regional circulation regime. The numerical model used for the simulation of the sub-tidal circulation was a Mediterranean sub-regional scale implementation of the Princeton Ocean Model (POM), nested in the large-scale Mediterranean Forecasting System. The nesting method enabled the model to be applied to almost every area in the Mediterranean Sea and also to be used in seasons for which imposing climatological boundary conditions would have been questionable. Dynamical effects on coastal circulation and on water density due to the Tiber discharge were additionally accounted for in the oceanographic model by implementing the river estuary as a point source of a buoyant jet. A Lagrangian particle dispersion model fed with the POM current fields was then run in order to reproduce the effect of the turbulent transport of passive tracers mixed in the plume with the coastal flow. Two significant episodes of river discharge in both winter and summer conditions were discussed in this paper. It was found that the winter regime was characterized by the presence of a strong coastal jet flowing with the ambient current. In summer the prevailing wind regime induced coastal downwelling conditions, which tended to confine the riverine waters close to the shore. In such conditions sudden wind reversals due to local weather perturbations, causing moderate local upwelling, proved to be the only effective way to disperse the tracers offshore, moving the plume from the coast and detaching large pools of freshwater.


2019 ◽  
Author(s):  
Liqun Cao ◽  
Jinzhe Zeng ◽  
Mingyuan Xu ◽  
Chih-Hao Chin ◽  
Tong Zhu ◽  
...  

Combustion is an important class of reactions that affects people's daily lives and the development of aerospace technology. Exploring the reaction mechanism contributes to the understanding of combustion and to the more efficient use of fuels. Ab initio quantum mechanical (QM) calculations are precise but limited by their computational time for large-scale systems. In order to carry out reactive molecular dynamics (MD) simulations of combustion accurately and quickly, we develop the MFCC-combustion method in this study, which calculates the interactions between atoms using a QM method at the MN15/6-31G(d) level. Each molecule in the system is treated as a fragment, and when the distance between any two atoms in different molecules is less than 3.5 Å, a new fragment involving the two molecules is produced in order to account for the two-body interaction. The deviations of MFCC-combustion from full-system calculations are within a few kcal/mol, and the results clearly show that the energies calculated with MFCC-combustion for the different systems are close to convergence once the distance threshold for the two-body QM interactions exceeds 3.5 Å. Methane combustion was then studied with the MFCC-combustion method to explore the combustion mechanism of the methane-oxygen system.
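A purely illustrative sketch of the distance-based dimer construction described above; the data structures and cutoff handling are assumptions, and no actual QM calculation is performed.

```python
import numpy as np

def build_two_body_fragments(molecules, cutoff=3.5):
    """Return index pairs of molecules that should form dimer fragments.

    `molecules` is a list of (n_atoms, 3) coordinate arrays in angstroms.
    A dimer fragment is generated whenever any atom of one molecule lies
    within `cutoff` angstroms of any atom of another molecule, mirroring
    the 3.5 A two-body threshold quoted in the abstract.
    """
    pairs = []
    for i in range(len(molecules)):
        for j in range(i + 1, len(molecules)):
            # minimum interatomic distance between the two molecules
            diff = molecules[i][:, None, :] - molecules[j][None, :, :]
            dmin = np.sqrt((diff ** 2).sum(axis=-1)).min()
            if dmin < cutoff:
                pairs.append((i, j))
    return pairs

# Two nearby molecules and one far away: only the close pair forms a dimer.
mols = [np.array([[0.0, 0.0, 0.0]]),
        np.array([[2.0, 0.0, 0.0]]),
        np.array([[20.0, 0.0, 0.0]])]
print(build_two_body_fragments(mols))  # [(0, 1)]
```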


2018 ◽  
Author(s):  
Pavel Pokhilko ◽  
Evgeny Epifanovsky ◽  
Anna I. Krylov

Using a single-precision floating-point representation reduces the size of data and the computation time by a factor of two relative to the double precision conventionally used in electronic structure programs. For large-scale calculations, such as those encountered in many-body theories, the reduced memory footprint alleviates memory and input/output bottlenecks. The reduced size of data can lead to additional gains due to improved parallel performance on CPUs and various accelerators. However, using single precision can potentially reduce the accuracy of computed observables. Here we report an implementation of coupled-cluster and equation-of-motion coupled-cluster methods with single and double excitations in single precision. We consider both the standard implementation and one using Cholesky decomposition or resolution-of-the-identity representations of the electron-repulsion integrals. Numerical tests illustrate that when single precision is used in correlated calculations, the loss of accuracy is insignificant, and a pure single-precision implementation can be used for computing energies, analytic gradients, excited states, and molecular properties. In addition to pure single-precision calculations, our implementation allows one to follow a single-precision calculation with clean-up iterations, fully recovering double-precision results while retaining significant savings.
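The single-precision-plus-clean-up strategy can be sketched generically as a fixed-point iteration run first in float32 and then refined in float64; the toy update below is a stand-in (Newton's iteration for a square root), not the coupled-cluster amplitude equations, and all names are assumptions.

```python
import numpy as np

def solve_mixed_precision(update, x0, tol_single=1e-6, tol_double=1e-12, max_iter=500):
    """Iterate x <- update(x), first in float32, then clean up in float64."""
    x = x0.astype(np.float32)                 # phase 1: cheap single precision
    for _ in range(max_iter):
        x_new = update(x).astype(np.float32)
        if np.max(np.abs(x_new - x)) < tol_single:
            break
        x = x_new
    x = x.astype(np.float64)                  # phase 2: double-precision clean-up
    for _ in range(max_iter):
        x_new = update(x)
        if np.max(np.abs(x_new - x)) < tol_double:
            break
        x = x_new
    return x

# Toy fixed point: x <- 0.5 * (x + a / x) converges to sqrt(a).
a = np.array([2.0, 3.0])
print(solve_mixed_precision(lambda x: 0.5 * (x + a / x), np.ones(2)))
```

The clean-up phase starts from the converged single-precision result, so only a few double-precision iterations are needed to restore full accuracy.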


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Tao Yue ◽  
Da Zhao ◽  
Duc T. T. Phan ◽  
Xiaolin Wang ◽  
Joshua Jonghyun Park ◽  
...  

Abstract. The vascular network of the circulatory system plays a vital role in maintaining homeostasis in the human body. In this paper, a novel modular microfluidic system with a vertical two-layered configuration is developed to generate large-scale perfused microvascular networks in vitro. The two-layer polydimethylsiloxane (PDMS) configuration allows the tissue chambers and medium channels not only to be designed and fabricated independently but also to be aligned and bonded accordingly. This method can produce a modular microfluidic system that has high flexibility and scalability to design an integrated platform with multiple perfused vascularized tissues with high densities. The medium channel was designed with a rhombic shape and fabricated to be semiclosed to form a capillary burst valve in the vertical direction, serving as the interface between the medium channels and tissue chambers. Angiogenesis and anastomosis at the vertical interface were successfully achieved by using different combinations of tissue chambers and medium channels. Various large-scale microvascular networks were generated and quantified in terms of vessel length and density. Minimal leakage of the perfused 70-kDa FITC-dextran confirmed the lumenization of the microvascular networks and the formation of tight vertical interconnections between the microvascular networks and medium channels in different structural layers. This platform enables the culturing of interconnected, large-scale perfused vascularized tissue networks with high density and scalability for a wide range of multiorgan-on-a-chip applications, including basic biological studies and drug screening.


2019 ◽  
Vol 17 (06) ◽  
pp. 947-975 ◽  
Author(s):  
Lei Shi

We investigate distributed learning with a coefficient-based regularization scheme under the framework of kernel regression methods. Compared with classical kernel ridge regression (KRR), the algorithm under consideration does not require the kernel function to be positive semi-definite and hence provides a simple paradigm for designing indefinite kernel methods. The distributed learning approach partitions a massive data set into several disjoint data subsets and then produces a global estimator by averaging the local estimators obtained on each data subset. Easily implemented partitioning and running the algorithm on each subset in parallel lead to a substantial reduction in computation time versus the standard approach of running the original algorithm on the entire sample. We establish the first minimax optimal rates of convergence for the distributed coefficient-based regularization scheme with indefinite kernels. We thus demonstrate that, compared with distributed KRR, the algorithm under consideration is more flexible and effective in regression problems for large-scale data sets.
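A hedged, illustrative sketch of the divide-and-average scheme with a coefficient-based (possibly indefinite) kernel fit on each partition; the kernel, regularization form, and data below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def local_coefficient_estimator(X, y, kernel, lam):
    """Fit f(x) = sum_i alpha_i * kernel(x, x_i) on one subset by regularized
    least squares on the coefficients (no positive semi-definiteness needed)."""
    K = kernel(X, X)                               # (n, n) kernel matrix
    A = K.T @ K + lam * len(X) * np.eye(len(X))    # coefficient-based regularization
    alpha = np.linalg.solve(A, K.T @ y)
    return lambda Xq: kernel(Xq, X) @ alpha

def distributed_fit(X, y, kernel, lam, n_parts=4, seed=0):
    """Partition the data into disjoint subsets, fit a local estimator on each,
    and average the local predictions (the divide-and-average scheme)."""
    idx = np.array_split(np.random.default_rng(seed).permutation(len(X)), n_parts)
    estimators = [local_coefficient_estimator(X[i], y[i], kernel, lam) for i in idx]
    return lambda Xq: np.mean([f(Xq) for f in estimators], axis=0)

# Toy usage with an indefinite sine kernel on a 1-D regression problem.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=400)
kernel = lambda A, B: np.sin(A @ B.T)              # not positive semi-definite
f_hat = distributed_fit(X, y, kernel, lam=1e-3)
print(f_hat(np.array([[0.5], [1.5]])))             # predictions at two query points
```

Because the subsets are fitted independently, the local solves can run in parallel, which is the source of the computational saving the abstract describes.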

