Influence of Interfacial Force Models and Population Balance Models on the kLa Value in Stirred Bioreactors

Processes ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 1185
Author(s):  
Stefan Seidel ◽  
Dieter Eibl

Optimal oxygen supply is vitally important for the cultivation of aerobically growing cells, as it has a direct influence on cell growth and product formation. A process engineering parameter directly related to oxygen supply is the volumetric oxygen mass transfer coefficient kLa. This study evaluates the influence of different interfacial force and population balance models on the kLa value and the computing time in stirred bioreactors. For this investigation, the OpenFOAM 7 open-source toolbox was used. Firstly, the Euler–Euler model with a constant bubble diameter was applied to a 2 L scale bioreactor to statistically examine the influence of different interfacial models on the kLa value. It was shown that the kL model and the constant bubble diameter have the greatest influence on the calculated kLa value. To eliminate the problem of a constant bubble diameter and to take effects such as bubble breakup and coalescence into account, the Euler–Euler model was coupled with population balance models (PBM). For this purpose, four coalescence and five bubble breakup models were examined. Ultimately, it was established that, for all of the models tested, coupling computational fluid dynamics (CFD) with PBM resulted in better agreement with the experimental data than using the Euler–Euler model alone. However, it should be noted that the higher accuracy of the PBM-coupled models requires twice the computation time.
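For orientation, kLa in such simulations is typically assembled from a kL model and the interfacial area density. A minimal sketch in Python, assuming Higbie's penetration model with a Kolmogorov-scale contact time and a monodisperse bubble diameter; all parameter values are illustrative and not taken from the paper:

```python
import math

# Illustrative parameters (not from the paper)
D_O2 = 2.0e-9    # oxygen diffusivity in water, m^2/s
epsilon = 0.05   # local turbulent dissipation rate, W/kg
nu = 1.0e-6      # kinematic viscosity of water, m^2/s
alpha_g = 0.02   # local gas hold-up (-)
d_b = 1.0e-3     # constant bubble diameter, m

# Higbie penetration model with a Kolmogorov-scale exposure time
t_c = math.sqrt(nu / epsilon)                   # contact time, s
k_L = 2.0 * math.sqrt(D_O2 / (math.pi * t_c))   # liquid-side coefficient, m/s

# Interfacial area density for spherical bubbles of uniform size
a = 6.0 * alpha_g / d_b                         # 1/m

k_L_a = k_L * a                                 # volumetric coefficient, 1/s
print(f"kL = {k_L:.2e} m/s, a = {a:.0f} 1/m, kLa = {k_L_a:.3f} 1/s")
```

This also illustrates why the abstract identifies the kL model and the constant bubble diameter as the dominant influences: kLa depends directly on both.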

Author(s):  
A. Fujiwara ◽  
K. Okamoto ◽  
K. Hashiguchi ◽  
J. Peixinho ◽  
S. Takagi ◽  
...  

Several microbubble generation techniques have been proposed in previous investigations. Here, we study an effective technique that feeds air bubbly flow into a convergent-divergent nozzle (venturi tube). The pressure change in the diverging section induces bubble breakup. The purpose of this study is to clarify the effect of the flow velocity at the throat on the bubble breakup process and the bubble behavior in a venturi tube. Relations between the generated bubble diameter and the breakup process are also described. Detailed high-speed camera observations of the bubble behavior revealed the following features. The velocity at the throat is expected to be of the order of the speed of sound of the bubbly flow, which induces a drastic bubble expansion followed by a shrink. In addition, a liquid column appeared behind a bubble flowing into the throat and grew until it reached the bubble in the form of a jet. This jet induced both unstable surface waves and the breakup of a single large bubble into several pieces.
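The remark that the throat velocity is of the order of the speed of sound of the bubbly flow can be made concrete with a Wood-type mixture sound speed. A rough sketch with illustrative values, not taken from the study:

```python
import math

def bubbly_sound_speed(p, rho_l, alpha):
    """Isothermal Wood-type estimate of the sound speed in a bubbly
    liquid, valid away from very small or very large void fractions."""
    return math.sqrt(p / (rho_l * alpha * (1.0 - alpha)))

p = 101_325.0   # pressure at the throat, Pa (illustrative)
rho_l = 998.0   # liquid density, kg/m^3
for alpha in (0.01, 0.05, 0.10, 0.20):
    c = bubbly_sound_speed(p, rho_l, alpha)
    print(f"void fraction {alpha:4.2f}: c = {c:5.1f} m/s")
```

Even a few percent of gas drops the mixture sound speed to tens of metres per second, so a venturi throat can plausibly reach sonic conditions and trigger the drastic expansion and shrink described above.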


Author(s):  
Tianxiang Liu ◽  
Li Mao ◽  
Mats-Erik Pistol ◽  
Craig Pryor

Abstract Calculating the electronic structure of systems involving very different length scales presents a challenge. Empirical atomistic descriptions such as pseudopotentials or tight-binding models allow one to calculate the effects of atomic placements, but the computational burden increases rapidly with the size of the system, limiting the ability to treat weakly bound extended electronic states. Here we propose a new method to connect atomistic and quasi-continuous models, thus speeding up tight-binding calculations for large systems. We divide a structure into blocks consisting of several unit cells which we diagonalize individually. We then construct a tight-binding Hamiltonian for the full structure using a truncated basis for the blocks, ignoring states having large energy eigenvalues and retaining states with an energy close to the band edge energies. A numerical test using a GaAs/AlAs quantum well shows the computation time can be decreased to less than 5% of the full calculation with errors of less than 1%. We give data for the trade-offs between computing time and loss of accuracy. We also tested calculations of the density of states for a GaAs/AlAs quantum well and find a ten times speedup without much loss in accuracy.
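The block-truncation idea can be illustrated on a toy tight-binding chain. A minimal numpy sketch, not the authors' code: diagonalize each block, keep only its lowest-energy states, and project the full Hamiltonian onto the reduced basis:

```python
import numpy as np

# Toy 1-D tight-binding chain standing in for the atomistic Hamiltonian
n_sites, n_blocks, keep = 240, 8, 6
t = -1.0                                   # hopping (arbitrary units)
H = np.zeros((n_sites, n_sites))
i = np.arange(n_sites - 1)
H[i, i + 1] = H[i + 1, i] = t

# Diagonalize each block and retain its lowest-energy eigenvectors
block = n_sites // n_blocks
cols = []
for b in range(n_blocks):
    s = slice(b * block, (b + 1) * block)
    w, v = np.linalg.eigh(H[s, s])
    V = np.zeros((n_sites, keep))
    V[s, :] = v[:, :keep]                  # embed truncated block basis
    cols.append(V)
B = np.hstack(cols)                        # (n_sites, n_blocks * keep)

# Project the full Hamiltonian onto the truncated basis
w_red = np.linalg.eigvalsh(B.T @ H @ B)
w_full = np.linalg.eigvalsh(H)
print("lowest states, full:   ", np.round(w_full[:4], 4))
print("lowest states, reduced:", np.round(w_red[:4], 4))
```

The reduced eigenproblem here is 48x48 instead of 240x240, mirroring the reported trade-off between computing time and accuracy near the band edges.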


Geophysics ◽  
2009 ◽  
Vol 74 (4) ◽  
pp. S67-S74 ◽  
Author(s):  
Jun Cao ◽  
Ru-Shan Wu

Wave-equation-based acquisition aperture correction in the local angle domain can improve image amplitude significantly in prestack depth migration. However, its original implementation is inefficient because the wavefield decomposition uses the local slant stack (LSS), which is computationally demanding. We propose a faster method to obtain the image and the amplitude correction factor in the local angle domain using beamlet decomposition in the local wavenumber domain. For a given frequency, the image matrix in the local wavenumber domain for all shots can be calculated efficiently. We then transform the shot-summed image matrix from the local wavenumber domain to the local angle domain (LAD). The LAD amplitude correction factor can be obtained with a similar strategy. With the calculated image and correction factor, one can apply acquisition aperture corrections similar to those of the original LSS-based method. For the new implementation, we compare the accuracy and efficiency of two beamlet decompositions: the Gabor–Daubechies frame (GDF) and the local exponential frame (LEF). With both decompositions, our method produces results similar to those of the original LSS-based method. However, our method can be more than twice as fast as LSS and costs only about twice the computation time of traditional one-way wave-equation-based migrations. The results from GDF decomposition are superior to those from LEF decomposition in terms of artifacts, although GDF requires a little more computing time.
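As background, a Gabor-type decomposition of a wavefield slice into local wavenumber components can be written as a windowed FFT; this is only loosely analogous to the GDF/LEF beamlet frames used in the paper, and the window parameters are illustrative:

```python
import numpy as np

def local_wavenumber_decompose(u, win_len, hop):
    """Split a 1-D wavefield slice into overlapping Gaussian-windowed
    segments and FFT each one: a crude Gabor-type local analysis."""
    window = np.exp(-0.5 * ((np.arange(win_len) - win_len / 2)
                            / (win_len / 6)) ** 2)
    frames = [np.fft.fft(u[s:s + win_len] * window)
              for s in range(0, len(u) - win_len + 1, hop)]
    return np.array(frames)               # (n_windows, win_len)

# Synthetic slice: two plane-wave components with different wavenumbers
x = np.linspace(0.0, 1.0, 512)
u = np.sin(2 * np.pi * 30 * x) + 0.5 * np.sin(2 * np.pi * 80 * x)
coeffs = local_wavenumber_decompose(u, win_len=64, hop=32)
print("windows x local wavenumbers:", coeffs.shape)
```

Working in such a local wavenumber domain avoids the explicit slant stacking over many angles that makes LSS expensive; the mapping to local angle then reduces to a per-frequency coordinate change.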


Genes ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 53
Author(s):  
Zaid Al-Ars ◽  
Saiyi Wang ◽  
Hamid Mushtaq

The rapid proliferation of low-cost RNA-seq data has resulted in a growing interest in RNA analysis techniques for various applications, ranging from identifying genotype–phenotype relationships to validating discoveries of other analysis results. However, many practical applications in this field are limited by the available computational resources and the associated long computing time needed to perform the analysis. GATK has a popular best-practices pipeline specifically designed for variant calling in RNA-seq analysis. Some tools in this pipeline are not optimized to scale the analysis to multiple processors or compute nodes efficiently, thereby limiting their ability to process large datasets. In this paper, we present SparkRA, an Apache Spark based pipeline that efficiently scales up the GATK RNA-seq variant calling pipeline on multiple cores in one node or in a large cluster. On a single node with 20 hyper-threaded cores, the original pipeline runs for more than 5 h to process a dataset of 32 GB. In contrast, SparkRA is able to reduce the overall computation time of the pipeline on the same single node by about 4×, down to 1.3 h. On a cluster with 16 nodes (each with eight single-threaded cores), SparkRA is able to further reduce this computation time by 7.7× compared to a single node. Compared to other scalable state-of-the-art solutions, SparkRA is 1.2× faster while achieving the same accuracy of the results.
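A quick consistency check of the stated speedups, using only the numbers given in the abstract:

```python
baseline_h = 5.0         # original GATK pipeline, single 20-core node (> 5 h)
sparkra_single_h = 1.3   # SparkRA on the same node

print(f"single-node speedup: ~{baseline_h / sparkra_single_h:.1f}x "
      "(abstract: about 4x)")

cluster_speedup = 7.7    # 16 nodes vs. one node, from the abstract
cluster_h = sparkra_single_h / cluster_speedup
print(f"implied 16-node runtime: ~{cluster_h * 60:.0f} min")
```

The implied 16-node runtime of roughly ten minutes is an estimate derived from the two reported ratios, not a figure stated in the abstract.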


2019 ◽  
Author(s):  
Lars Nerger ◽  
Qi Tang ◽  
Longjiang Mu

Abstract. Data assimilation integrates information from observational measurements with numerical models. When used with coupled models of Earth system compartments, e.g. the atmosphere and the ocean, consistent joint states can be estimated. A common approach to data assimilation is ensemble-based methods, which use an ensemble of state realizations to estimate the state and its uncertainty. These methods are far more costly to compute than a single coupled model because of the required integration of the ensemble. However, with uncoupled models, the methods have also been shown to exhibit a particularly good scaling behavior. This study discusses an approach to augment a coupled model with data assimilation functionality provided by the Parallel Data Assimilation Framework (PDAF). Using only minimal changes in the codes of the different compartment models, a particularly efficient data assimilation system is generated that utilizes parallelization and in-memory data transfers between the models and the data assimilation functions, and hence avoids most of the file reading and writing, as well as model restarts, during the data assimilation process. The study explains the required modifications of the programs using the example of the coupled atmosphere-sea ice-ocean model AWI-CM. The case of assimilating oceanic observations shows that the data assimilation leads to only small overheads in computing time of about 15% compared to the model without data assimilation, as well as a very good parallel scalability. The model-agnostic structure of the assimilation software ensures a separation of concerns in which the development of data assimilation methods can be separated from the model application.
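As background, ensemble methods of the kind PDAF provides update a forecast ensemble with observations in an analysis step. A minimal stochastic ensemble Kalman filter update in textbook form; this is a generic sketch, not PDAF's API:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(X, y, H, R):
    """Stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble
    y: (n_obs,) observations; H: (n_obs, n_state); R: (n_obs, n_obs)"""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    HA = H @ A
    P_yy = HA @ HA.T / (n_ens - 1) + R           # innovation covariance
    P_xy = A @ HA.T / (n_ens - 1)                # cross covariance
    K = P_xy @ np.linalg.inv(P_yy)               # Kalman gain
    # Perturb observations so the analysis ensemble keeps correct spread
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y - H @ X)

# Tiny example: 3 state variables, the first one observed, 20 members
X = rng.normal(0.0, 1.0, size=(3, 20))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_analysis(X, np.array([0.5]), H, R)
print("forecast mean:", np.round(X.mean(axis=1), 3))
print("analysis mean:", np.round(Xa.mean(axis=1), 3))
```

In the coupled setup described above, the ensemble integration dominates the cost; keeping the ensemble and the analysis step in memory is what avoids the file I/O and restart overheads.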


Author(s):  
D. Geoff Rideout ◽  
Jeffrey L. Stein ◽  
Loucas S. Louca

Accurate estimation of engine vibrations is essential in the design of new engines, engine mounts, and the vehicle frames to which they are attached. Mount force prediction has traditionally been simplified by assuming that the reciprocating dynamics of the engine can be decoupled from the three-dimensional motion of the block. The accuracy of the resulting one-way coupled models decreases as engine imbalance and cylinder-to-cylinder variations increase. Further, the form of the one-way coupled model must be assumed a priori, and there is no mechanism for generating an intermediate-complexity model if the one-way coupled model has insufficient fidelity. In this paper, a new dynamic system model decoupling algorithm is applied to a Detroit Diesel Series 60 in-line six-cylinder engine model to test one-way coupling assumptions and to automate generation of a proper model for mount force prediction. The algorithm, which identifies and removes unnecessary constraint equation terms, is reviewed with the aid of an illustrative example. A fully coupled, balanced rigid body model with no cylinder-to-cylinder variations is then constructed, from which x, y, and z force components at the left-rear, right-rear, and front engine mounts are predicted. The decoupling algorithm is then applied to automatically generate a reduced model in which reciprocating dynamics and gross block motion are decoupled. The amplitudes of the varying components of the force time series are predicted to within 8%, with computation time reduced by 55%. The combustion pressure profile in one cylinder is then changed to represent a misfire that creates imbalance. The decoupled model generated by the algorithm is significantly more robust to imbalance than the traditional one-way coupled models in the literature; however, the vertical component of the front mount force is poorly predicted. Reapplication of the algorithm identifies constraint equation terms that must be reinstated. A new, nondecoupled model is generated that accurately predicts all mount components in the presence of the misfire, with computation time reduced by 39%. The algorithm can be easily reapplied, and a new model generated, whenever engine speed or individual cylinder parameters are changed.
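The decoupling algorithm ranks and removes weak constraint-equation terms. A schematic sketch of one plausible ranking criterion, an activity-style metric that integrates the absolute contribution of each term over a cycle; the signals, names, and threshold here are purely illustrative and not the authors' implementation:

```python
import numpy as np

def relative_activity(term_signals, dt):
    """Time-integrated absolute contribution of each constraint-equation
    term, normalized so the values sum to one."""
    act = {name: np.sum(np.abs(sig)) * dt for name, sig in term_signals.items()}
    total = sum(act.values())
    return {name: a / total for name, a in act.items()}

# Illustrative time histories of three terms in one constraint equation
t = np.arange(0.0, 1.0, 1e-3)
signals = {
    "reciprocating_inertia": 50.0 * np.sin(2 * np.pi * 30 * t),
    "gravity":               1.0 * np.ones_like(t),
    "block_coupling":        0.05 * np.sin(2 * np.pi * 5 * t),
}
rel = relative_activity(signals, dt=1e-3)
threshold = 0.01                  # drop terms below 1% relative activity
kept = sorted(k for k, v in rel.items() if v >= threshold)
print({k: round(v, 4) for k, v in rel.items()})
print("terms retained:", kept)
```

Reapplying such a ranking after a parameter change (e.g. the misfire case) would flag previously negligible terms whose activity has grown, which matches the reinstatement step described above.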


The growth of cloud-based remote healthcare and diagnosis services has led Medical Service Providers (MSPs) to share diagnostic data across diverse environments. These medical data are accessed across diverse platforms, such as mobile and web services, and require large amounts of memory for storage. Compression techniques help to address storage requirements and enable sharing medical data over a transmission medium. Loss of data is not acceptable for medical image processing. As a result, this work considers lossless compression for medical images in particular and greyscale images in general. Modified Huffman encoding (MH) is one of the widely used techniques for achieving lossless compression. However, due to the long bit lengths of its codewords, the existing Modified Huffman (MH) encoding technique is not efficient for medical image processing. Firstly, this work presents Modified Refined Huffman (MRH) for compressing greyscale and binary images using a diagonal scanning method. Secondly, to minimize the computing time, a parallel encoding method is used. Experiments are conducted for a wide variety of images, and performance is evaluated in terms of Compression Ratio, Computation Time and Memory Utilization. The proposed MRH achieves significant performance improvement in terms of Compression Ratio, Computation Time and Memory Usage over state-of-the-art techniques, such as LZW, CCITT G4, JBIG2 and the Levenberg–Marquardt (LM) Neural Network algorithm. The overall results achieved show the applicability of MRH for different application services.
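A minimal sketch of the two ingredients, diagonal scanning of an image followed by Huffman coding of the scanned stream; this illustrates the general approach, not the authors' MRH codeword construction:

```python
import heapq
from collections import Counter

def diagonal_scan(img):
    """Read a 2-D list of pixels along its anti-diagonals."""
    h, w = len(img), len(img[0])
    return [img[i][d - i] for d in range(h + w - 1)
            for i in range(max(0, d - w + 1), min(h, d + 1))]

def huffman_codes(symbols):
    """Build a Huffman code table from symbol frequencies."""
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

img = [[12, 12, 40], [12, 40, 40], [200, 12, 12]]    # toy greyscale image
stream = diagonal_scan(img)
codes = huffman_codes(stream)
bits = "".join(codes[p] for p in stream)
print("scan order:", stream)
print("codes:", codes)
print("compressed bits:", len(bits), "vs raw:", 8 * len(stream))
```

Because per-symbol encoding is independent across chunks of the scanned stream, the stream can be split and encoded in parallel, which is presumably what the parallel encoding step exploits.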


2021 ◽  
Author(s):  
Jinsu Nam ◽  
Jaehee Lyu ◽  
Junyoung Park

Abstract Powder packing simulations using the discrete element method (DEM) face computation time constraints caused by the number and size of the particles. In this paper, a newly suggested packing model transforms the general packing sequence of particle generation, stacking, and compression into particle generation followed by packing through particle growth. To verify the new packing model, it was compared against the normal model using three contact models widely used in DEM, in terms of the radial distribution function, porosity, and coordination number. As a result, the contact between particles showed a similar trend, and the pore distribution was also similar. Using the new packing model can reduce simulation time by a factor of about four compared to the normal packing model, without any other coarse-graining methods. This model has only been applied to particle packing simulations in this paper, but it can be extended to other DEM-based simulations with complex domains.
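A toy 2-D illustration of the generate-and-grow idea: seed small particles, inflate their radii gradually, and relax overlaps, instead of generating full-size particles and then stacking and compressing them. Everything here (step counts, growth rate, relaxation scheme) is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

n, r_target, box = 60, 0.06, 1.0
pos = rng.uniform(0.1, 0.9, size=(n, 2))   # seed positions in a unit box
r = np.full(n, 0.01)                       # start with small radii

for step in range(200):
    r = np.minimum(r * 1.02, r_target)     # grow radii gradually
    for _ in range(5):                     # simple overlap relaxation
        d = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(d, axis=-1) + np.eye(n)
        overlap = np.maximum(r[:, None] + r[None, :] - dist, 0.0)
        np.fill_diagonal(overlap, 0.0)
        push = overlap[:, :, None] * d / dist[:, :, None]
        pos = np.clip(pos + 0.5 * push.sum(axis=1),
                      r[:, None], box - r[:, None])

print(f"packing fraction: {np.sum(np.pi * r**2) / box**2:.3f}")
```

The point of the reordering is that the expensive settling phase (stacking and compression) disappears: particles reach their final size already in a packed configuration.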


2021 ◽  
Author(s):  
Gert Leyssen ◽  
Els van Uytven ◽  
Joris Blanckaert ◽  
Roeland Adams ◽  
Tim Franken ◽  
...  

Progressing towards a sustainable society implies the availability of reliable boundary conditions for various hydrodynamic flood models, including an extensive consideration of uncertainties. With an ever-growing availability of data and models, the uncertainty sources are constantly increasing. Hence, an elaborate uncertainty analysis strategy has become a must. One way to deal with part of this uncertainty is to apply an ensemble approach, using different hydrological models in combination with various climate scenarios. However, impact modellers may find the growing number and increasing length of input series for hydraulic models challenging, since computing time, reliability of the analysis, and project deadlines can cause a conflicting situation. In this context, there is a need for approaches that offer a compromise between computing the vast amount of long input series and adequately addressing the uncertainties within a reasonable time span. We present an approach which reduces the computation time but simultaneously recognises the importance of robust results and the consideration of the different sources of uncertainty. By a stratification of the probability domain for extreme events (discharges, water levels,…), a set of hydrodynamic boundary conditions is generated. Each of these synthetic events gets a probability of occurrence, which changes according to either the considered confidence level or the considered ensemble member. In addition to the stratification approach, a tool for selecting synthetic events for design is developed. This tool allows end-users to create a subset of synthetic events which can be used as design events for a specific area and are representative of the full set of events. The approach is demonstrated for the River Dender catchment in Flanders using 40 years of hydro-meteorological data, an ensemble of 3 hydrological models and a detailed hydraulic model.
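A minimal sketch of the stratification idea: divide the return-period range of extreme discharges into strata, represent each stratum by one synthetic event, and assign it the stratum's annual probability mass. The quantile function and the stratum boundaries are illustrative assumptions, not the project's values:

```python
import numpy as np

def stratified_events(return_periods, quantile):
    """One synthetic event per stratum of the probability domain, each
    carrying that stratum's annual probability of occurrence."""
    T = np.asarray(return_periods, dtype=float)
    p = 1.0 / T                                 # annual exceedance probabilities
    events = []
    for i in range(len(T) - 1):
        T_rep = np.sqrt(T[i] * T[i + 1])        # representative return period
        events.append({"T": T_rep,
                       "Q": quantile(T_rep),    # synthetic event magnitude
                       "p": p[i] - p[i + 1]})   # probability mass in stratum
    return events

# Illustrative Gumbel-type discharge quantiles (parameters made up)
gumbel_q = lambda T: 120.0 + 45.0 * -np.log(-np.log(1.0 - 1.0 / T))

for ev in stratified_events([2, 5, 10, 25, 50, 100, 500], gumbel_q):
    print(f"T ~ {ev['T']:6.1f} a | Q = {ev['Q']:6.1f} m3/s | p = {ev['p']:.4f}")
```

Re-weighting the same synthetic events with different probability vectors is what allows one event set to serve several confidence levels or ensemble members without rerunning the hydraulic model.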


2013 ◽  
Vol 284-287 ◽  
pp. 2463-2467
Author(s):  
Chin Fa Hsieh ◽  
Tsung Han Tsai ◽  
Cheng Chung Liu

In this paper, we propose an efficient VLSI architecture for implementing the forward two-dimensional discrete wavelet transform (2D DWT), which is computed without the traditional row-by-column or column-by-row method. Based on the relations within the original data, we apply masks of different window sizes to the transform and design the architecture around these window masks. In terms of computing time, the proposed architecture requires only N*N/4 clock cycles for an N*N image, while the traditional row-by-column/column-by-row 2D DWT takes N*N clock cycles. The proposed architecture has a better performance than other designs reported in the literature.
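The N*N/4 cycle count can be appreciated with a one-level 2-D Haar transform computed directly with 2x2 window masks: each non-overlapping window position (N*N/4 of them) yields all four subband samples at once. A sketch of this windowed computation; the paper's actual masks and wavelet filters may differ:

```python
import numpy as np

def haar_dwt2_masked(img):
    """One-level 2-D Haar DWT via 2x2 window masks: each of the N*N/4
    non-overlapping windows yields one LL, LH, HL, and HH sample."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]   # window corners
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0                 # approximation
    LH = (a - b + c - d) / 2.0                 # horizontal detail
    HL = (a + b - c - d) / 2.0                 # vertical detail
    HH = (a - b - c + d) / 2.0                 # diagonal detail
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar_dwt2_masked(img)
print("8x8 input -> four", LL.shape, "subbands from 16 windows (N*N/4)")
```

Processing one window per clock cycle therefore covers the whole image in N*N/4 cycles, versus N*N cycles when rows and columns are filtered in two separate passes.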

