A new entropy-variable-based discretization method for minimum entropy moment approximations of linear kinetic equations

Author(s):  
Tobias Leibner ◽  
Mario Ohlberger

In this contribution we derive and analyze a new numerical method for kinetic equations based on a variable transformation of the moment approximation. Classical minimum-entropy moment closures are a class of reduced models for kinetic equations that conserve many of the fundamental physical properties of solutions. However, their practical use is limited by their high computational cost, as an optimization problem has to be solved for every cell in the space-time grid. In addition, implementation of numerical solvers for these models is hampered by the fact that the optimization problems are only well-defined if the moment vectors stay within the realizable set. For the same reason, further reducing these models by, e.g., reduced-basis methods is not a simple task. Our new method overcomes these disadvantages of classical approaches. The transformation is performed on the semi-discretized level, which makes the method applicable to a wide range of kinetic schemes and replaces the nonlinear optimization problems by inversion of the positive-definite Hessian matrix. As a result, the new scheme avoids realizability-related problems altogether. Moreover, a discrete entropy law can be enforced by modifying the time-stepping scheme. Our numerical experiments demonstrate that the new method is often several times faster than the standard optimization-based scheme.
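
To make the replaced subproblem concrete, here is a minimal sketch of the classical optimization-based closure that the transformation sidesteps: the dual problem is solved by Newton iteration, where each step inverts the same positive-definite Hessian that the new method works with directly. The monomial basis, Gauss–Legendre grid, and moment vector are illustrative choices, not taken from the paper.

```python
import numpy as np

# velocity grid on [-1, 1] and moment basis m(v) = (1, v)
v, w = np.polynomial.legendre.leggauss(40)
m = np.vstack([np.ones_like(v), v])

def moments(alpha):
    f = np.exp(alpha @ m)                  # exponential entropy ansatz
    return m @ (w * f)

def hessian(alpha):
    f = np.exp(alpha @ m)
    return (m * (w * f)) @ m.T             # positive definite by construction

def solve_closure(u, tol=1e-10, maxit=50):
    alpha = np.zeros(len(u))
    for _ in range(maxit):
        g = moments(alpha)
        if np.linalg.norm(g - u) < tol:
            break
        alpha = alpha + np.linalg.solve(hessian(alpha), u - g)
    return alpha

u = np.array([1.0, 0.3])                   # realizable moments: density 1, flux 0.3
alpha = solve_closure(u)
print(moments(alpha))                      # reproduces u up to tolerance
```

In the classical scheme this Newton solve runs in every space-time cell; the paper's transformation keeps only the single Hessian inversion per step.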

Fluids ◽  
2021 ◽  
Vol 6 (9) ◽  
pp. 323
Author(s):  
Caelan Lapointe ◽  
Nicholas T. Wimer ◽  
Sam Simons-Wellin ◽  
Jeffrey F. Glusman ◽  
Gregory B. Rieker ◽  
...  

Fires are complex multi-physics problems that span wide spatial scale ranges. Capturing this complexity in computationally affordable numerical simulations for process studies and “outer-loop” techniques (e.g., optimization and uncertainty quantification) is a fundamental challenge in reacting flow research. Further complications arise for propagating fires where a priori knowledge of the fire spread rate and direction is typically not available. In such cases, static mesh refinement at all possible fire locations is a computationally inefficient approach to bridging the wide range of spatial scales relevant to fire behavior. In the present study, we address this challenge by incorporating adaptive mesh refinement (AMR) in fireFoam, an OpenFOAM solver for simulations of complex fire phenomena involving pyrolyzing solid surfaces. The AMR functionality in the extended solver, called fireDyMFoam, is load balanced, models gas, solid, and liquid phases, and allows us to dynamically track regions of interest, thus avoiding inefficient over-resolution of areas far from a propagating flame. We demonstrate the AMR capability and computational efficiency for fire spread on vertical panels, showing that the AMR solver reproduces results obtained using much larger statically refined meshes, but at a substantially reduced computational cost. We then leverage AMR in an optimization framework for fire suppression based on the open-source Dakota toolkit, which is made more computationally tractable through the use of fireDyMFoam, minimizing a cost function that balances water use and solid-phase mass loss. The extension of fireFoam developed here thus enables the use of higher fidelity simulations in optimization problems for the suppression of fire spread in both built and natural environments.
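
The shape of the suppression cost function can be illustrated with a toy surrogate; the exponential mass-loss model and every coefficient below are invented for illustration and are not taken from the fireDyMFoam/Dakota study.

```python
import numpy as np

# hypothetical surrogate: solid-phase mass loss decays with applied water,
# but never below a residual amount (all coefficients are made up)
def mass_loss(water):
    return 5.0 * np.exp(-0.8 * water) + 0.2

# cost balancing water use against mass loss, as in the suppression objective
def cost(water, w_water=1.0, w_loss=3.0):
    return w_water * water + w_loss * mass_loss(water)

water = np.linspace(0.0, 10.0, 1001)
best = water[np.argmin(cost(water))]
print(best)  # water use that balances suppression against consumption
```

In the actual study this scan is replaced by Dakota driving full AMR fire simulations, which is why the cheaper fireDyMFoam solver matters.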


Author(s):  
C. I. Papadopoulos ◽  
E. E. Efstathiou ◽  
P. G. Nikolakopoulos ◽  
L. Kaiktsis

The paper presents an optimization study of the geometry of three-dimensional micro-thrust bearings, in a wide range of convergence ratios. The optimization goal is the maximization of the bearing load carrying capacity. The bearings are modeled as microchannels, consisting of a smooth moving wall (rotor), and a stationary wall (stator) with partial periodic rectangular texturing. The flow field is calculated from the numerical solution of the Navier-Stokes equations for incompressible isothermal flow; processing of the results yields the bearing load capacity and friction coefficient. The geometry of the textured channel is defined parametrically for several width-to-length ratios. Optimal texturing geometries are obtained by utilizing an optimization tool based on genetic algorithms, which is coupled to the CFD code. Here, the design variables define the bearing geometry and convergence ratio. To minimize the computational cost, a multi-objective approach is proposed, consisting of the simultaneous maximization of the load carrying capacity and minimization of the bearing convergence ratio. The optimal solutions, identified based on the concept of Pareto dominance, are equivalent to those of single-objective optimization problems at different convergence ratio values. The present results demonstrate that the characteristics of the optimal texturing patterns depend strongly on both the convergence ratio and the width-to-length ratio. Further, the optimal load carrying capacity increases at increasing convergence ratio, up to an optimal value, identified by the optimization procedure. Finally, proper surface texturing provides substantial load carrying capacity even for parallel or slightly diverging bearings. Based on the present results, we propose simple formulas for the design of textured micro-thrust bearings.
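
Pareto dominance, as used for the selection above, reduces to a simple filter: a design survives if no other design is at least as good in both objectives and strictly better in one. The sketch below keeps designs maximizing load capacity while minimizing convergence ratio; the sample numbers are invented.

```python
import numpy as np

def pareto_front(points):
    """Return indices of nondominated designs.

    points[:, 0]: load carrying capacity (to maximize)
    points[:, 1]: convergence ratio (to minimize)
    """
    keep = []
    for i, (load_i, k_i) in enumerate(points):
        dominated = any(
            load_j >= load_i and k_j <= k_i and (load_j > load_i or k_j < k_i)
            for j, (load_j, k_j) in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

designs = np.array([[1.0, 0.1], [1.5, 0.3], [1.2, 0.3], [2.0, 0.8], [1.8, 0.9]])
print(pareto_front(designs))  # (1.2, 0.3) and (1.8, 0.9) are dominated
```

Each surviving point corresponds to a single-objective optimum at some fixed convergence ratio, which is exactly the equivalence the study exploits.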


2013 ◽  
Vol 135 (8) ◽  
Author(s):  
Hu Wang ◽  
Enying Li ◽  
Guangyao Li

The combined approximations (CA) method is an effective reanalysis approach providing high quality results. The CA method is suitable for a wide range of structural optimization problems including linear reanalysis, nonlinear reanalysis and eigenvalue reanalysis. However, with increasing complexity and scale of engineering problems, the efficiency of the CA method might not be guaranteed. A major bottleneck of the CA method is obtaining the reduced-basis vectors efficiently. Therefore, a modified CA method, based on approximation of the inverse matrix, is suggested. Based on the symmetric successive over-relaxation (SSOR) and compressed sparse row (CSR) formats, the efficiency of the CA method is shown to be much improved and the corresponding storage space markedly reduced. In order to further improve the efficiency, the suggested strategy is implemented on a graphic processing unit (GPU) platform. To verify the performance of the suggested method, several case studies are undertaken. Compared with the popular serial CA method, the results demonstrate that the suggested GPU-based CA method is an order of magnitude faster for the same level of accuracy.
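
The CA recurrence itself is compact. Below is a dense-algebra sketch on invented data, with `np.linalg.solve(K0, ...)` standing in for reuse of K0's existing factorization; the paper's SSOR/CSR approximation of the inverse and the GPU implementation are beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 6                                 # dofs, number of basis vectors

A = rng.standard_normal((n, n))
K0 = 4.0 * np.eye(n) + 0.1 * (A + A.T) / 2   # original (factorized) stiffness
B = rng.standard_normal((n, n))
dK = 0.05 * (B + B.T) / 2                    # design modification
K, f = K0 + dK, rng.standard_normal(n)

# binomial-series basis: r1 = K0^{-1} f, r_{i+1} = -K0^{-1} dK r_i
R = [np.linalg.solve(K0, f)]
for _ in range(m - 1):
    R.append(-np.linalg.solve(K0, dK @ R[-1]))
R = np.column_stack(R)

c = np.linalg.solve(R.T @ K @ R, R.T @ f)    # small m-by-m reduced system
u_ca = R @ c
print(np.linalg.norm(u_ca - np.linalg.solve(K, f)))  # small reanalysis error
```

The full solve of the modified system K is thus replaced by m backsubstitutions against K0 plus one tiny reduced system, which is where the reanalysis savings come from.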


2020 ◽  
pp. 431-449
Author(s):  
Oleg V. Shekatunov ◽  
Konstantin G. Malykhin

The article is devoted to the specifics of studying the industrial labour force of Russia in the 1920s-1930s in Russian historiography. The various stages of study, from the 1920s-1930s up to recent years, are considered. The relevance of the study is due to several factors. These include contradictions in the assessments of Bolshevik modernization of the 1920s and 1930s; projected labour force shortages in modern Russia; and the existing labour force shortage in industry at present. This explains the relevance of studying a historical period characterized by the country's most acute personnel problems. The novelty of the study lies in the fact that modern Russian historiography lacks a holistic, integrated view of the formation of the labour force potential of Russian industry in the 1920s and 1930s. It is noted that there is no research aimed at analyzing the historiography of these problems. The main stages of the study of the industrial labour force are highlighted. Scientific works correlated with each stage of the study of the topic are analyzed. The problems and methodology of each stage are considered. A review of a wide range of scientific papers, both articles and theses, is presented.


2019 ◽  
Vol 26 (23) ◽  
pp. 4403-4434 ◽  
Author(s):  
Susimaire Pedersoli Mantoani ◽  
Peterson de Andrade ◽  
Talita Perez Cantuaria Chierrito ◽  
Andreza Silva Figueredo ◽  
Ivone Carvalho

Neglected Diseases (NDs) affect millions of people, especially the poorest populations around the world. Efforts toward an effective treatment have so far proved insufficient. In this context, triazole derivatives have shown great relevance in medicinal chemistry due to a wide range of biological activities. This review aims to describe some of the most relevant and recent research focused on 1,2,3- and 1,2,4-triazole-based molecules targeting four expressive NDs: Chagas disease, Malaria, Tuberculosis and Leishmaniasis.


2020 ◽  
Vol 28 (3) ◽  
pp. 147-160
Author(s):  
Andrea Bonito ◽  
Diane Guignard ◽  
Ashley R. Zhang

Abstract We consider the numerical approximation of the spectral fractional diffusion problem based on the so-called Balakrishnan representation. The latter consists of an improper integral approximated via quadratures. At each quadrature point, a reaction–diffusion problem must be approximated; this constitutes the method's bottleneck. In this work, we propose to reduce the computational cost using a reduced basis strategy allowing for a fast evaluation of the reaction–diffusion problems. The reduced basis does not depend on the fractional power s for 0 < s_min ⩽ s ⩽ s_max < 1. It is built offline once and for all, and used online irrespective of the fractional power. We analyze the reduced basis strategy and show its exponential convergence. The analytical results are illustrated with insightful numerical experiments.
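
The structure of the Balakrishnan quadrature can be sketched directly in matrix form: A^{-s} = sin(πs)/π ∫₀^∞ μ^{-s}(μI + A)^{-1} dμ, discretized by sinc quadrature after the substitution μ = e^y. Each quadrature point costs one shifted solve, mirroring the reaction–diffusion solves that motivate the reduced basis. The diagonal test matrix and quadrature parameters below are illustrative choices, not the paper's.

```python
import numpy as np

def fractional_inverse_apply(A, x, s, k=0.1, N=400):
    """Approximate A^{-s} x via sinc quadrature of the Balakrishnan integral."""
    I = np.eye(A.shape[0])
    acc = np.zeros_like(x)
    for j in range(-N, N + 1):
        y = j * k
        # each quadrature point is one shifted (reaction-diffusion-type) solve
        acc += np.exp((1.0 - s) * y) * np.linalg.solve(np.exp(y) * I + A, x)
    return np.sin(np.pi * s) / np.pi * k * acc

A = np.diag([1.0, 2.0, 5.0])                  # stand-in for a discrete Laplacian
x = np.ones(3)
print(fractional_inverse_apply(A, x, s=0.5))  # ≈ diag(1, 2, 5)^(-1/2) @ x
```

With hundreds of such solves per evaluation of A^{-s}, replacing each full solve by a cheap reduced-basis evaluation is precisely where the proposed strategy saves cost.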


2020 ◽  
Vol 11 (1) ◽  
pp. 241
Author(s):  
Juliane Kuhl ◽  
Andreas Ding ◽  
Ngoc Tuan Ngo ◽  
Andres Braschkat ◽  
Jens Fiehler ◽  
...  

Personalized medical devices adapted to the anatomy of the individual promise greater treatment success for patients, thus increasing the individual value of the product. In order to cater to individual adaptations, however, medical device companies need to be able to handle a wide range of internal processes and components. These are here referred to collectively as the personalization workload. Consequently, support is required in order to evaluate how best to target product personalization. Since the approaches presented in the literature are not able to sufficiently meet this demand, this paper introduces a new method that can be used to define an appropriate variety level for a product family taking into account standardized, variant, and personalized attributes. The new method enables the identification and evaluation of personalizable attributes within an existing product family. The method is based on established steps and tools from the field of variant-oriented product design, and is applied using a flow diverter—an implant for the treatment of aneurysm diseases—as an example product. The personalization relevance and adaptation workload for the product characteristics that constitute the differentiating product properties were analyzed and compared in order to determine a tradeoff between customer value and personalization workload. This will consequently help companies to employ targeted, deliberate personalization when designing their product families by enabling them to factor variety-induced complexity and customer value into their thinking at an early stage, thus allowing them to critically evaluate a personalization project.


2021 ◽  
pp. 002224372110329
Author(s):  
Nicolas Padilla ◽  
Eva Ascarza

The success of Customer Relationship Management (CRM) programs ultimately depends on the firm's ability to identify and leverage differences across customers, a very difficult task when firms attempt to manage new customers, for whom only the first purchase has been observed. For those customers, the lack of repeated observations poses a structural challenge to inferring unobserved differences across them. This is what we call the "cold start" problem of CRM, whereby companies have difficulty leveraging existing data when they attempt to make inferences about customers at the beginning of their relationship. We propose a solution to the cold start problem by developing a probabilistic machine learning modeling framework that leverages the information collected at the moment of acquisition. The main aspect of the model is that it flexibly captures latent dimensions that govern the behaviors observed at acquisition as well as future propensities to buy and to respond to marketing actions, using deep exponential families. The model can be integrated with a variety of demand specifications and is flexible enough to capture a wide range of heterogeneity structures. We validate our approach in a retail context and empirically demonstrate the model's ability to identify high-value customers as well as those most sensitive to marketing actions, right after their first purchase.


Author(s):  
Tarun Gangwar ◽  
Dominik Schillinger

Abstract We present a concurrent material and structure optimization framework for multiphase hierarchical systems that relies on homogenization estimates based on continuum micromechanics to account for material behavior across many different length scales. We show that the analytical nature of these estimates enables material optimization via a series of inexpensive “discretization-free” constraint optimization problems whose computational cost is independent of the number of hierarchical scales involved. To illustrate the strength of this unique property, we define new benchmark tests with several material scales that for the first time become computationally feasible via our framework. We also outline its potential in engineering applications by reproducing self-optimizing mechanisms in the natural hierarchical system of bamboo culm tissue.


2014 ◽  
Vol 1 (4) ◽  
pp. 256-265 ◽  
Author(s):  
Hong Seok Park ◽  
Trung Thanh Nguyen

Abstract Energy efficiency is an essential consideration in sustainable manufacturing. This study presents a car-fender injection molding process optimization that aims to resolve the trade-off between energy consumption and product quality, with process parameters as the optimized variables. The process is optimized by applying response surface methodology and using the nondominated sorting genetic algorithm II (NSGA-II) to solve the multi-objective optimization problem. To reduce computational cost and time in the problem-solving procedure, a combination of CAE-integration tools is employed. Based on the Pareto diagram, an appropriate solution is derived to obtain optimal parameters. The optimization results show that the proposed approach can effectively help engineers identify optimal process parameters and achieve competitive advantages in both energy consumption and product quality. In addition, the paper discusses engineering analyses that can be employed for holistic optimization of the injection molding process to increase energy efficiency and product quality.
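
One step of this surrogate pipeline can be sketched with invented numbers: fit quadratic response surfaces to design-of-experiments samples, then optimize on the cheap surrogates instead of the CAE model. A weighted sum stands in for a single point of the Pareto front that NSGA-II would trace out; all data below are illustrative, not from the car-fender study.

```python
import numpy as np

# hypothetical DOE samples for one coded process parameter (e.g. melt
# temperature): energy use and a defect score (all numbers are made up)
T = np.linspace(-1, 1, 9)
energy = 2.0 + 1.5 * T + 0.3 * T**2
defect = 1.0 - 1.2 * T + 0.8 * T**2

# quadratic response surfaces fitted by least squares
X = np.column_stack([np.ones_like(T), T, T**2])
ce, *_ = np.linalg.lstsq(X, energy, rcond=None)
cd, *_ = np.linalg.lstsq(X, defect, rcond=None)

# scan the cheap surrogates; equal weights pick one compromise setting
t = np.linspace(-1, 1, 2001)
E = ce[0] + ce[1] * t + ce[2] * t**2
D = cd[0] + cd[1] * t + cd[2] * t**2
t_best = t[np.argmin(E + D)]
print(t_best)  # near the analytic optimum -0.3/2.2 ≈ -0.136
```

Sweeping the weights (or running NSGA-II directly on the surrogates) yields the full Pareto diagram from which the study selects its operating point.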

