computational performance
Recently Published Documents


TOTAL DOCUMENTS

810
(FIVE YEARS 392)

H-INDEX

25
(FIVE YEARS 8)

Author(s):  
Jannik Burre ◽  
Dominik Bongartz ◽  
Alexander Mitsos

Superstructure optimization is a powerful but computationally demanding technique for selecting the optimal structure among many alternatives within a single optimization. In chemical engineering, such problems arise naturally in process design, where different process alternatives must be considered simultaneously to minimize a specific objective function (e.g., production costs or global warming impact). Conventionally, superstructure optimization problems are formulated with either the Big-M or the Convex Hull reformulation approach. However, for problems containing nonconvex functions, it is not clear whether these yield the most computationally efficient formulations. We therefore compare the conventional problem formulations with less common ones (using equilibrium constraints, step functions, or multiplications of binary and continuous variables to model disjunctions) in three case studies. First, a minimalist superstructure optimization problem is used to derive conjectures about their computational performance. These conjectures are then investigated further on two more complex literature benchmarks. Our analysis shows that the less common approaches tend to result in smaller problems while keeping relaxations comparably tight, despite the introduction of additional nonconvexities. For the considered case studies, we demonstrate that all reformulation approaches can benefit further from eliminating optimization variables via a reduced-space formulation. For superstructure optimization problems containing nonconvex functions, we therefore encourage also considering problem formulations that introduce additional nonconvexities but reduce the number of optimization variables.
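The tightness difference between the Big-M and Convex Hull reformulations can be illustrated on a minimal disjunction. The sketch below (not from the paper; the capacity and M values are arbitrary illustrative choices) models a semi-continuous flow x that must be 0 when a unit is deselected (y = 0) and at most CAP when selected (y = 1), and compares the largest x each LP relaxation admits at fractional y:

```python
def bigm_upper_bound(y_relaxed, cap, big_m):
    """Largest x feasible in the LP relaxation of the Big-M formulation:
    x <= big_m * y  together with  x <= cap."""
    return min(cap, big_m * y_relaxed)

def hull_upper_bound(y_relaxed, cap):
    """Largest x feasible in the convex-hull formulation: x <= cap * y,
    which is the tightest possible relaxation for this linear disjunction."""
    return cap * y_relaxed

CAP, BIG_M = 10.0, 1000.0
# At any fractional y, the hull bound is at least as tight as Big-M's
for y in (0.0, 0.01, 0.5, 1.0):
    assert hull_upper_bound(y, CAP) <= bigm_upper_bound(y, CAP, BIG_M)
```

At y = 0.01 the Big-M relaxation still allows x up to 10.0, while the hull allows only 0.1, which is the kind of relaxation gap the abstract's comparison is about (for nonconvex constraints, however, neither bound is this clean).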


Cells ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 239
Author(s):  
Sonja Langthaler ◽  
Jasmina Lozanović Šajić ◽  
Theresa Rienmüller ◽  
Seth H. Weinberg ◽  
Christian Baumgartner

The mathematical modeling of ion channel kinetics is an important tool for studying the electrophysiological mechanisms of nerves, the heart, or cancer, from a single cell to an organ. Common approaches use either a Hodgkin–Huxley (HH) or a hidden Markov model (HMM) description, depending on the level of detail required about the functionality and structural changes of the underlying channel gating, and taking into account the computational effort of model simulations. Here, we introduce a novel system-theory-based approach to ion channel modeling, built on the concept of transfer function characterization from patch clamp measurements, without a priori knowledge of the biological system. Using the Shaker-related voltage-gated potassium channel Kv1.1 (KCNA1) as an example, we compare the established approaches, HH and HMM, with the system-theory-based concept in terms of model accuracy, computational effort, degree of electrophysiological interpretability, and methodological limitations. This highly data-driven modeling concept offers a new opportunity for the phenomenological kinetic modeling of ion channels, exhibiting exceptional accuracy and computational efficiency compared to conventional methods. The method has high potential to further improve the quality and computational performance of complex cell and organ model simulations, and could provide a valuable new tool in the field of next-generation in silico electrophysiology.
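The abstract's transfer-function idea can be illustrated generically: a linear system is characterized by its response to a step input, much as a voltage-clamp step probes a channel. This toy sketch (gain and time constant are invented, not fitted from any patch clamp data) simulates a first-order transfer function K/(τs + 1) with an explicit Euler step:

```python
def step_response(K, tau, t_end=0.05, dt=1e-4):
    """Euler simulation of the first-order transfer function K/(tau*s + 1)
    driven by a unit step input, i.e. dy/dt = (K*u - y)/tau with u = 1.
    A crude stand-in for a current response to a voltage-clamp step."""
    y, out = 0.0, []
    for _ in range(int(t_end / dt)):
        y += dt * (K - y) / tau
        out.append(y)
    return out

resp = step_response(K=1.0, tau=0.005)
# After 10 time constants the output has settled to the steady-state gain K
assert abs(resp[-1] - 1.0) < 0.01
```

A data-driven workflow would run this in reverse: record the measured step response and estimate K and τ (or a higher-order transfer function) from it, which is the spirit of the characterization the paper describes.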


Symmetry ◽  
2022 ◽  
Vol 14 (1) ◽  
pp. 119
Author(s):  
Simone G. Riva ◽  
Paolo Cazzaniga ◽  
Marco S. Nobile ◽  
Simone Spolaor ◽  
Leonardo Rundo ◽  
...  

Several software tools for the simulation and analysis of biochemical reaction networks have been developed in recent decades; however, assessing and comparing their computational performance on the typical tasks of computational systems biology can be limited by the lack of a standardized benchmarking approach. To overcome these limitations, we propose a novel tool, named SMGen, designed to automatically generate synthetic models of reaction networks that, by construction, are characterized by the relevant features (e.g., system connectivity and reaction discreteness) and non-trivial emergent dynamics of real biochemical networks. The generation of synthetic models in SMGen is based on the definition of an undirected graph consisting of a single connected component, which is generally a computationally demanding task; to speed up the overall process, SMGen exploits a main–worker paradigm. SMGen also provides a user-friendly graphical user interface, which allows the user to easily set all the parameters required to generate a set of synthetic models with any number of reactions and species. We analysed the computational performance of SMGen by generating batches of symmetric and asymmetric reaction-based models (RBMs) of increasing size, showing how different numbers of reactions and/or species affect the generation time. Our results show that when the number of reactions is higher than the number of species, SMGen has to identify and correct a large number of errors during the creation of the RBMs, a circumstance that increases the running time. Still, SMGen can generate synthetic models with hundreds of species and reactions in less than 7 s.
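The connectivity requirement the abstract mentions (an undirected graph with a single connected component) can be guaranteed by construction rather than by repair. A common sketch, not SMGen's actual algorithm, is to build a random spanning tree first and then sprinkle in extra edges:

```python
import random

def random_connected_graph(n_nodes, n_extra_edges, seed=42):
    """Undirected graph with exactly one connected component: a random
    spanning tree guarantees connectivity, then extra edges are added.
    Edges are frozensets so duplicates and orientation collapse."""
    rng = random.Random(seed)
    order = list(range(n_nodes))
    rng.shuffle(order)
    edges = set()
    for i in range(1, n_nodes):
        # attach node order[i] to a node that is already connected
        edges.add(frozenset((order[i], rng.choice(order[:i]))))
    while len(edges) < n_nodes - 1 + n_extra_edges:
        a, b = rng.sample(range(n_nodes), 2)
        edges.add(frozenset((a, b)))
    return edges

def is_connected(n_nodes, edges):
    """Depth-first search check that all nodes are reachable from node 0."""
    adj = {i: set() for i in range(n_nodes)}
    for e in edges:
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen) == n_nodes

g = random_connected_graph(50, 30)
assert is_connected(50, g)
```

In a reaction-network setting the nodes would stand for species and the edges for species co-occurring in a reaction; SMGen's real generator additionally enforces the stoichiometric and kinetic features described in the abstract.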


2022 ◽  
Author(s):  
Evangelos Pompodakis

In this manuscript, a novel Δ-circuit approach is proposed that enables the fast calculation of fault currents in large islanded AC microgrids (MGs) supplied by inverter-based distributed generators (IBDGs) with virtual impedance current limiters (VICLs). The concept of a virtual impedance for limiting the fault current of IBDGs has gained the interest of the research community in recent years due to the strong advantages it offers. Moreover, the Δ-circuit is an efficient approach that has been widely applied in the past for the calculation of short-circuit currents in transmission and distribution networks. However, the traditional Δ-circuit, in its current form, is not applicable to islanded MGs due to the particular characteristics of such networks, e.g., the absence of a slack bus. To overcome this issue, a novel Δ-circuit approach is proposed in this paper with the following distinct features: a) precise simulation of islanded MGs; b) fast computational performance; c) generic applicability to all types of faults, e.g., single-line, 2-line, or 3-line faults; d) simple extension to other DG current-limiting modes, e.g., the latched-limit strategy. The proposed approach is validated against time-domain simulations in MATLAB Simulink on a 9-bus and a 13-bus islanded MG. The computational performance of the proposed fault analysis method is further tested on a modified islanded version of the IEEE 8500-node network.
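The classical Δ-circuit principle that the paper extends is superposition: post-fault quantities equal the pre-fault state plus the response of a "delta" network in which all sources are shorted and the voltage change at the fault bus is applied. A toy two-bus example with a conventional source (so deliberately *not* the islanded, slack-bus-free case the paper addresses; all impedances are invented) verifies the identity numerically:

```python
# Toy two-bus Δ-circuit superposition check (illustrative values, p.u.)
E = 1.0 + 0j           # source EMF
Zs = 0.05 + 0.25j      # source branch impedance
Zload = 2.0 + 1.0j     # pre-fault load at the bus

# Pre-fault state: source feeds the load through Zs
I_pre = E / (Zs + Zload)
V_pre = E - I_pre * Zs          # pre-fault bus voltage

# Bolted fault forces the bus voltage to 0, so the delta voltage is -V_pre.
# In the delta network the source is shorted, so the delta branch current
# flowing from the (shorted) source toward the bus is V_pre / Zs.
dI = V_pre / Zs
I_post = I_pre + dI             # superposition of pre-fault and delta states

# Cross-check against solving the faulted network directly (bus shorted)
assert abs(I_post - E / Zs) < 1e-12
```

The paper's contribution is making this kind of decomposition work when every source is an inverter with a virtual-impedance limiter and no bus can serve as a slack reference.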




Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 348
Author(s):  
Francisco de Melo ◽  
Horácio C. Neto ◽  
Hugo Plácido da Silva

Biometric identification systems are a fundamental building block of modern security. However, conventional biometric methods cannot easily cope with their intrinsic security liabilities: they can be affected by environmental factors and can be easily “fooled” by artificial replicas, among other caveats. This has led researchers to explore other modalities, in particular those based on physiological signals. Electrocardiography (ECG) has seen growing interest, and many ECG-enabled security identification devices have been proposed in recent years, as ECG signals are a very appealing solution for today’s demanding security systems, mainly due to their intrinsic aliveness-detection advantages. These ECG-enabled devices often need to meet small size, low throughput, and power constraints (e.g., battery-powered operation), and thus need to be both resource- and energy-efficient. However, to date little attention has been given to computational performance, in particular when targeting deployment with edge processing on resource-limited devices. As such, this work proposes an implementation of an Artificial Intelligence (AI)-enabled ECG-based identification embedded system built around a RISC-V-based System-on-a-Chip (SoC). A Binary Convolutional Neural Network (BCNN) was implemented in our SoC’s hardware accelerator that, when compared to a software implementation of a conventional, non-binarized Convolutional Neural Network (CNN) version of our network, achieves a 176,270× speedup, arguably outperforming all current state-of-the-art CNN-based ECG identification methods.
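The speedup a BCNN accelerator achieves comes largely from replacing multiply-accumulate with bitwise operations: when weights and activations are constrained to {−1, +1}, a dot product reduces to XNOR plus popcount. This generic sketch (not the paper's network) verifies the identity on a small vector:

```python
def binarize(v):
    """Map real-valued weights/activations to {-1, +1}, as in a BCNN."""
    return [1 if x >= 0 else -1 for x in v]

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of two {-1,+1} vectors via XNOR + popcount: count the
    positions where the signs match (XNOR = 1), then rescale. This is the
    trick that lets BCNN hardware avoid multipliers entirely."""
    n = len(a_bits)
    matches = sum(1 for a, w in zip(a_bits, w_bits) if a == w)
    return 2 * matches - n

a = binarize([0.3, -1.2, 0.7, -0.1])
w = binarize([0.5, 0.9, -0.4, -0.8])
reference = sum(x * y for x, y in zip(a, w))   # ordinary dot product
assert xnor_popcount_dot(a, w) == reference
```

In hardware the {−1, +1} values are packed as single bits, so one XNOR gate and a popcount tree process an entire vector lane per cycle, which is where speedups of this magnitude over floating-point software come from.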


2022 ◽  
pp. 1287-1300
Author(s):  
Balaji Prabhu B. V. ◽  
M. Dakshayini

Demand forecasting plays an important role in the field of agriculture, where a farmer can plan crop production according to future demand and run a profitable crop business. Various statistical and machine learning methods exist for forecasting demand, so selecting the best forecasting model is desirable. In this work, a multiple linear regression (MLR) model and an artificial neural network (ANN) model have been implemented for forecasting the optimum societal demand for various food crops that are commonly used in day-to-day life. The models are implemented in R, using the linear model and neuralnet packages for training and optimization of the MLR and ANN models. The results obtained by the ANN were then compared with those obtained with the MLR models. The results indicated that the designed models are useful, reliable, and quite effective tools for optimizing demand prediction so as to control the supply of food harvests to match societal needs satisfactorily.
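As a minimal sketch of the regression half of this comparison (a one-feature special case of MLR, with an invented toy demand series rather than the paper's crop data), ordinary least squares can be solved in closed form and used to forecast the next period:

```python
def fit_ols(xs, ys):
    """Closed-form ordinary least squares for y = b0 + b1 * x,
    the one-feature analogue of an MLR fit (lm() in R)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

# Toy series: demand grows by ~2 units per period
periods = [1, 2, 3, 4, 5]
demand = [12, 14, 16, 18, 20]
b0, b1 = fit_ols(periods, demand)
forecast_next = b0 + b1 * 6      # forecast for period 6
assert abs(forecast_next - 22.0) < 1e-9
```

A full MLR would add further predictors (price, season, rainfall, and so on) as extra columns, and the ANN comparison in the paper replaces this linear map with a trained nonlinear one.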


Agronomy ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 97
Author(s):  
Liang Gong ◽  
Chenrui Yu ◽  
Ke Lin ◽  
Chengliang Liu

Powdery mildew is a common crop disease and one of the main diseases of cucumber in the middle and late stages of growth. It causes plant leaves to lose their photosynthetic function and reduces crop yield. The segmentation of powdery mildew spot areas on plant leaves is the key to disease detection and severity evaluation. Considering the need to identify powdery mildew conveniently in the field or to analyze it quantitatively in the lab, establishing a lightweight model for portable equipment is essential. In this study, the plant-leaf disease-area segmentation model was deliberately designed for portability, such as deployment in a smartphone or a tablet with constrained computational performance and memory size. First, we propose a super-pixel clustering segmentation operation to preprocess the images and reduce the pixel-level computation. Second, to enhance segmentation efficiency by leveraging a priori knowledge, a Gaussian Mixture Model (GMM) is established to model the different kinds of super-pixels in the images, namely healthy-leaf super-pixels, infected-leaf super-pixels, and the cluttered background; an Expectation–Maximization (EM) algorithm is then adopted to optimize the computational efficiency. Third, to eliminate the under-segmentation caused by the aforementioned clustering method, pixel-level expansion is used to capture the nature of leaf mildew distribution and thereby improve the segmentation accuracy. Finally, the pipeline was integrated into a lightweight powdery-mildew-spot-segmentation application that runs on Android devices, providing practitioners with a convenient way to analyze leaf diseases.
Experiments show that the proposed model runs easily on mobile devices, occupying only about 200 MB of memory and taking less than 3 s on a smartphone with a 1.2 GHz Cortex-A9 processor. Compared to traditional applications, the proposed method achieves a trade-off among powdery-mildew-area estimation accuracy, limited instrument resource occupation, and computational latency, meeting the demands of portable automated phenotyping.
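The GMM-plus-EM step above can be sketched in one dimension (think of each super-pixel reduced to a single intensity feature; the data, variances, and initial means below are invented, and the real pipeline would use multi-dimensional color features and three components rather than two):

```python
import math

def em_1d_gmm(data, init_means, iters=20, sigma=0.5):
    """A few EM iterations for a two-component 1-D Gaussian mixture with
    fixed, equal variances and weights: only the means are re-estimated.
    This mirrors, in miniature, the E/M steps used to label super-pixels."""
    m1, m2 = init_means
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r1 = []
        for x in data:
            p1 = math.exp(-((x - m1) ** 2) / (2 * sigma ** 2))
            p2 = math.exp(-((x - m2) ** 2) / (2 * sigma ** 2))
            r1.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted mean updates
        m1 = sum(r * x for r, x in zip(r1, data)) / sum(r1)
        m2 = (sum((1 - r) * x for r, x in zip(r1, data))
              / sum(1 - r for r in r1))
    return m1, m2

# Two well-separated "super-pixel feature" clusters around 1.0 and 5.0
features = [0.9, 1.1, 1.0, 0.8, 1.2, 4.9, 5.1, 5.0, 4.8, 5.2]
m1, m2 = em_1d_gmm(features, init_means=(0.0, 6.0))
assert abs(m1 - 1.0) < 0.05 and abs(m2 - 5.0) < 0.05
```

Each super-pixel is then assigned to the component with the highest responsibility, which is what turns the mixture fit into a healthy/infected/background labeling.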


Processes ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 79
Author(s):  
Shahab Golshan ◽  
Bruno Blais

In this research, we investigate the influence of the load-balancing strategy and its parametrization on the speed-up of discrete element method (DEM) simulations using Lethe-DEM. Lethe-DEM is an open-source DEM code that uses a cell-based load-balancing strategy. We compare the computational performance of different cell-weighting strategies based on the number of particles per cell (linear and quadratic). We observe two minima for the particle-to-cell weights (at 3 and 40 for quadratic, and 15 and 50 for linear). The first and second minima are attributed to the suitable distribution of cell-based and particle-based functions, respectively. We use four benchmark simulations (packing, rotating drum, silo, and V blender) to investigate the computational performance of different load-balancing schemes (namely, single-step, frequent, and dynamic). These benchmarks are chosen to demonstrate different scenarios that may occur in a DEM simulation. In a large-scale rotating drum simulation, representative of systems in which particles occupy a constant region after reaching steady state, single-step load-balancing shows the best performance. In a silo and a V blender, where particles move in one direction or have a reciprocating motion, the frequent and dynamic schemes are preferred. We propose an automatic (dynamic) load-balancing scheme that finds the best load-balancing steps according to the imbalance of computational load between the processes. Furthermore, we show the high computational performance of Lethe-DEM in the simulation of the packing of 10^8 particles on 4800 processes. We show that simulations with optimum load-balancing need ≈40% less time compared to simulations with no load-balancing.
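The linear versus quadratic cell-weighting choice can be sketched as follows (the weight function shape is the generic idea; the base cost and strategy names are illustrative, not Lethe-DEM's actual parameters):

```python
def cell_weight(n_particles, strategy, particle_weight, base_cost=1.0):
    """Load-balancing weight of one cell: a fixed per-cell cost plus a cost
    that grows linearly or quadratically with the particles it contains.
    The partitioner balances the sum of these weights across processes."""
    if strategy == "linear":
        return base_cost + particle_weight * n_particles
    if strategy == "quadratic":
        return base_cost + particle_weight * n_particles ** 2
    raise ValueError(f"unknown strategy: {strategy}")

# Four cells, one of them crowded: quadratic weighting makes the crowded
# cell dominate the total load far more strongly than linear weighting does.
particles_per_cell = [0, 1, 2, 10]
lin = [cell_weight(n, "linear", 1.0) for n in particles_per_cell]
quad = [cell_weight(n, "quadratic", 1.0) for n in particles_per_cell]
assert quad[-1] / sum(quad) > lin[-1] / sum(lin)
```

This is why the optimal particle-to-cell weight differs between the two strategies: the quadratic form better reflects contact-detection cost, which scales with particle pairs within a cell, while the linear form better reflects per-particle integration cost.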


2021 ◽  
Vol 17 (12) ◽  
pp. e1009295
Author(s):  
Lanxin Zhang ◽  
Junyu Wang ◽  
Max von Kleist

Pre-exposure prophylaxis (PrEP) is an important pillar of HIV transmission prevention. Because of experimental and clinical shortcomings, mathematical models that integrate pharmacological, viral, and host factors are frequently used to quantify the clinical efficacy of PrEP. Stochastic simulations of these models provide sample statistics from which the clinical efficacy is approximated. However, many stochastic simulations are needed to reduce the associated sampling error. To remedy these shortcomings, we developed a numerical method that predicts the efficacy of arbitrary prophylactic regimens directly from a viral dynamics model, without sampling. We apply the method to various hypothetical dolutegravir (DTG) prophylaxis scenarios and verify the approach against state-of-the-art stochastic simulation. The method is not only more accurate than stochastic simulation, but also superior in terms of computational performance: for example, a continuous 6-month prophylactic profile is computed within a few seconds on a laptop computer. The method’s computational performance therefore substantially expands the horizon of feasible analyses in the context of PrEP, and possibly other applications.
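The core advantage of a sampling-free method can be shown on the simplest possible stand-in for a viral dynamics model: a birth-death branching process, whose extinction probability (the analogue of "infection is prevented") satisfies a fixed-point equation that can be iterated deterministically, with no Monte Carlo error. The rates below are arbitrary illustrative values, and this toy process is far simpler than the paper's model:

```python
def extinction_probability(birth, death, tol=1e-12):
    """Deterministic fixed-point iteration for the extinction probability q
    of a birth-death branching process started from one individual:
        q = P(death first) + P(birth first) * q**2
    For birth > death the exact answer is death/birth; no stochastic
    sampling (and hence no sampling error) is involved."""
    p_death = death / (birth + death)
    p_birth = birth / (birth + death)
    q = 0.0
    while True:
        q_new = p_death + p_birth * q * q
        if abs(q_new - q) < tol:
            return q_new
        q = q_new

q = extinction_probability(birth=2.0, death=1.0)
assert abs(q - 0.5) < 1e-9   # matches the analytic value death/birth
```

Estimating the same probability by stochastic simulation would require on the order of 1/ε² runs for an error of ε, which is exactly the trade-off the paper's direct numerical method avoids.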

