physical models
Recently Published Documents


TOTAL DOCUMENTS: 3058 (FIVE YEARS: 902)
H-INDEX: 71 (FIVE YEARS: 12)

2022 ◽  
Vol 9 (2) ◽  
pp. 55-62
Author(s):  
Rahman et al.

With the advent of medical technology and science, the number of animals used in research has increased. For decades, the use of animals in research and product testing has been a point of conflict, and laboratory research by experts and pharmaceutical manufacturers continues to harm animals worldwide. Animals have also played a significant role in the advancement of science; animal testing has enabled the discovery of various novel drugs. Nevertheless, the misery, suffering, and deaths of animals are not worth the potential human benefits, and animals should therefore not be exploited in research to assess a drug's mechanism of action (MOA). Apart from the ethical concern, animal testing has further downsides, including the need for skilled labor, lengthy procedures, and high cost. Investigating adverse effects and toxicities is critical in the development of potentially viable drugs, yet assessing each target in animals consumes substantial resources and disturbs living nature, whereas a digital twin operates in an autonomous virtual environment without affecting the physical structure or the biological system. Our proposed framework suggests that the digital twin is a reliable model of the physical system that can assess the possible MOA ahead of time without harming animals. The study describes the creation of a digital twin that combines the information and knowledge obtained by studying different drug targets and diseases. Mechanism of Action using Digital Twin (MOA-DT) will enable experts to use an innovative approach without physical testing, saving animals, time, and resources. The digital twin reflects and simulates the actual drug and its interactions with its target, providing a more accurate depiction of the drug that helps maximize efficacy and decrease toxicity. In conclusion, combining the digital and physical models of a pharmaceutical can make drug discovery and development safe, effective, and economical in far less time than experiments on animals.
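The abstract does not specify the computational form of MOA-DT; as a purely illustrative sketch (the class name, parameters, and dose values are hypothetical, and the Hill-equation occupancy model is a generic stand-in rather than the authors' framework), a minimal "virtual assay" that is queried in place of a physical experiment might look like this:

```python
import numpy as np

class DrugTargetTwin:
    """Toy digital twin of a drug-target interaction (illustrative only).

    Receptor occupancy is modeled with the Hill equation; ec50 and the Hill
    coefficient would normally be fitted to in-vitro or literature data.
    """

    def __init__(self, ec50_nM: float, hill: float = 1.0):
        self.ec50 = ec50_nM
        self.hill = hill

    def occupancy(self, conc_nM) -> np.ndarray:
        """Fraction of target bound at the given drug concentrations."""
        c = np.asarray(conc_nM, dtype=float) ** self.hill
        return c / (c + self.ec50 ** self.hill)


# Query the virtual model instead of running a physical experiment.
twin = DrugTargetTwin(ec50_nM=25.0, hill=1.2)        # hypothetical parameters
doses = np.array([1.0, 10.0, 25.0, 100.0, 1000.0])   # nM
print(dict(zip(doses, twin.occupancy(doses).round(3))))
```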


Author(s):  
Pia Domschke ◽  
Oliver Kolb ◽  
Jens Lang

Abstract We are concerned with the simulation and optimization of large-scale gas pipeline systems in an error-controlled environment. The gas flow dynamics is locally approximated by sufficiently accurate physical models taken from a hierarchy of decreasing complexity and varying over time. Feasible operating regions of compressor stations consisting of several turbo compressors are included via semiconvex approximations of aggregated characteristic fields. A discrete adjoint approach within a first-discretize-then-optimize strategy is proposed, and a sequential quadratic programming method with an active set strategy is applied to solve the nonlinear constrained optimization problems arising from the validation of nominations. The method proposed here accelerates the computation of near-term forecasts of sudden changes in gas management and allows for economic control of intra-day gas flow schedules in large networks. Case studies for real gas pipeline systems show the remarkable performance of the new method.
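The paper's discrete-adjoint machinery and compressor models are not reproduced here; as a toy stand-in (the objective, constraints, and all numbers are hypothetical), the following sketch shows the kind of nonlinear constrained problem a sequential quadratic programming solver handles, using SciPy's SLSQP implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in: minimize compressor "power" subject to meeting a nominated
# flow and staying inside a feasible operating box (all values hypothetical).
def power(x):                       # x = [flow_1, flow_2] through two machines
    return 0.8 * x[0] ** 2 + 1.1 * x[1] ** 2

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 10.0},   # meet nomination
    {"type": "ineq", "fun": lambda x: 7.0 - x[0]},           # machine 1 limit
    {"type": "ineq", "fun": lambda x: 7.0 - x[1]},           # machine 2 limit
]

result = minimize(power, x0=np.array([5.0, 5.0]), method="SLSQP",
                  bounds=[(0.0, None), (0.0, None)], constraints=constraints)
print(result.x, result.fun)
```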


2022 ◽  
Author(s):  
Yifan Li ◽  
Yongyong Xiang ◽  
Baisong Pan ◽  
Luojie Shi

Abstract Accurate prediction of cutting tool remaining useful life (RUL) is significant for guaranteeing cutting quality and minimizing production cost. Recently, physics-based and data-driven methods have been widely used in tool RUL prediction. Physics-based approaches may not accurately describe the time-varying wear process due to a lack of knowledge of the underlying physics and the simplifications involved in physical models, while data-driven methods are easily affected by the quantity and quality of data. To overcome the drawbacks of these two approaches, a hybrid prognostics framework considering the tool wear state is developed to achieve accurate prediction. First, the mapping between the sensor signal and tool wear is established by support vector regression (SVR). Second, the tool wear states are recognized by a support vector machine (SVM) and the results are fed into a Bayesian framework as prior information. Third, based on the constructed Bayesian framework, the parameters of the tool wear model are updated iteratively by a sliding time window and a particle filter algorithm. Finally, the tool wear state and RUL are predicted using the updated tool wear model. The validity of the proposed method is demonstrated by a high-speed machine tool experiment. The results show that the presented approach can effectively reduce the uncertainty of tool wear state estimation and improve the accuracy of RUL prediction.
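A minimal sketch of the particle-filter update step described above, assuming a hypothetical exponential wear model w(t) = a·exp(b·t) and made-up wear observations (the paper's actual wear model, sensor signals, and SVR/SVM stages are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wear model: w(t) = a * exp(b * t); a and b are the parameters
# tracked by the particle filter and updated when a new wear estimate arrives.
N = 2000
particles = np.column_stack([rng.normal(0.05, 0.01, N),     # a
                             rng.normal(0.02, 0.005, N)])   # b
sigma_meas = 0.01                                            # measurement noise

def update(particles, t, w_obs):
    """One particle-filter step: weight by likelihood, then resample."""
    pred = particles[:, 0] * np.exp(particles[:, 1] * t)
    weights = np.exp(-0.5 * ((w_obs - pred) / sigma_meas) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    # Small jitter keeps the resampled cloud from collapsing.
    return particles[idx] + rng.normal(0, [1e-3, 1e-4], particles.shape)

for t, w_obs in [(10, 0.07), (20, 0.11), (30, 0.17)]:        # toy observations
    particles = update(particles, t, w_obs)

a_hat, b_hat = particles.mean(axis=0)
print(f"posterior means: a={a_hat:.3f}, b={b_hat:.4f}")
```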


Author(s):  
Maxim Ziatdinov ◽  
Ayana Ghosh ◽  
Sergei V Kalinin

Abstract Both experimental and computational methods for the exploration of structure, functionality, and properties of materials often necessitate searching across broad parameter spaces to discover optimal experimental conditions and regions of interest in the image space or parameter space of computational models. Direct grid search of the parameter space tends to be extremely time-consuming, leading to the development of strategies balancing exploration of unknown parameter spaces and exploitation towards required performance metrics. However, classical Bayesian optimization strategies based on the Gaussian process (GP) do not readily allow for the incorporation of known physical behaviors or prior knowledge. Here we explore a hybrid optimization/exploration algorithm created by augmenting the standard GP with a structured probabilistic model of the system's expected behavior. This approach balances the flexibility of the non-parametric GP with the rigid structure of physical knowledge encoded in the parametric model. The fully Bayesian treatment of the latter allows additional control over the optimization via the selection of priors for the model parameters. The method is demonstrated for a noisy version of the classical objective function used to evaluate optimization algorithms and further extended to physical lattice models. This methodology is expected to be universally suitable for injecting prior knowledge, in the form of physical models and past data, into the Bayesian optimization framework.
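The authors' fully Bayesian structured-GP implementation is not reproduced here; a minimal sketch of the underlying idea, with a hypothetical quadratic "physics" model serving as the structured part and a plain GP absorbing the residual behavior (all functions and numbers invented for illustration), could look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, length=0.5, var=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def physics_mean(x, theta):
    """Hypothetical parametric model of the expected behavior (quadratic)."""
    return theta[0] + theta[1] * x ** 2

# Noisy observations of an objective the physical model only partly explains.
x_train = np.linspace(-2, 2, 15)
y_train = (1.0 + 0.8 * x_train ** 2 + 0.3 * np.sin(4 * x_train)
           + rng.normal(0, 0.05, x_train.size))

# 1) Fit the structured (parametric) part, here by simple least squares;
#    the paper instead treats these parameters fully Bayesianly with priors.
A = np.column_stack([np.ones_like(x_train), x_train ** 2])
theta, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# 2) Put a GP on the residuals the physical model cannot capture.
noise = 0.05 ** 2
K = rbf(x_train, x_train) + noise * np.eye(x_train.size)
resid = y_train - physics_mean(x_train, theta)
alpha = np.linalg.solve(K, resid)

# 3) Posterior mean at test points = physics mean + GP correction.
x_test = np.linspace(-2, 2, 5)
mu = physics_mean(x_test, theta) + rbf(x_test, x_train) @ alpha
print(np.round(mu, 3))
```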


eLife ◽  
2022 ◽  
Vol 11 ◽  
Author(s):  
Catherine Stark ◽  
Teanna Bautista-Leung ◽  
Joanna Siegfried ◽  
Daniel Herschlag

Cold temperature is prevalent across the biosphere and slows the rates of chemical reactions. Increased catalysis has been predicted to be a dominant adaptive trait of enzymes to reduced temperature, and this expectation has informed physical models for enzyme catalysis and influenced bioprospecting strategies. To systematically test rate enhancement as an adaptive trait to cold, we paired kinetic constants of 2223 enzyme reactions with their organism’s optimal growth temperature (TGrowth) and analyzed trends of rate constants as a function of TGrowth. These data do not support a general increase in rate enhancement in cold adaptation. In the model enzyme ketosteroid isomerase (KSI), there is prior evidence for temperature adaptation from a change in an active site residue that results in a tradeoff between activity and stability. Nevertheless, we found that little of the rate constant variation for 20 KSI variants was accounted for by TGrowth. In contrast, and consistent with prior expectations, we observed a correlation between stability and TGrowth across 433 proteins. These results suggest that temperature exerts a weaker selection pressure on enzyme rate constants than stability and that evolutionary forces other than temperature are responsible for the majority of enzymatic rate constant variation.
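A minimal sketch of the kind of trend analysis described above, with made-up (kcat, TGrowth) pairs standing in for the 2223 curated enzyme reactions:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical pairing of enzyme rate constants (kcat, s^-1) with the optimal
# growth temperature of the source organism (TGrowth, °C).
t_growth = np.array([5, 10, 15, 25, 37, 45, 60, 75], dtype=float)
kcat = np.array([12.0, 8.5, 20.0, 15.0, 30.0, 18.0, 25.0, 22.0])

# Trend of log10(kcat) against TGrowth; a flat slope and low R^2 would argue
# against rate enhancement as a general cold-adaptation trait.
fit = linregress(t_growth, np.log10(kcat))
print(f"slope={fit.slope:.4f} per °C, R^2={fit.rvalue**2:.2f}, p={fit.pvalue:.2f}")
```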


2022 ◽  
Vol 15 (1) ◽  
pp. 173-197
Author(s):  
Manuel C. Almeida ◽  
Yurii Shevchuk ◽  
Georgiy Kirillin ◽  
Pedro M. M. Soares ◽  
Rita M. Cardoso ◽  
...  

Abstract. The complexity of state-of-the-art climate models requires high computational resources and imposes rather simplified parameterizations of inland waters. The effect of lakes and reservoirs on the local and regional climate is commonly parameterized in regional or global climate modeling as a function of surface water temperature estimated by atmosphere-coupled one-dimensional lake models. The latter typically neglect one of the major transport mechanisms specific to artificial reservoirs: heat and mass advection due to inflows and outflows. Incorporation of these essentially two-dimensional processes into lake parameterizations requires a trade-off between computational efficiency and physical soundness, which is addressed in this study. We evaluated the performance of the two most widely used lake parameterization schemes and a machine-learning approach on high-resolution historical water temperature records from 24 reservoirs. Simulations were also performed at both variable and constant water level to explore the differences in thermal structure between lakes and reservoirs. Our results highlight the need to include anthropogenic inflow and outflow controls in regional and global climate models. Our findings also highlight the efficiency of the machine-learning approach, which may outperform process-based physical models in both accuracy and computational requirements if applied to reservoirs with long-term observations available. Overall, the results suggest that the combined use of process-based physical models and machine-learning models will considerably improve the modeling of air–lake heat and moisture fluxes. A relationship between mean water retention times and the importance of inflows and outflows is established: reservoirs with a retention time shorter than ∼ 100 d, if simulated without inflow and outflow effects, tend to exhibit a statistically significant deviation in the computed surface temperatures regardless of their morphological characteristics.
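As a back-of-the-envelope illustration of the ∼100 d retention-time criterion reported above (the reservoir volumes and outflows below are hypothetical), mean retention time is simply storage volume divided by mean outflow:

```python
# Mean retention time = storage volume / mean outflow; reservoirs below the
# ~100-day threshold reported above would need explicit inflow/outflow terms.
SECONDS_PER_DAY = 86_400.0

reservoirs = {                      # hypothetical examples
    "res_A": {"volume_m3": 5.0e8, "outflow_m3_s": 120.0},
    "res_B": {"volume_m3": 2.0e9, "outflow_m3_s": 80.0},
}

for name, r in reservoirs.items():
    t_days = r["volume_m3"] / r["outflow_m3_s"] / SECONDS_PER_DAY
    flag = "include inflow/outflow" if t_days < 100 else "1-D model may suffice"
    print(f"{name}: retention ≈ {t_days:.0f} d -> {flag}")
```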


Author(s):  
Gemma Richardson ◽  
Alan W P Thomson

Probabilistic Hazard Assessment (PHA) provides an appropriate methodology for assessing space weather hazard and its impact on technology. PHA is widely used in the geosciences to determine the probability of exceedance of critical thresholds caused by one or more hazard sources. PHA has proved useful where there are limited historical data to estimate the likelihood of specific impacts. PHA has also driven the development of empirical and physical models, or ensembles of models, to replace measured data. Here we aim to highlight the PHA method to the space weather community and provide an example of how it could be used. In terms of space weather impact, the critical hazard thresholds might include the Geomagnetically Induced Current in a specific high voltage power transformer neutral, or the local pipe-to-soil potential in a particular metal pipe. We illustrate PHA in the space weather context by applying it to a twelve-year dataset of Earth-directed solar Coronal Mass Ejections (CMEs), which we relate to the probability that the global three-hourly geomagnetic activity index Kp exceeds specific thresholds. We call this a 'Probabilistic Geomagnetic Hazard Assessment', or PGHA. This provides a simple but concrete example of the method. We find that the cumulative probability of Kp > 6-, > 7-, > 8- and Kp = 9o is 0.359, 0.227, 0.090, and 0.011, respectively, following observation of an Earth-directed CME, summed over all CME launch speeds and solar source locations. This represents an order-of-magnitude increase over the a priori probability of exceeding these thresholds according to the historical Kp distribution. For the lower Kp thresholds, the results are somewhat distorted by our exclusion of coronal hole high-speed stream effects. The PGHA also reveals useful (for operational forecasters) probabilistic associations between solar source location and subsequent maximum Kp.
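A minimal sketch of the conditional exceedance calculation behind a PGHA, using an invented list of post-CME maximum Kp values in place of the twelve-year CME catalogue (the threshold values only approximate the Kp-in-thirds notation, e.g. 6- ≈ 5.67):

```python
import numpy as np

# Hypothetical event list: maximum Kp reached within some window after each
# observed Earth-directed CME.
kp_max_after_cme = np.array([4.3, 5.7, 6.3, 7.0, 3.7, 8.3, 5.0, 6.7, 9.0, 4.0])

# Conditional exceedance probabilities P(Kp >= threshold | Earth-directed CME).
thresholds = {"Kp >= 6-": 5.67, "Kp >= 7-": 6.67, "Kp >= 8-": 7.67, "Kp = 9o": 9.0}
for label, thr in thresholds.items():
    p = np.mean(kp_max_after_cme >= thr)
    print(f"P({label} | CME) = {p:.2f}")
```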


2022 ◽  
Author(s):  
Hessam Djavaherpour ◽  
Ali Mahdavi-Amiri ◽  
Faramarz Samavati

Geospatial datasets are too complex to easily visualize and understand on a computer screen. Combining digital fabrication with a discrete global grid system (DGGS) can produce physical models of the Earth for visualizing multiresolution geospatial datasets. The proposed approach includes a mechanism for attaching a set of 3D-printed segments to produce a scalable model of the Earth. The authors have produced two models that support the attachment of different datasets in both 2D and 3D formats.

