computational expense
Recently Published Documents


TOTAL DOCUMENTS: 51 (FIVE YEARS 14)
H-INDEX: 11 (FIVE YEARS 0)

2022 ◽  
Author(s):  
Sebastian Harry Reid Rosier ◽  
Christopher Bull ◽  
G. Hilmar Gudmundsson

Abstract. Through their role in buttressing upstream ice flow, Antarctic ice shelves play an important part in regulating future sea level change. Reduction in ice-shelf buttressing caused by increased ocean-induced melt along their undersides is now understood to be one of the key drivers of ice loss from the Antarctic Ice Sheet. However, despite the importance of this forcing mechanism, most ice-sheet simulations currently rely on simple melt parameterisations of this ocean-driven process, since a fully coupled ice-ocean modelling framework is prohibitively computationally expensive. Here, we provide an alternative approach that captures the greatly improved physical description of this process offered by large-scale ocean-circulation models, compared with currently employed melt parameterisations, but at trivial computational expense. We introduce a new approach that brings together deep learning and physical modelling in a deep neural network framework, MELTNET, that can emulate ocean model predictions of sub-ice-shelf melt rates. We train MELTNET on synthetic geometries, using the NEMO ocean model as ground truth in lieu of observations to provide melt rates both for training and for evaluating the performance of the trained network. We show that MELTNET can accurately predict melt rates for a wide range of complex synthetic geometries and outperforms more traditional parameterisations for > 95 % of the geometries tested. Furthermore, we find that MELTNET's melt rate estimates are sensitive to established physical relationships such as changes in thermal forcing and ice shelf slope. This study demonstrates the potential for a deep learning framework to calculate melt rates at almost no computational expense, which could in future be used in conjunction with an ice sheet model to provide melt rate predictions for large-scale ice sheet simulations.
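As an illustration of the emulation idea only (the published MELTNET architecture and training setup are not reproduced here), a minimal sketch of a convolutional network that maps gridded ice-shelf geometry and forcing fields to a melt-rate field, trained against ocean-model output used as ground truth, might look as follows; layer sizes and input channels are assumptions.

```python
# Minimal sketch (not the authors' MELTNET code): a convolutional emulator that
# maps gridded ice-shelf geometry/forcing fields to a melt-rate field, trained
# against ocean-model output used as ground truth. Architecture is illustrative.
import torch
import torch.nn as nn

class MeltEmulator(nn.Module):
    def __init__(self, in_channels=3):  # e.g. ice draft, bathymetry, thermal forcing
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # predicted melt rate per grid cell
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=10, lr=1e-3):
    """loader yields (geometry, melt_truth) pairs; melt_truth = ocean-model melt rates."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for geometry, melt_truth in loader:
            opt.zero_grad()
            loss = loss_fn(model(geometry), melt_truth)
            loss.backward()
            opt.step()
    return model
```

Once trained, a single forward pass replaces an ocean-model simulation, which is where the near-zero inference cost comes from.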


Erkenntnis ◽  
2021 ◽  
Author(s):  
Helen Meskhidze

Abstract. The increasing precision of observations of the large-scale structure of the universe has created a problem for simulators: running the simulations necessary to interpret these observations has become impractical. Simulators have thus turned to machine learning (ML) algorithms instead. Though ML decreases computational expense, one might worry about its use for scientific investigations: how can algorithms that have repeatedly been described as black boxes deliver scientific understanding? In this paper, I investigate how cosmologists employ ML, arguing that in this context, ML algorithms should not be considered black boxes and can deliver genuine scientific understanding. Accordingly, understanding the methodological role of ML algorithms is crucial to understanding the types of questions they are capable of, and ought to be responsible for, answering.


2021 ◽  
pp. 0272989X2110163
Author(s):  
Tiago M. de Carvalho ◽  
Joost van Rosmalen ◽  
Harold B. Wolff ◽  
Hendrik Koffijberg ◽  
Veerle M. H. Coupé

Background: Metamodeling may substantially reduce the computational expense of individual-level state transition simulation models (IL-STM) for calibration, uncertainty quantification, and health policy evaluation. However, because of the lack of guidance and readily available computer code, metamodels are still not widely used in health economics and public health. In this study, we provide guidance on how to choose a metamodel for uncertainty quantification. Methods: We built a simulation study to evaluate the prediction accuracy and computational expense of metamodels for uncertainty quantification, using life-years gained (LYG) by treatment as the IL-STM outcome. We analyzed how metamodel accuracy changes with the characteristics of the simulation model using a linear model (LM), Gaussian process regression (GP), generalized additive models (GAMs), and artificial neural networks (ANNs). Finally, we tested these metamodels in a case study consisting of a probabilistic analysis of a lung cancer IL-STM. Results: In a scenario with low uncertainty in model parameters (i.e., small confidence intervals) and sufficient numbers of simulated life histories and simulation model runs, the commonly used metamodels (LM, ANNs, GAMs, and GP) have similarly good accuracy, with errors smaller than 1% when predicting LYG. With a higher level of uncertainty in model parameters, the prediction accuracy of GP and ANNs is superior to that of LM. In the case study, the best metamodel had a worst-case error of about 2.1%. Conclusion: To obtain good prediction accuracy efficiently, we recommend starting with LM; if the resulting accuracy is insufficient, we recommend trying ANNs and eventually also GP regression.
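A minimal sketch of this kind of workflow, assuming scikit-learn and illustrative data shapes (this is not the authors' code): fit candidate metamodels to pairs of sampled parameters and simulated LYG and compare held-out error before committing to a more flexible model. GAMs are omitted because they are not part of scikit-learn.

```python
# Sketch: fit increasingly flexible metamodels to (parameter sample -> LYG) pairs
# from an IL-STM and compare held-out prediction error, starting with a linear model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

def compare_metamodels(theta, lyg):
    """theta: (n_samples, n_params) model parameters; lyg: simulated life-years gained."""
    X_tr, X_te, y_tr, y_te = train_test_split(theta, lyg, test_size=0.25, random_state=0)
    candidates = {
        "LM": LinearRegression(),
        "GP": GaussianProcessRegressor(normalize_y=True),
        "ANN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
    }
    errors = {}
    for name, model in candidates.items():
        model.fit(X_tr, y_tr)
        errors[name] = mean_absolute_percentage_error(y_te, model.predict(X_te))
    return errors  # pick the simplest metamodel whose error is acceptable
```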


2021 ◽  
Vol 7 (5) ◽  
pp. 77
Author(s):  
Wesley T. Honeycutt ◽  
Eli S. Bridge

Few object detection methods exist that can resolve small objects (<20 pixels) against complex static backgrounds without significant computational expense. A framework capable of meeting these needs, which reverses the steps of classic edge detection methods built around the Canny filter, is presented here. Sample images taken from sequential frames of video footage were processed by subtraction, thresholding, Sobel edge detection, Gaussian blurring, and Zhang–Suen edge thinning to identify objects that have moved between the two frames. The results of this method show distinct contours suitable for object tracking algorithms with minimal "false positive" noise. This framework may be used with other edge detection methods to produce robust, low-overhead object tracking methods.
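A hedged sketch of the described pipeline, assuming OpenCV and scikit-image; thresholds and kernel sizes are illustrative rather than the published values.

```python
# Sketch of the frame-differencing pipeline described above.
import cv2
import numpy as np
from skimage.morphology import skeletonize  # Zhang-style thinning by default

def moving_object_contours(frame_a, frame_b, diff_thresh=25):
    """frame_a, frame_b: consecutive greyscale frames as uint8 arrays."""
    diff = cv2.absdiff(frame_a, frame_b)                                # 1. subtraction
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)  # 2. thresholding
    sx = cv2.Sobel(mask, cv2.CV_64F, 1, 0, ksize=3)                     # 3. Sobel edges
    sy = cv2.Sobel(mask, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.magnitude(sx, sy)
    edges = cv2.GaussianBlur(edges, (5, 5), 0)                          # 4. Gaussian blur
    binary = edges > edges.mean()                                       # re-binarise before thinning
    thin = skeletonize(binary)                                          # 5. edge thinning
    return thin.astype(np.uint8) * 255                                  # contours for tracking
```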


2021 ◽  
Author(s):  
Everett Snieder ◽  
Usman Khan

Semi-distributed rainfall-runoff models are widely used in hydrology, offering a compromise between the computational efficiency of lumped models and the representation of spatial heterogeneity offered by fully distributed models. In semi-distributed models, the catchment is divided into subcatchments, which are used as the basis for aggregating spatial characteristics. During model development, uncertainty is usually estimated from the literature; however, subcatchment uncertainty is closely related to subcatchment size and the level of spatial heterogeneity. Currently, there is no widely accepted systematic method for determining subcatchment size; typically, subcatchment discretisation is a function of the spatiotemporal resolution of the available data. In our research, we evaluate the relationship between lumped parameter uncertainty and subcatchment size. Models with small subcatchments are expected to have low spatial uncertainty, as the spatial heterogeneity per subcatchment is also low; as subcatchment size increases, so does spatial uncertainty. Our objectives are to study the trade-off between subcatchment size, parameter uncertainty, and computational expense, and to outline a systematic and precise framework for subcatchment discretisation. A proof of concept is presented using the Stormwater Management Model (EPA-SWMM) platform to study a semi-urban catchment in Southwestern Ontario, Canada. Automated model creation is used to build catchment models with varying subcatchment sizes. For each model variation, uncertainty is estimated using spatial statistical bootstrapping; applying bootstrapping to the spatial parameters directly provides a model-free method for calculating the uncertainty of sample estimates. A Monte Carlo simulation is used to propagate uncertainty through the model, and spatial resolution is assessed using performance criteria including the percentage of observations captured by the uncertainty envelope, the mean uncertainty envelope width, and rank histograms. The computational expense of the simulations is tracked across the varying spatial resolutions achieved through subcatchment discretisation. Initial results suggest that uncertainty estimates often disagree with typical values listed in the literature and vary significantly with subcatchment size; this has significant implications for model calibration.
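A minimal sketch of the spatial bootstrapping and uncertainty propagation step, assuming NumPy; `run_swmm` is a hypothetical stand-in for a call into the EPA-SWMM model, and the resampling settings are illustrative.

```python
# Sketch: bootstrap a lumped subcatchment parameter (e.g. percent imperviousness)
# from fine-scale spatial samples, then propagate it through the model.
import numpy as np

def bootstrap_lumped_parameter(cell_values, n_boot=1000, rng=None):
    """cell_values: fine-scale samples (e.g. land-cover cells) within one subcatchment.
    Returns bootstrap replicates of the lumped (mean) parameter value."""
    rng = rng or np.random.default_rng(0)
    n = len(cell_values)
    idx = rng.integers(0, n, size=(n_boot, n))        # resample cells with replacement
    return np.asarray(cell_values)[idx].mean(axis=1)

def monte_carlo_uncertainty(cell_values, run_swmm, n_runs=200):
    """Propagate bootstrapped parameter uncertainty through the rainfall-runoff model."""
    replicates = bootstrap_lumped_parameter(cell_values, n_boot=n_runs)
    flows = np.asarray([run_swmm(imperviousness=p) for p in replicates])  # one run per draw
    return flows.mean(axis=0), np.percentile(flows, [5, 95], axis=0)      # uncertainty envelope
```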


2021 ◽  
Author(s):  
Andrew Curtis ◽  
Xuebin Zhao ◽  
Xin Zhang

The ultimate goal of a geophysical investigation is usually to find answers to scientific (often low-dimensional) questions: how large is a subsurface body? How deeply does the lithosphere subduct? Does a certain subsurface feature exist? Background research reviews existing information, an experiment is designed and performed to acquire new data, and the most likely answer is estimated. Typically the answer is interpreted from geophysical inversions, but it is usually biased, because only one particular forward function (model-data relationship) is considered, only one inversion method is used, and human interpretation is itself a biased process. Interrogation theory provides a systematic way to answer specific questions. Answers balance information from multiple forward models, inverse methods and model parametrizations probabilistically, and optimal answers are found using decision theory.

Two examples illustrate interrogation of the Earth's subsurface. In a synthetic test, we estimate the cross-sectional area of a subsurface low-velocity anomaly by interrogating Bayesian probabilistic tomographic maps. By combining the results of four different nonlinear inversion algorithms, the optimal answer is found to be very close to the true answer. In a field data application, we evaluate the extent of the Irish Sea Sedimentary Basin based on the uncertainties in velocity structure derived from Love wave tomography. This example shows that the computational expense of estimating uncertainties adds explicit value to answers.

This study demonstrates that interrogation theory can answer realistic questions about the Earth's subsurface. The same theory can be used to solve different types of scientific problem (experimental design, interpreting models, expert elicitation and risk estimation) and can be applied in any field of science. One of its most important contributions is to show that fully nonlinear estimates of uncertainty are critical for decision-making in real-world geoscientific problems, potentially justifying their computational expense.
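As a toy illustration of the decision-theoretic step only (not the authors' implementation), one might pool posterior samples of the target quantity, e.g. the anomaly's cross-sectional area, from several inversion algorithms and report the optimum under squared-error loss; names and weights below are illustrative.

```python
# Sketch: combine posterior samples from several inversion methods and return the
# decision-theoretic optimal answer under squared-error loss (the mixture mean).
import numpy as np

def optimal_answer(samples_by_method, weights=None):
    """samples_by_method: dict mapping method name -> 1-D array of posterior samples
    of the target quantity. weights: optional prior credence in each method."""
    methods = list(samples_by_method)
    weights = np.ones(len(methods)) if weights is None else np.asarray(weights, float)
    weights = weights / weights.sum()
    means = np.array([samples_by_method[m].mean() for m in methods])
    answer = float(weights @ means)                  # optimal under squared-error loss
    pooled = np.concatenate([samples_by_method[m] for m in methods])
    # Pooled credible interval (assumes roughly equal weights and sample counts)
    return answer, np.percentile(pooled, [2.5, 97.5])
```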


Author(s):  
Ryan Murphy ◽  
Chikwesiri Imediegwu ◽  
Robert Hewson ◽  
Matthew Santer

Abstract. A robust three-dimensional multiscale structural optimization framework with concurrent coupling between scales is presented. Concurrent coupling ensures that only the microscale data required to evaluate the macroscale model during each iteration of optimization are collected, which results in considerable computational savings. This is the principal novelty of the framework and permits a previously intractable number of design variables to be used in the parametrization of the microscale geometry, which in turn gives access to a greater range of extremal point properties during optimization. Additionally, the microscale data collected during optimization are stored in a reusable database, further reducing the computational expense of optimization. Applying this methodology yields structures with precise functionally graded mechanical properties over two scales that satisfy one or multiple functional objectives. Two classical compliance minimization problems are solved in this paper and benchmarked against a Solid Isotropic Material with Penalization (SIMP)-based topology optimization. Only a small fraction of the microstructure database is required to derive the optimized multiscale solutions, demonstrating a significant reduction in the computational expense of optimization in comparison with contemporary sequential frameworks. In addition, both cases show a significant reduction in the compliance functional in comparison with the equivalent SIMP-based optimizations.
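A minimal sketch of the reusable-database idea only, with hypothetical names (`homogenize` stands in for the expensive microscale analysis; the rounding precision is an assumed tuning knob, not a value from the paper).

```python
# Sketch: cache homogenized microscale properties keyed by rounded design variables,
# so the expensive microscale solve runs only for designs not yet seen.
import numpy as np

class MicrostructureDatabase:
    def __init__(self, homogenize, precision=3):
        self.homogenize = homogenize   # expensive microscale solve (user supplied)
        self.precision = precision     # key rounding trades reuse against accuracy
        self.cache = {}

    def properties(self, design_vars):
        key = tuple(np.round(design_vars, self.precision))
        if key not in self.cache:                   # concurrent coupling: evaluate only
            self.cache[key] = self.homogenize(key)  # what the macroscale model requests
        return self.cache[key]

# During optimization, each macroscale element queries db.properties(x_e); only a
# small fraction of all possible microscale designs is ever evaluated and stored.
```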


2020 ◽  
Vol 10 (1) ◽  
pp. 431-443
Author(s):  
Rachna Jain ◽  
Anand Nayyar

Abstract. In the cloud computing paradigm, user and organisational data are stored remotely on cloud servers. Users and organisations can access applications, services, and infrastructure on demand from a cloud server over the internet. Notwithstanding these many advantages, numerous challenges and issues persist in securing cloud data access and storage. These challenges have raised additional security and privacy concerns, because cloud service providers are solely responsible for storing and processing an organisation's data outside its physical boundaries. A robust security scheme is therefore required to keep an organisation's sensitive data protected and out of reach of attackers. Researchers around the world have proposed varied security frameworks with different sets of security criteria and varying computational expense. Practical implementation of these frameworks at low computational cost remains a difficult challenge, as the security criteria have not been standardised. Methodology – To secure the cloud and address all security criteria, we propose a REGISTRATION AUTHENTICATION STORAGE DATA ACCESS (RASD) framework that protects organisational data stored in cloud directories using a novel security scheme, HEETPS. The RASD framework proceeds in stages: registration of users, authentication of the user password, storage of data, and data access in the cloud directory. When the framework is applied to cloud servers, all sensitive data stored on the cloud become available only to authenticated users. The principal advantages of the proposed RASD framework are its simple implementation, high security, and comparatively low computational expense. Moreover, we propose a homomorphic, private, and practical equality-testing-based scheme structured around the algorithm Che Aet DPs. This algorithm performs homomorphic encryption with subtractive equality testing at low computational complexity. To test its security capability, we compared the proposed RASD framework with existing protocols such as a privacy-preserving auditing protocol, a batch auditing protocol, a secure network coding protocol, and an encrypted data processing with homomorphic re-encryption protocol. Findings – Experimental results demonstrate that the RASD framework not only provides a strong security layer for sensitive data but also reduces computational expense and performs better than existing protocols for cloud computing.
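As a toy illustration of the subtractive equality-testing primitive mentioned above (not the proposed Che Aet DPs scheme), assuming the python-paillier (`phe`) library:

```python
# Sketch: equality testing on ciphertexts via homomorphic subtraction with Paillier.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def encrypted_equal(enc_a, enc_b, private_key):
    """Check a == b given only their ciphertexts, by decrypting the difference."""
    enc_diff = enc_a - enc_b            # homomorphic subtraction on ciphertexts
    return private_key.decrypt(enc_diff) == 0

enc_a = public_key.encrypt(42)
enc_b = public_key.encrypt(42)
print(encrypted_equal(enc_a, enc_b, private_key))   # True: the plaintexts match
```

In a deployed protocol the difference would normally be blinded before decryption so that the key holder learns only whether it is zero, not the actual gap between the values.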


An important first step for video analysis is background subtraction, which is used to locate the objects of interest for further processing. The background subtraction approach is a general method for motion detection that efficiently uses the difference between the current image and a background image to detect moving objects. The proposed algorithm is a Mixture of Gaussians (MOG) process, a quality-analysis algorithm for images that can be applied to videos and individual frames. The method is used together with a Kalman filter for frame-by-frame detection. The MOG is then used to automatically estimate the number of mixture components required to model each pixel's background colour distribution. The method performs background suppression for static and dynamic background images without using any reference background image, and it also suppresses noise and shadows in the background image. The Kalman filter is characterised by low computational expense and relies on a robust statistical model. Finally, a segmented foreground image is obtained with satisfactory performance. The key to this method is the initialisation and updating of the background image and the detection of moving objects, which is also accurate.
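A minimal sketch of this kind of MOG-plus-Kalman pipeline, assuming OpenCV; parameters are illustrative and this is not the paper's implementation.

```python
# Sketch: Mixture-of-Gaussians background model for moving-object masks plus a
# constant-velocity Kalman filter tracking the largest detection frame by frame.
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)

kf = cv2.KalmanFilter(4, 2)             # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

def track(frame):
    """Return the filtered (x, y) of the largest moving object in this frame."""
    mask = bg.apply(frame)                                        # foreground mask (shadows = 127)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]    # keep confident foreground only
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    estimate = kf.predict()                                       # time update (constant velocity)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        estimate = kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
    return float(estimate[0, 0]), float(estimate[1, 0])
```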


2020 ◽  
Author(s):  
Louis Israel

Particle flow (PF) is a technique originally proposed for single-target tracking and recently used to address the weight degeneracy problem of the sequential Monte Carlo probability hypothesis density (SMC-PHD) filter for audio-visual (AV) multi-speaker tracking, where the particle flow is calculated using only the measurements near each particle, assuming that the target has been detected, as in a recent baseline based on non-zero particle flow, the AV-NPF-SMC-PHD filter. This, however, can be problematic when occlusion occurs and the occluded speaker may not be detected. To address this issue, we propose a new method in which the labels of the particles are estimated using the likelihood function, and the particle flow is calculated in terms of the selected particles with the same labels. Consequently, the particles associated with detected speakers and undetected speakers are distinguished based on the particle labels. With this novel method, named AV-LPF-SMC-PHD, the speaker states can be estimated as the weighted mean of the labelled particles, which is computationally more efficient than using a clustering method as in the AV-NPF-SMC-PHD filter. The proposed algorithm is compared systematically with several baseline tracking methods using the AV16.3, AVDIAR and CLEAR datasets, and is shown to offer improved tracking accuracy at a low computational expense.
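A toy sketch of the particle-labelling idea only (not the full AV-LPF-SMC-PHD filter), assuming NumPy and an illustrative per-speaker likelihood matrix: each particle takes the label of the speaker whose likelihood it maximises, and each speaker state is the weighted mean of its labelled particles, avoiding a separate clustering step.

```python
# Sketch: label particles by maximum per-speaker likelihood and estimate each
# speaker state as the weighted mean of its labelled particles.
import numpy as np

def label_particles(particles, weights, likelihoods):
    """particles: (N, d) states; weights: (N,) particle weights; likelihoods: (N, K)
    per-speaker likelihood of each particle. Returns per-speaker state estimates."""
    labels = likelihoods.argmax(axis=1)            # label = most likely speaker
    estimates = {}
    for k in range(likelihoods.shape[1]):
        sel = labels == k
        if not np.any(sel):
            continue                               # speaker undetected this frame
        w = weights[sel] / weights[sel].sum()
        estimates[k] = w @ particles[sel]          # weighted mean of labelled particles
    return estimates
```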

