Nesting Monte Carlo for high-dimensional non-linear PDEs

2018
Vol 24 (4)
pp. 225-247
Author(s):
Xavier Warin

Abstract A new method based on nesting Monte Carlo is developed to solve high-dimensional semi-linear PDEs. Depending on the type of non-linearity, different schemes are proposed and theoretically studied: variance errors are given, and it is shown that the bias of the schemes can be controlled. The limitation of the method is that the maturity or the Lipschitz constants of the non-linearity should not be too high, in order to avoid an explosion of the computational time. Many numerical results are given in high dimension for cases where analytical solutions are available or where some solutions can be computed by deep-learning methods.
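
A minimal sketch of the nesting idea, assuming a toy semi-linear heat equation with terminal condition u(T, ·) = g and estimating the Feynman–Kac source term by an inner Monte Carlo of capped depth; the drift-free dynamics, the functions g and f, and the sample-size schedule are illustrative assumptions, not the schemes analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_u(t, x, T, g, f, sigma, depth, n_samples):
    """Crude nested Monte Carlo estimate of u(t, x) for
    du/dt + 0.5*sigma^2*Laplacian(u) + f(u) = 0, u(T, .) = g."""
    if depth == 0:
        # Deepest level: drop the nonlinearity and keep only the terminal term.
        w = rng.standard_normal((n_samples, x.size))
        return float(np.mean(g(x + sigma * np.sqrt(T - t) * w)))
    total = 0.0
    for _ in range(n_samples):
        w_T = rng.standard_normal(x.size)
        x_T = x + sigma * np.sqrt(T - t) * w_T           # Brownian step to maturity
        s = rng.uniform(t, T)                            # random time for the source integral
        w_s = rng.standard_normal(x.size)
        x_s = x + sigma * np.sqrt(s - t) * w_s           # Brownian step to the intermediate time
        u_inner = nested_u(s, x_s, T, g, f, sigma, depth - 1, max(n_samples // 2, 2))
        total += g(x_T) + (T - t) * f(u_inner)           # unbiased estimate of the time integral
    return total / n_samples

# Toy example in dimension 10; g is vectorized over the last axis.
g = lambda y: np.sum(np.asarray(y) ** 2, axis=-1)
print(nested_u(0.0, np.zeros(10), 1.0, g, lambda u: 0.1 * np.cos(u), 0.3, depth=2, n_samples=50))
```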

Entropy
2021
Vol 23 (2)
pp. 223
Author(s):
Yen-Ling Tai
Shin-Jhe Huang
Chien-Chang Chen
Henry Horng-Shing Lu

Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would probably hinder the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a non-interacting physical system and treat image voxels as particle-like clusters. We then recast the Fermi–Dirac distribution as a correction function for normalizing voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for algorithmic validation, and the proposed Fermi–Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization and the gamma correction function, the proposed algorithm saves at least 38% of the computational time under a low-cost hardware architecture. Although global histogram equalization has the lowest computational time among the correction functions employed, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
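
As a rough illustration of the preprocessing idea (not the authors' implementation), voxel intensities can be passed through a Fermi–Dirac-shaped curve that suppresses low-intensity components; the parameters mu and kT below are hypothetical:

```python
import numpy as np

def fermi_dirac_correction(volume, mu=None, kT=0.05):
    """Squash voxel intensities through a Fermi-Dirac-shaped curve.

    Intensities well below `mu` are pushed toward 0 (filtered out),
    intensities above `mu` toward 1.  `mu` and `kT` are illustrative
    parameters, not the values used in the paper.
    """
    v = volume.astype(np.float64)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)   # rescale intensities to [0, 1]
    if mu is None:
        mu = float(np.median(v[v > 0]))               # place the step near the foreground median
    return 1.0 / (1.0 + np.exp((mu - v) / kT))

# Toy volume: mostly background with one bright region.
vol = np.zeros((64, 64, 64))
vol[20:40, 20:40, 20:40] = np.random.rand(20, 20, 20)
corrected = fermi_dirac_correction(vol)
```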


2021
Author(s):
R. Priyadarshini
K. Anuratha
N. Rajendran
S. Sujeetha

An anomaly is an uncommon observation; it represents an outlier, i.e., a nonconforming case. According to the Oxford Dictionary of Mathematics, an anomaly is an unusual and erroneous observation that does not follow the general pattern of the drawn population. Anomaly detection is a data-mining process that aims to find data points or patterns that do not conform to the overall pattern of the data. The behavior and impact of anomalies have been studied in areas such as network security, finance, healthcare, and earth sciences. The proper detection and prediction of anomalies are of great importance, as these rare observations may carry significant information. In today's financial world, enterprise data are digitized and stored in the cloud, so there is a significant need to detect anomalies in financial data to help enterprises deal with the huge volume of auditing. Corporations and enterprises conduct audits on large numbers of ledgers and journal entries, and the monitoring of such audits is usually performed manually. Proper anomaly detection is therefore needed for the high-dimensional data published in ledger format for auditing purposes. This work aims at analyzing and predicting unusual, fraudulent financial transactions by employing several machine learning and deep learning methods. If an anomaly such as manipulation or tampering of data is detected, such anomalies and errors can be identified and marked with proper evidence with the help of machine-learning-based algorithms. The accuracy of prediction is increased by 7% by implementing the proposed prediction models.
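
A minimal sketch of this kind of unsupervised screening, assuming an Isolation Forest over synthetic journal-entry features; the feature encoding, contamination rate, and data are stand-ins rather than the models evaluated in this work:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 12))        # stand-in for encoded ledger / journal-entry features
X[:50] += 6.0                            # inject a handful of artificial outliers

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)                 # -1 marks suspected anomalous entries
print("flagged entries:", int((flags == -1).sum()))
```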


2016
Vol 14 (1)
pp. 64-75
Author(s):
Zhuoxi Yu
YuJia Jin
Milan Parmar
Limin Wang

In the era of the network economy, the operational efficiency of e-commerce sites is closely related to the development of enterprises, so how to evaluate e-commerce sites has become a hot topic. Because the evaluation indices of e-commerce sites are high-dimensional and inhomogeneous, the new method combines PCA with an improved OPTICS algorithm to classify and evaluate e-commerce demonstration enterprise websites. First, PCA is used to reduce the dimensionality of the high-dimensional data. Second, to address the limitation of the OPTICS algorithm in dealing with sparse points, the improved OPTICS algorithm is used to cluster the low-dimensional data, evaluate the effectiveness of the e-commerce sites, and make suggestions.
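
A sketch of the two-step pipeline using standard scikit-learn components (plain OPTICS rather than the paper's improved variant); the indicator matrix, number of components, and min_samples value are illustrative assumptions:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
# Synthetic stand-in: three groups of sites described by 30 evaluation indicators.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(60, 30)) for c in (0.0, 2.0, 4.0)])

X_std = StandardScaler().fit_transform(X)             # standardize the indicators
X_low = PCA(n_components=5).fit_transform(X_std)      # step 1: dimensionality reduction
labels = OPTICS(min_samples=10).fit_predict(X_low)    # step 2: density-based clustering
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
```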


Geophysics
2019
Vol 84 (4)
pp. R583-R599
Author(s):
Fangshu Yang
Jianwei Ma

Seismic velocity is one of the most important parameters used in seismic exploration. Accurate velocity models are the key prerequisites for reverse time migration and other high-resolution seismic imaging techniques. Such velocity information has traditionally been derived by tomography or full-waveform inversion (FWI), which are time consuming and computationally expensive, and they rely heavily on human interaction and quality control. We have investigated a novel method based on the supervised deep fully convolutional neural network for velocity-model building directly from raw seismograms. Unlike the conventional inversion method based on physical models, supervised deep-learning methods are based on big-data training rather than prior-knowledge assumptions. During the training stage, the network establishes a nonlinear projection from the multishot seismic data to the corresponding velocity models. During the prediction stage, the trained network can be used to estimate the velocity models from the new input seismic data. One key characteristic of the deep-learning method is that it can automatically extract multilayer useful features without the need for human-curated activities and an initial velocity setup. The data-driven method usually requires more time during the training stage, and actual predictions take less time, with only seconds needed. Therefore, the computational time of geophysical inversions, including real-time inversions, can be dramatically reduced once a good generalized network is built. By using numerical experiments on synthetic models, the promising performance of our proposed method is shown in comparison with conventional FWI even when the input data are in more realistic scenarios. We have also evaluated deep-learning methods, the training data set, the lack of low frequencies, and the advantages and disadvantages of our method.
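
A heavily simplified PyTorch sketch of the supervised mapping from multishot seismic data to a velocity section; the channel layout, image sizes, network depth, and mean-squared-error objective are assumptions for illustration, far smaller than the architecture used in the paper:

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy fully convolutional encoder-decoder: shot gathers in, velocity section out."""
    def __init__(self, n_shots=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_shots, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, shots):
        return self.decoder(self.encoder(shots))

net = VelocityNet()
shots = torch.randn(4, 8, 128, 128)                    # synthetic multishot gathers
target = torch.randn(4, 1, 128, 128)                   # synthetic velocity models
loss = nn.functional.mse_loss(net(shots), target)      # supervised training objective
loss.backward()
```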


2017
Vol 8 (4)
pp. 379-386
Author(s):
Alexander M. Schoemann
Aaron J. Boulton
Stephen D. Short

Mediation analyses abound in social and personality psychology. Current recommendations for assessing power and sample size in mediation models include using a Monte Carlo power analysis simulation and testing the indirect effect with a bootstrapped confidence interval. Unfortunately, these methods have rarely been adopted by researchers due to limited software options and the computational time needed. We propose a new method and convenient tools for determining sample size and power in mediation models. We demonstrate our new method through an easy-to-use application that implements the method. These developments will allow researchers to quickly and easily determine power and sample size for simple and complex mediation models.
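
For concreteness, a bare-bones Monte Carlo power analysis for a simple X → M → Y mediation model with a percentile-bootstrap test of the indirect effect; the path coefficients, sample size, and simulation counts are illustrative, and the b-path regression omits X as a covariate for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n = 0.3, 0.3, 100                       # assumed path coefficients and sample size
n_sims, n_boot = 200, 500                     # kept small here; real analyses use more replications

hits = 0
for _ in range(n_sims):
    x = rng.normal(size=n)
    m = a * x + rng.normal(size=n)            # mediator
    y = b * m + rng.normal(size=n)            # outcome
    ab = np.empty(n_boot)
    for i in range(n_boot):                   # percentile bootstrap of the indirect effect
        idx = rng.integers(0, n, n)
        a_hat = np.polyfit(x[idx], m[idx], 1)[0]
        b_hat = np.polyfit(m[idx], y[idx], 1)[0]
        ab[i] = a_hat * b_hat
    lo, hi = np.percentile(ab, [2.5, 97.5])
    hits += (lo > 0) or (hi < 0)              # CI excludes zero -> indirect effect detected
print("estimated power:", hits / n_sims)
```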


Biometrika
2020
Vol 107 (4)
pp. 1005-1012
Author(s):
Deborshee Sen
Matthias Sachs
Jianfeng Lu
David B Dunson

Summary Classification with high-dimensional data is of widespread interest and often involves dealing with imbalanced data. Bayesian classification approaches are hampered by the fact that current Markov chain Monte Carlo algorithms for posterior computation become inefficient as the number $p$ of predictors or the number $n$ of subjects to classify gets large, because of the increasing computational time per step and worsening mixing rates. One strategy is to employ a gradient-based sampler to improve mixing while using data subsamples to reduce the per-step computational complexity. However, the usual subsampling breaks down when applied to imbalanced data. Instead, we generalize piecewise-deterministic Markov chain Monte Carlo algorithms to include importance-weighted and mini-batch subsampling. These maintain the correct stationary distribution with arbitrarily small subsamples and substantially outperform current competitors. We provide theoretical support for the proposed approach and demonstrate its performance gains in simulated data examples and an application to cancer data.
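
The subsampling ingredient can be illustrated in isolation: an importance-weighted mini-batch estimate of a full-data log-likelihood gradient that stays unbiased however small the batch. The logistic model and the norm-based weights below are assumptions for demonstration, not the piecewise-deterministic samplers themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100_000, 20
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)                          # labels; imbalance is what breaks uniform subsampling
theta = 0.1 * rng.normal(size=p)

def full_grad(theta):
    z = 1.0 / (1.0 + np.exp(-X @ theta))
    return X.T @ (y - z)                                # full-data log-likelihood gradient

def subsampled_grad(theta, weights, batch=256):
    idx = rng.choice(n, size=batch, p=weights)          # draw observations with importance weights
    z = 1.0 / (1.0 + np.exp(-X[idx] @ theta))
    per_obs = X[idx] * (y[idx] - z)[:, None]            # per-observation gradient terms
    return (per_obs / weights[idx][:, None]).mean(axis=0)   # unbiased for the full sum

w = np.linalg.norm(X, axis=1)                           # cheap per-observation bound used as weights
w /= w.sum()
print(np.linalg.norm(full_grad(theta) - subsampled_grad(theta, w)))
```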


2019
Vol 89 (6)
pp. 903-909
Author(s):
Ji-Hoon Park
Hye-Won Hwang
Jun-Ho Moon
Youngsung Yu
Hansuk Kim
...

ABSTRACT Objective: To compare the accuracy and computational efficiency of two of the latest deep-learning algorithms for automatic identification of cephalometric landmarks. Materials and Methods: A total of 1028 cephalometric radiographic images were selected as learning data to train the You-Only-Look-Once version 3 (YOLOv3) and Single Shot Multibox Detector (SSD) methods. The number of target labels was 80 landmarks. After the deep-learning process, the algorithms were tested using a new test data set composed of 283 images. Accuracy was determined by measuring the point-to-point error and success detection rate and was visualized by drawing scattergrams. The computational time of both algorithms was also recorded. Results: The YOLOv3 algorithm outperformed SSD in accuracy for 38 of 80 landmarks. The other 42 of 80 landmarks did not show a statistically significant difference between YOLOv3 and SSD. Error plots of YOLOv3 showed not only a smaller error range but also a more isotropic tendency. The mean computational time spent per image was 0.05 seconds and 2.89 seconds for YOLOv3 and SSD, respectively. YOLOv3 showed approximately 5% higher accuracy compared with the top benchmarks in the literature. Conclusions: Between the two latest deep-learning methods applied, YOLOv3 seemed more promising as a fully automated cephalometric landmark identification system for use in clinical practice.
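
A short sketch of the two accuracy measures mentioned above, point-to-point error and success detection rate; the pixel-to-millimetre scale and the 2 mm radius are assumed values for illustration, not taken from the study:

```python
import numpy as np

def point_to_point_error(pred, truth, mm_per_pixel=0.1):
    """pred, truth: (n_images, n_landmarks, 2) pixel coordinates -> errors in mm."""
    return np.linalg.norm(pred - truth, axis=-1) * mm_per_pixel

def success_detection_rate(errors_mm, threshold_mm=2.0):
    """Fraction of landmark predictions falling within the threshold radius."""
    return float((errors_mm <= threshold_mm).mean())

rng = np.random.default_rng(0)
truth = rng.uniform(0, 2000, size=(283, 80, 2))          # synthetic ground truth: 283 images, 80 landmarks
pred = truth + rng.normal(scale=15.0, size=truth.shape)  # synthetic detections
err = point_to_point_error(pred, truth)
print(f"mean error: {err.mean():.2f} mm, SDR@2mm: {success_detection_rate(err):.2%}")
```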


Author(s):
Nicolas Curin
Michael Kettler
Xi Kleisinger-Yu
Vlatka Komaric
Thomas Krabichler
...

Abstract To the best of our knowledge, the application of deep learning in the field of quantitative risk management is still a relatively recent phenomenon. In this article, we utilize techniques inspired by reinforcement learning in order to optimize the operation plans of underground natural gas storage facilities. We provide a theoretical framework and assess the performance of the proposed method numerically in comparison to a state-of-the-art least-squares Monte-Carlo approach. Due to the inherent intricacy originating from the high-dimensional forward market as well as the numerous constraints and frictions, the optimization exercise can hardly be tackled by means of traditional techniques.
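
A reinforcement-learning-flavoured sketch of the idea: a small neural policy maps (price, inventory) to an injection/withdrawal decision and is trained by differentiating through simulated cash flows. The toy mean-reverting price process, capacity limits, network size, and pathwise objective are assumptions, not the framework developed in the article:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
n_paths, n_steps, cap = 512, 30, 1.0

for epoch in range(50):
    price = torch.ones(n_paths, 1)
    inv = torch.zeros(n_paths, 1)
    cash = torch.zeros(n_paths, 1)
    for _ in range(n_steps):
        price = price + 0.2 * (1.0 - price) + 0.1 * torch.randn(n_paths, 1)  # toy mean-reverting spot price
        action = 0.2 * policy(torch.cat([price, inv], dim=1))                # bounded inject (+) / withdraw (-)
        action = torch.maximum(torch.minimum(action, cap - inv), -inv)       # respect storage capacity
        inv = inv + action
        cash = cash - action * price                  # pay when injecting, earn when withdrawing
    loss = -(cash + inv * price).mean()               # maximize terminal cash plus value of leftover gas
    opt.zero_grad(); loss.backward(); opt.step()
```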


2015
Vol 26 (01)
pp. 1550010
Author(s):
G. De Concini
D. De Martino

The uniform sampling of convex regions in high dimension is an important computational issue, from both a theoretical and an applied point of view. Hit-and-run Monte Carlo algorithms are the most efficient methods known to perform it, and one of their bottlenecks lies in the difficulty of escaping from tight corners in high dimension. Inspired by optimized Monte Carlo methods used in statistical mechanics, we define a new algorithm by over-relaxing the hit-and-run dynamics. We performed numerical simulations on high-dimensional simplices and hypercubes to test its performance, pointing out its improved ability to escape from corners, and finally applied it to an inference problem in the steady-state dynamics of metabolic networks.
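
A sketch of the baseline hit-and-run move on the unit hypercube; the over-relaxed variant of the paper changes how the new point is chosen along the chord (drawn uniformly here), and the dimension and chain length below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_steps = 50, 10_000
x = np.full(d, 0.5)                          # start at the centre of [0, 1]^d
samples = np.empty((n_steps, d))
for t in range(n_steps):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                   # uniform random direction
    # step-size interval [t_minus, t_plus] keeping x + t*u inside the hypercube
    bounds = np.concatenate([(0.0 - x) / u, (1.0 - x) / u])
    t_plus = bounds[bounds > 0].min()
    t_minus = bounds[bounds < 0].max()
    x = x + rng.uniform(t_minus, t_plus) * u # plain hit-and-run: uniform point on the chord
    samples[t] = x
```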

