A new method for detecting P300 signals by using deep learning: Hyperparameter tuning in high-dimensional space by minimizing nonconvex error function

2018 ◽  
Vol 8 (4) ◽  
pp. 205 ◽  
Author(s):  
SeyedVahab Shojaedini ◽  
Sajedeh Morabbi ◽  
MohammadReza Keyvanpour
2021 ◽  
Author(s):  
Thamer Alsulaimani ◽  
Mary Wheeler

Abstract Reservoir simulation is the most widely used tool for oil and gas production forecasting and reservoir management. Solving a large-scale system of nonlinear differential equations at every timestep can be computationally expensive. In this work, we present a two-phase physics-constrained deep-learning reduced-order model as a surrogate for subsurface flow production forecasting. The implemented deep learning model is a physics-guided encoder-decoder built on the Embed-to-Control (E2C) framework. In our implementation, E2C works in a way analogous to Proper Orthogonal Decomposition combined with the Discrete Empirical Interpolation Method (POD-DEIM) or the Trajectory Piecewise Linearization approach (POD-TPWL). The E2C reduced-order model (ROM) involves projecting the system from a high-dimensional space into a low-dimensional subspace using the encoder, approximating the progression of the system from one timestep to the next using a linear transition model, and finally projecting the system back to the high-dimensional space using the decoder. To guarantee mass conservation, we adopt the mixed finite element formulation in the neural network's loss function, combined with the original data-based loss function. Training simulations are generated using a full-physics reservoir simulator (IPARS). High-fidelity pressure, velocity, and saturation solution snapshots at constant time intervals are taken as training input to the neural network. After training, the model is tested over large variations of well control settings. Accurate pressure and saturation solutions, along with the injection and production well quantities, are predicted using the proposed approach. Errors in the predicted quantities of interest decrease as the number of training simulations increases.
Although it requires a large number of training simulations for the offline (training) step, the model achieves a significant speedup in the online stage compared to the full-physics model. Considering the overall computational cost, this ROM is well suited to cases where many simulations are required, such as production optimization and uncertainty assessment. The proposed approach demonstrates the capability of the deep-learning reduced-order model to accurately predict multiphase flow behavior, including well quantities and global pressure and saturation fields. The model honors mass conservation and the underlying physical laws, which many existing approaches do not take into direct consideration.
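The encode / linear-transition / decode loop the abstract describes can be sketched in a few lines. This is a minimal illustration only: the matrices `E`, `D`, `A`, and `B` below are random stand-ins for the trained E2C encoder, decoder, and latent transition operators, which in the paper are learned from IPARS snapshots; the dimensions are likewise made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: high-dimensional state (gridded pressure/
# saturation), low-dimensional latent state, and well-control input.
n_full, n_latent, n_ctrl = 200, 5, 2

# Random stand-ins for the trained encoder/decoder and the learned
# linear transition operators; in the E2C ROM these come from training.
E = rng.standard_normal((n_latent, n_full)) / np.sqrt(n_full)  # encoder
D = np.linalg.pinv(E)                                          # decoder
A = np.eye(n_latent) * 0.95                                    # latent dynamics
B = rng.standard_normal((n_latent, n_ctrl)) * 0.1              # control input

def rom_step(x_full, u):
    """One ROM timestep: encode -> linear latent update -> decode."""
    z = E @ x_full            # project into the low-dimensional subspace
    z_next = A @ z + B @ u    # linear transition model
    return D @ z_next         # lift back to the high-dimensional space

x = rng.standard_normal(n_full)   # initial full-order state snapshot
u = np.array([1.0, -0.5])         # well-control setting for this step
x_next = rom_step(x, u)
print(x_next.shape)  # (200,)
```

The online cost of each step is a few small matrix-vector products in the latent space, which is the source of the speedup over the full-physics solve.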


2021 ◽  
pp. 1-12
Author(s):  
Jian Zheng ◽  
Jianfeng Wang ◽  
Yanping Chen ◽  
Shuping Chen ◽  
Jingjin Chen ◽  
...  

Neural networks can approximate data because they contain many compact non-linear layers. In high-dimensional space, the curse of dimensionality makes the data distribution sparse, so the data cannot provide sufficient information; approximating data in high-dimensional space with neural networks therefore becomes even harder. To address this issue, two deviations are derived from the Lipschitz condition: the deviation of neural networks trained using high-dimensional functions, and the deviation of high-dimensional functions approximating data. The purpose is to improve the ability of neural networks to approximate data in high-dimensional space. Experimental results show that neural networks trained using high-dimensional functions outperform those trained directly on data at approximating data in high-dimensional space. We find that neural networks trained using high-dimensional functions are better suited to high-dimensional space than those trained on data, so there is no need to retain large amounts of data for neural network training. Our findings also suggest that in high-dimensional space, tuning the hidden layers of a neural network has little positive effect on the precision of data approximation.
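The sparsity the abstract attributes to the curse of dimensionality is easy to demonstrate numerically: for a fixed sample size, the average distance to the nearest neighbor grows rapidly with dimension, so each point carries less local information. The sketch below (toy uniform data, not the paper's experiments) illustrates this.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_nn_distance(dim, n_points=300):
    """Mean nearest-neighbor distance of uniform samples in [0,1]^dim."""
    pts = rng.random((n_points, dim))
    # Pairwise squared distances via the expansion |a-b|^2 = |a|^2 + |b|^2 - 2ab
    sq = (pts ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * pts @ pts.T, 0.0)
    np.fill_diagonal(d2, np.inf)  # exclude each point from its own neighbors
    return np.sqrt(d2.min(axis=1)).mean()

for dim in (2, 10, 50):
    print(dim, round(mean_nn_distance(dim), 3))
```

With the same 300 points, the mean nearest-neighbor distance increases monotonically from dimension 2 to 50, which is the sparsity that makes direct data-driven approximation hard.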


2001 ◽  
Vol 24 (3) ◽  
pp. 305-320 ◽  
Author(s):  
Benoit Lemaire ◽  
Philippe Dessus

This paper presents Apex, a system that can automatically assess a student essay based on its content. It relies on Latent Semantic Analysis, a tool which is used to represent the meaning of words as vectors in a high-dimensional space. By comparing an essay and the text of a given course on a semantic basis, our system can measure how well the essay matches the text. Various assessments are presented to the student regarding the topic, the outline and the coherence of the essay. Our experiments yield promising results.
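The LSA comparison Apex performs can be sketched with a truncated SVD of a term-document matrix followed by a cosine similarity in the latent space. The matrix, terms, and rank below are toy assumptions for illustration, not Apex's actual corpus or implementation.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents
# (here column 1 plays the course text and column 2 the student essay).
terms = ["reservoir", "pressure", "essay", "semantic", "vector"]
docs = np.array([
    [2, 0, 1],
    [1, 0, 1],
    [0, 3, 0],
    [0, 2, 1],
    [1, 1, 2],
], dtype=float)

# Truncated SVD yields the low-rank "latent semantic" space.
U, s, Vt = np.linalg.svd(docs, full_matrices=False)
k = 2                                    # number of latent dimensions kept
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents as k-dim vectors

def cosine(a, b):
    """Cosine similarity: the semantic match score between two documents."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

course, essay = doc_vecs[1], doc_vecs[2]
print(round(cosine(essay, course), 3))
```

A score near 1 indicates the essay covers much the same latent content as the course text; Apex derives its topic, outline, and coherence feedback from comparisons of this kind.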



2018 ◽  
Vol 24 (4) ◽  
pp. 225-247 ◽  
Author(s):  
Xavier Warin

Abstract A new method based on nesting Monte Carlo is developed to solve high-dimensional semi-linear PDEs. Depending on the type of non-linearity, different schemes are proposed and studied theoretically: variance errors are given, and it is shown that the bias of the schemes can be controlled. The limitation of the method is that the maturity or the Lipschitz constant of the non-linearity must not be too large, in order to avoid an explosion of the computational time. Many numerical results are given in high dimension for cases where analytical solutions are available or where solutions can be computed by deep-learning methods.
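The starting point of such schemes is the Feynman-Kac representation, which the nesting construction extends to semi-linear source terms. The sketch below shows only that linear base case, under assumptions chosen so the answer is known analytically: for u_t + (1/2)Δu = 0 with terminal condition g(x) = |x|², one has u(0, x) = |x|² + d·T.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear heat equation u_t + 0.5 * Laplacian(u) = 0 on [0, T] with terminal
# condition g; Feynman-Kac gives u(0, x) = E[g(x + W_T)]. The nesting Monte
# Carlo scheme builds on this representation to handle semi-linear terms.
d, T, n_samples = 10, 1.0, 200_000
x0 = np.zeros(d)

def g(x):
    return (x ** 2).sum(axis=-1)  # g(x) = |x|^2, so u(0, 0) = d * T exactly

W_T = rng.standard_normal((n_samples, d)) * np.sqrt(T)  # Brownian increments
estimate = g(x0 + W_T).mean()                           # plain Monte Carlo
exact = d * T
print(round(estimate, 2), exact)
```

Note the cost here is linear in the dimension d rather than exponential, which is what makes Monte Carlo representations attractive for high-dimensional PDEs; the nesting in the paper re-samples inside the expectation to resolve the non-linearity, at a cost that grows with the maturity and the Lipschitz constant.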

