Bayesian Neural Network
Recently Published Documents

TOTAL DOCUMENTS: 244 (FIVE YEARS: 115)
H-INDEX: 25 (FIVE YEARS: 5)

2022 · Vol 105 (1)
Author(s): Xiao-Xu Dong, Rong An, Jun-Xu Lu, Li-Sheng Geng

2022 · pp. 225-236
Author(s): Aatif Jamshed, Asmita Dixit

Bitcoin has recently attracted considerable attention because it inherently combines cryptographic technology with monetary units, with implications for banking, cybersecurity, and software engineering. This chapter investigates the effectiveness of Bayesian neural networks (BNNs) in modeling the Bitcoin price time series. The authors select the most relevant features from blockchain records that are closely tied to Bitcoin's market activity and use them to train models that improve the predictive performance on the Bitcoin pricing process. They conduct an empirical study that compares the Bayesian neural network against linear and non-linear benchmark models for modeling and predicting the Bitcoin process. The experiments show that the BNN performs well in predicting the Bitcoin price time series and in explaining the high volatility of the recent Bitcoin price.
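
To make the modeling approach concrete, here is a minimal, hypothetical sketch of a BNN regressor whose uncertainty comes from Monte Carlo dropout; the synthetic features stand in for blockchain-derived inputs, and the architecture and feature names are illustrative assumptions, not the chapter's actual pipeline.

```python
# Hypothetical sketch: BNN price regression via Monte Carlo dropout.
# Features and targets below are synthetic placeholders.
import torch
import torch.nn as nn

class MCDropoutBNN(nn.Module):
    def __init__(self, n_features, n_hidden=64, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

    @torch.no_grad()
    def predict(self, x, n_samples=100):
        # Keep dropout active at test time to draw samples from the
        # approximate posterior predictive distribution.
        self.train()
        preds = torch.stack([self(x) for _ in range(n_samples)])
        return preds.mean(0), preds.std(0)  # predictive mean and spread

# Synthetic stand-ins for blockchain-derived features (e.g. hash rate,
# transaction volume) and a next-day price target.
X = torch.randn(500, 8)
y = X @ torch.randn(8, 1) + 0.1 * torch.randn(500, 1)

model = MCDropoutBNN(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    nn.functional.mse_loss(model(X), y).backward()
    opt.step()

mean, std = model.predict(X[:5])  # price estimates with uncertainty
```

Keeping dropout active at prediction time yields a spread of outputs whose standard deviation serves as a simple uncertainty estimate, which is what makes the network "Bayesian" in this approximation.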


2021 · Vol 15 (1) · pp. 1-20
Author(s): Ki Uhn Ahn, Cheol Soo Park, Kyung-Jae Kim, Deuk-Woo Kim, Chang-U Chae

2021
Author(s): Matt Amos, Ushnish Sengupta, Paul Young, J. Hosking

Continuous historical datasets of vertically resolved stratospheric ozone support the case for ozone recovery, are necessary for running offline models, and increase understanding of the impacts of ozone on the wider atmospheric system. Vertically resolved ozone datasets are typically constructed from multiple satellite, sonde, and ground-based measurements that do not provide continuous coverage. As a result, several methods have been used to infill these gaps, most commonly relying on regression against observed time series. However, these existing methods either provide low-accuracy infilling (especially over polar regions), extrapolate unphysically, or estimate uncertainty incompletely. To address these methodological shortcomings, we used and further developed an infilling framework that fuses observations with output from an ensemble of chemistry-climate models within a Bayesian neural network. We used this deep learning framework to produce a continuous record of vertically resolved ozone with uncertainty estimates. Under rigorous testing, the infilling framework extrapolated and interpolated skillfully and maintained realistic interannual variability, owing to the inclusion of physically and chemically realistic models. This framework, and the ozone dataset it produced, enable a more thorough investigation of vertically resolved trends throughout the atmosphere.
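
As an illustration of the fusion idea (not the authors' code), the sketch below trains a small heteroscedastic network that maps chemistry-climate model ensemble output to observed ozone where measurements exist, then infills the gaps with a learned per-point uncertainty; all names, shapes, and the synthetic data are assumptions.

```python
# Illustrative sketch: fuse an ensemble of model predictions with sparse
# observations, learning both a mean and a per-point variance.
import torch
import torch.nn as nn

class FusionBNN(nn.Module):
    """Maps ensemble-member outputs at a grid cell -> (mean, log-variance)."""
    def __init__(self, n_models, n_hidden=32, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_models, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(n_hidden, 1)
        self.logvar_head = nn.Linear(n_hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # Heteroscedastic negative log-likelihood: the network learns to widen
    # its error bars where the ensemble disagrees with observations.
    return (0.5 * (logvar + (target - mean) ** 2 / logvar.exp())).mean()

# Synthetic ensemble (10 models) and sparse observations with gaps.
ensemble = torch.randn(2000, 10)
truth = ensemble.mean(dim=1, keepdim=True) + 0.2 * torch.randn(2000, 1)
observed = torch.rand(2000) > 0.5  # mask: where satellites/sondes measured

model = FusionBNN(n_models=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    mean, logvar = model(ensemble[observed])
    gaussian_nll(mean, logvar, truth[observed]).backward()
    opt.step()

# Infill the unobserved cells, with uncertainty from the learned variance.
with torch.no_grad():
    mean_gap, logvar_gap = model(ensemble[~observed])
```

Training only on observed cells while predicting everywhere is what lets the physically realistic ensemble carry the interpolation into data gaps.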


2021
Author(s): Mashall Aryan

The solution to many science and engineering problems includes identifying the minimum or maximum of an unknown continuous function whose evaluation inflicts non-negligible costs in terms of resources such as money, time, human attention, or computational processing. In such cases, the choice of new points to evaluate is critical. A successful approach has been to choose these points by considering a distribution over plausible surfaces, conditioned on all previous points and their evaluations. In this sequential two-step strategy, known as Bayesian Optimization, a prior is first defined over possible functions and updated to a posterior in light of the available observations. Using this posterior, namely the surrogate model, an infill criterion is then formed and used to find the next location to sample. By far the most common prior distribution and infill criterion are the Gaussian Process and Expected Improvement, respectively.

The popularity of Gaussian Processes in Bayesian Optimization is partially due to their ability to represent the posterior in closed form. Nevertheless, the Gaussian Process suffers from several shortcomings that directly affect its performance: inference scales poorly with the amount of data, numerical stability degrades with the number of data points, and strong assumptions about the observation model are required, which might not be consistent with reality. These drawbacks motivate the search for better alternatives. This thesis studies the application of neural networks to enhance Bayesian Optimization, proposing several Bayesian Optimization methods that use neural networks either as surrogates or in the infill criterion.

The thesis first introduces a novel Bayesian Optimization method in which Bayesian Neural Networks (BNNs) are used as the surrogate, reducing the computational complexity of surrogate inference from cubic in the number of observations (as with GPs) to linear. Different variations of BNNs are put into practice and inferred using Monte Carlo sampling. The results show that the Monte Carlo BNN surrogate performs better than, or at least comparably to, Gaussian Process-based Bayesian Optimization methods on a set of benchmark problems.

The work then develops a fast Bayesian Optimization method with an efficient surrogate-building process. This algorithm uses Bayesian Random-Vector Functional Link Networks as surrogates; in this family of models, inference is performed on only a small subset of the model parameters, while the rest are drawn randomly from a prior. Tested on benchmark continuous functions and hyperparameter optimization problems, the proposed methods are competitive with state-of-the-art Bayesian Optimization methods.

The study also proposes a novel neural-network-based infill criterion, in which the locations to sample are found by minimizing the joint conditional likelihood of the new point and the parameters of a neural network. The results show that, in Bayesian Optimization methods with BNN surrogates, this new infill criterion outperforms Expected Improvement.

Finally, the thesis presents order-preserving generative models and uses them in a variational Bayesian context to infer Implicit Variational Bayesian Neural Network (IVBNN) surrogates for a new Bayesian Optimization method. This inference mechanism is more efficient and scalable than Monte Carlo sampling. The results show that IVBNN can outperform the Monte Carlo BNN in Bayesian optimization of the hyperparameters of machine learning models.
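
The thesis's central loop can be illustrated with a short, assumption-laden sketch: Bayesian Optimization with a BNN surrogate (approximated here by Monte Carlo dropout, one of several possible BNN inference schemes) and the Expected Improvement infill criterion, run on a toy one-dimensional objective. Everything below, including the objective and hyperparameters, is illustrative rather than the thesis's actual setup.

```python
# Sketch: Bayesian Optimization with an MC-dropout BNN surrogate and
# Expected Improvement (minimization form) on a toy objective.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import norm

def objective(x):                      # expensive black box (toy stand-in)
    return np.sin(3 * x) + 0.1 * x ** 2

def fit_surrogate(X, y, epochs=500, p_drop=0.1):
    model = nn.Sequential(
        nn.Linear(1, 50), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(50, 50), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(50, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    X_t = torch.tensor(X, dtype=torch.float32)
    y_t = torch.tensor(y, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(X_t), y_t).backward()
        opt.step()
    return model

def posterior(model, X, n_samples=100):
    model.train()                      # dropout stays on: MC posterior samples
    X_t = torch.tensor(X, dtype=torch.float32)
    with torch.no_grad():
        s = torch.stack([model(X_t) for _ in range(n_samples)])
    return s.mean(0).numpy().ravel(), s.std(0).numpy().ravel() + 1e-9

def expected_improvement(mu, sigma, best):
    z = (best - mu) / sigma            # improvement over the incumbent
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

X = np.random.uniform(-2, 2, (5, 1)); y = objective(X)
grid = np.linspace(-2, 2, 400).reshape(-1, 1)
for _ in range(15):                    # BO loop: fit, score EI, evaluate
    model = fit_surrogate(X, y)
    mu, sigma = posterior(model, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))].reshape(1, 1)
    X = np.vstack([X, x_next]); y = np.vstack([y, objective(x_next)])
print("best found:", X[y.argmin()], y.min())
```

Refitting the surrogate costs time linear in the number of observations, which is the scaling advantage over a GP surrogate that the abstract highlights.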


2021 · Vol 16 (23) · pp. 216-232
Author(s): Khaoula Mrhar, Lamia Benhiba, Samir Bourekkache, Mounia Abik

Massive Open Online Courses (MOOCs) are increasingly used by learners to acquire knowledge and develop new skills. MOOCs provide a trove of data that can be leveraged to better assist learners, including behavioral data from built-in collaborative tools such as discussion boards and course wikis. Data tracing social interactions among learners are especially interesting, as their analysis helps improve MOOCs' effectiveness. In particular, we perform sentiment analysis on such data to predict learners at risk of dropping out, measure the success of the MOOC, and personalize the MOOC according to a learner's behavior and detected emotions. In this paper, we propose a novel approach to sentiment analysis that combines the advantages of the deep learning architectures CNN and LSTM. To avoid highly uncertain predictions, we utilize a Bayesian neural network (BNN) model to quantify uncertainty within the sentiment analysis task. Our empirical results indicate that: 1) the Bayesian CNN-LSTM model performs favorably compared to other models (CNN-LSTM, CNN, LSTM) in terms of accuracy, precision, recall, and F1-score; and 2) there is a high correlation between the sentiment in forum posts and the dropout rate in MOOCs.
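
A hypothetical sketch of the kind of model described follows: a CNN-LSTM sentiment classifier made Bayesian via Monte Carlo dropout, so each forum post receives class probabilities plus an uncertainty estimate. The vocabulary size, dimensions, and three-class sentiment scheme are illustrative assumptions, not the paper's implementation.

```python
# Sketch: Bayesian CNN-LSTM for forum-post sentiment with MC-dropout
# uncertainty. All dimensions and the class set are assumptions.
import torch
import torch.nn as nn

class BayesianCNNLSTM(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=100, n_filters=64,
                 hidden=64, n_classes=3, p_drop=0.3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.drop = nn.Dropout(p_drop)
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, tokens):                         # tokens: (batch, seq)
        x = self.emb(tokens).transpose(1, 2)           # (batch, emb, seq)
        x = torch.relu(self.conv(x)).transpose(1, 2)   # CNN: local n-gram features
        x = self.drop(x)
        _, (h, _) = self.lstm(x)                       # LSTM: long-range order
        return self.fc(self.drop(h[-1]))               # logits: {neg, neutral, pos}

@torch.no_grad()
def predict_with_uncertainty(model, tokens, n_samples=50):
    model.train()                                      # keep dropout active
    probs = torch.stack([torch.softmax(model(tokens), dim=-1)
                         for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)                 # mean probs + spread

model = BayesianCNNLSTM()
posts = torch.randint(0, 20000, (4, 120))              # 4 tokenized forum posts
mean_p, spread = predict_with_uncertainty(model, posts)
# A high spread flags posts whose predicted sentiment should not be
# trusted, e.g. before feeding it into an at-risk-learner predictor.
```

Filtering or down-weighting high-spread predictions is one simple way to act on the uncertainty the abstract says the BNN provides.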

