A Bayesian neural network predicts the dissolution of compact planetary systems

2021 · Vol 118 (40) · pp. e2026053118
Author(s): Miles Cranmer, Daniel Tamayo, Hanno Rein, Peter Battaglia, Samuel Hadden, ...

We introduce a Bayesian neural network model that can accurately predict not only if, but also when a compact planetary system with three or more planets will go unstable. Our model, trained directly from short N-body time series of raw orbital elements, is more than two orders of magnitude more accurate at predicting instability times than analytical estimators, while also reducing the bias of existing machine learning algorithms by nearly a factor of three. Despite being trained on compact resonant and near-resonant three-planet configurations, the model demonstrates robust generalization to both nonresonant and higher multiplicity configurations, in the latter case outperforming models fit to that specific set of integrations. The model computes instability estimates up to 10^5 times faster than a numerical integrator, and unlike previous efforts provides confidence intervals on its predictions. Our inference model is publicly available in the SPOCK (https://github.com/dtamayo/spock) package, with training code open sourced (https://github.com/MilesCranmer/bnn_chaos_model).
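A minimal sketch of how the published SPOCK package might be queried for an instability-time estimate. The rebound setup is standard; the DeepRegressor class and its predict_instability_time call are assumptions based on the package's public repository, and the planet parameters are illustrative only.

```python
# Hedged sketch: querying SPOCK's Bayesian instability-time estimator.
# The DeepRegressor API shown here is an assumption based on the public repo.
import rebound
from spock import DeepRegressor

# Set up a compact three-planet system (star mass = 1, planet masses in solar units)
sim = rebound.Simulation()
sim.add(m=1.0)                           # central star
sim.add(m=3e-5, P=1.00, e=0.02)          # inner planet
sim.add(m=3e-5, P=1.32, e=0.02)          # middle planet
sim.add(m=3e-5, P=1.75, e=0.02)          # outer planet
sim.move_to_com()

model = DeepRegressor()
# Assumed to return a median instability time plus a confidence interval
median, lower, upper = model.predict_instability_time(sim, samples=1000)
print(f"Instability time ~ {median:.3g} orbits (CI: {lower:.3g} - {upper:.3g})")
```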

2021
Author(s): Mostafa Kiani Shahvandi, Benedikt Soja

Graph neural networks are a newly established category of machine learning algorithms for relational data. They can be used for the analysis of spatial and/or temporal data, and can model how the time series of nodes located at different spatial positions evolve through the exchange of information between nodes and their neighbors. As a result, time series can be predicted at future epochs.

GNSS networks consist of stations at different locations, each producing time series of geodetic parameters such as changes in position. To successfully apply graph neural networks to the prediction of time series from GNSS networks, the physical properties of GNSS time series should be taken into account. We therefore suggest a new graph neural network algorithm that has both a physical and a mathematical basis. The physical part is based on the fundamental concept of information exchange between nodes and their neighbors: the temporal correlation between the changes of the time series of a node and those of its neighbors is considered, computed from geophysical loading and/or climatic data. The mathematical part comes from the time series prediction by mathematical models, after removal of trends and periodic effects using the singular spectrum analysis algorithm. It also enters the computation of the impact of neighboring nodes, based on the spatial correlation computed from the pair-wise node-neighbor distance. The final prediction is a simple weighted summation of the predicted values of the time series of the node and those of its neighbors, in which the weights are the product of the spatial and temporal correlations.

To show the efficiency of the proposed algorithm, we considered a global network of more than 18,000 GNSS stations and defined the neighbors of each node as the stations located within a range of 10 km. We performed several different analyses, including a comparison between different machine learning algorithms and statistical methods for the time series prediction part, the impact of the type of data used to compute the temporal correlation (climatic and/or geophysical loading data), and a comparison with other state-of-the-art graph neural network algorithms. We demonstrate the superiority of our method over current graph neural network algorithms when applied to time series of geodetic networks. In addition, we show that the best machine learning algorithm to use within our graph neural network architecture is the multilayer perceptron, which achieves an average prediction accuracy of 0.34 mm. Furthermore, we find that statistical methods are less accurate than machine learning ones, by as much as 44 percent.
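A hedged sketch of the node-level fusion step described above: each station's forecast is a weighted sum of its own prediction and its neighbors' predictions, with weights given by the product of spatial and temporal correlations. The function name, the distance-decay form of the spatial correlation, and the example values are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the weighted-sum prediction across a GNSS node and its neighbors.
import numpy as np

def fuse_prediction(own_pred, neighbor_preds, temporal_corr, distances, scale_km=10.0):
    """own_pred: scalar forecast for the node (e.g. from an MLP after SSA detrending).
    neighbor_preds: forecasts for neighboring stations (within ~10 km).
    temporal_corr: node-neighbor correlations (e.g. from loading/climatic data).
    distances: node-neighbor distances in km."""
    spatial_corr = np.exp(-np.asarray(distances) / scale_km)   # assumed distance-decay weighting
    weights = spatial_corr * np.asarray(temporal_corr)         # product of the two correlations
    all_preds = np.concatenate(([own_pred], neighbor_preds))
    all_weights = np.concatenate(([1.0], weights))             # node's own prediction gets unit weight
    return np.sum(all_weights * all_preds) / np.sum(all_weights)

# Example: one station with two neighbors
print(fuse_prediction(1.2, [1.0, 1.5], temporal_corr=[0.8, 0.4], distances=[3.0, 8.0]))
```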


mSphere · 2019 · Vol 4 (3)
Author(s): Artur Yakimovich

ABSTRACT Artur Yakimovich works in the field of computational virology and applies machine learning algorithms to study host-pathogen interactions. In this mSphere of Influence article, he reflects on two papers “Holographic Deep Learning for Rapid Optical Screening of Anthrax Spores” by Jo et al. (Y. Jo, S. Park, J. Jung, J. Yoon, et al., Sci Adv 3:e1700606, 2017, https://doi.org/10.1126/sciadv.1700606) and “Bacterial Colony Counting with Convolutional Neural Networks in Digital Microbiology Imaging” by Ferrari and colleagues (A. Ferrari, S. Lombardi, and A. Signoroni, Pattern Recognition 61:629–640, 2017, https://doi.org/10.1016/j.patcog.2016.07.016). Here he discusses how these papers made an impact on him by showcasing that artificial intelligence algorithms can be equally applicable to both classical infection biology techniques and cutting-edge label-free imaging of pathogens.


2020 · pp. 1-12
Author(s): Cao Yanli

Research on the risk pricing of Internet finance online loans not only enriches the theory and methods of online loan pricing, but also helps to raise the level of online loan risk pricing. To improve the efficiency of Internet financial supervision, this article builds an Internet financial supervision system based on machine learning algorithms and an improved neural network algorithm. On the basis of factor analysis and discretization of loan data, the paper selects the relatively mature logistic regression model to evaluate the credit risk of the borrower, considering the comprehensive management of credit risk and its matching with income. In addition, following the provisions of the New Basel Accord on expected losses and economic capital, the article combines the credit risk assessment results with relevant factors obtained through regional research and conducts an empirical analysis. The results show that the model constructed in this paper has a certain reliability.
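A minimal illustration of the credit-scoring step the abstract describes: discretized loan features fed to a logistic regression classifier. The feature set and data are hypothetical placeholders, not the paper's dataset.

```python
# Hedged sketch: logistic regression on discretized loan features for borrower credit risk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # placeholder features, e.g. income, loan amount, term, history score
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # 1 = default (synthetic)

# Discretize continuous loan features, as the abstract describes
X_binned = KBinsDiscretizer(n_bins=5, encode="onehot-dense", strategy="quantile").fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_binned, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("default probability for first test borrower:", clf.predict_proba(X_te[:1])[0, 1])
print("held-out accuracy:", clf.score(X_te, y_te))
```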


2021 · Vol 13 (3) · pp. 67
Author(s): Eric Hitimana, Gaurav Bajpai, Richard Musabe, Louis Sibomana, Jayavel Kayalvizhi

Many countries worldwide face challenges in implementing fire-prevention measures for buildings. The most critical issues are the localization, identification, and detection of room occupants. The Internet of Things (IoT), combined with machine learning, has been shown to increase the smartness of buildings by providing real-time data acquisition from sensors and actuators for prediction mechanisms. This paper proposes the implementation of an IoT framework to capture indoor environmental parameters as multivariate occupancy time-series data. The Long Short-Term Memory (LSTM) deep learning algorithm is applied to infer the presence of human beings. An experiment is conducted in an office room using the multivariate time series as predictors in a regression forecasting problem. The results demonstrate that the developed system can acquire, process, and store environmental information. The collected information was applied to the LSTM algorithm and compared with other machine learning algorithms: Support Vector Machine, Naïve Bayes Network, and Multilayer Perceptron Feed-Forward Network. The outcomes, based on parametric calibrations, demonstrate that LSTM performs better in the context of the proposed application.
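A hedged sketch of an LSTM occupancy-forecasting model in the spirit of the paper: a window of environmental sensor readings is regressed onto an occupancy value. The sensor set, window length, and layer sizes are illustrative assumptions.

```python
# Hedged sketch: LSTM regression on multivariate indoor-environment time series.
import numpy as np
import tensorflow as tf

n_features = 4        # assumed sensor set, e.g. temperature, humidity, CO2, light
window = 30           # 30 past time steps per sample

# Synthetic placeholder data shaped (samples, time steps, features)
X = np.random.rand(200, window, n_features).astype("float32")
y = np.random.rand(200, 1).astype("float32")   # occupancy level to forecast

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(64),     # recurrent layer summarizing the window
    tf.keras.layers.Dense(1),     # regression output (occupancy)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1]))
```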


2021 · pp. 190-200
Author(s): Lesia Mochurad, Yaroslav Hladun

The paper considers a method for analyzing the psychophysical state of a person from psychomotor indicators, namely the finger tapping test. A mobile phone app that generalizes the classic tapping test was developed for the experiments. The developed tool allows collecting samples and analyzing them both as individual experiments and as a dataset as a whole. The data are investigated for anomalies using statistical methods and hyperparameter optimization, and an algorithm for reducing the number of anomalies is developed. A machine learning model is used to predict different features of the dataset. These experiments reveal the structure of the data obtained with the finger tapping test; as a result, we gained knowledge of how to conduct experiments for better generalization of the model in the future. A method for removing anomalies is developed and can be used in further research to increase the accuracy of the model. The developed model is a multilayer recurrent neural network that works well for the classification of time series. The model's learning error is 1.5% on a synthetic dataset and 5% on real data from a similar distribution.
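A hedged sketch of a multilayer recurrent classifier for tapping-test time series. The input channels, sequence length, number of classes, and architecture are illustrative assumptions, not the authors' exact model.

```python
# Hedged sketch: stacked recurrent network for classifying tapping-test sequences.
import numpy as np
import tensorflow as tf

seq_len, n_channels, n_classes = 100, 2, 3   # e.g. inter-tap interval + touch duration; 3 state classes
X = np.random.rand(300, seq_len, n_channels).astype("float32")
y = np.random.randint(0, n_classes, size=300)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, n_channels)),
    tf.keras.layers.GRU(32, return_sequences=True),   # first recurrent layer keeps the sequence
    tf.keras.layers.GRU(32),                          # second recurrent layer summarizes it
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```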


Author(s): E. Yu. Shchetinin

The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and the recognition of emotions in speech (RER) is its most in-demand part. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computer experiments are carried out on the RAVDESS database of emotional human speech. RAVDESS is a dataset containing 7356 files; the entries cover the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions divided into male and female) for a total of 1440 speech-only samples. To train machine learning algorithms and deep neural networks to recognize emotions, the audio recordings must be pre-processed so as to extract the main characteristic features of the emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the recordings. Various neural network models for emotion recognition are studied on these data, and machine learning algorithms are used for comparative analysis. The following models were trained in the experiments: logistic regression (LR), a classifier based on the support vector machine (SVM), decision tree (DT), random forest (RF), gradient boosting over trees (XGBoost), a convolutional neural network CNN, a recurrent neural network RNN (ResNet18), and an ensemble of convolutional and recurrent networks, Stacked CNN-RNN. The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
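A hedged sketch of the feature-extraction step described above, using librosa to compute MFCC, chroma, and spectral features from an audio file. The file path and the mean-over-time aggregation are illustrative assumptions.

```python
# Hedged sketch: extracting MFCC, chroma, and spectral features from a speech recording.
import numpy as np
import librosa

def extract_features(path, sr=22050, n_mfcc=40):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)     # Mel-frequency cepstral coefficients
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)           # chroma coefficients
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)   # frequency-spectrum characteristics
    # Aggregate each feature over time into a fixed-length vector
    return np.concatenate([m.mean(axis=1) for m in (mfcc, chroma, contrast)])

# Example (hypothetical RAVDESS path):
# features = extract_features("RAVDESS/Actor_01/03-01-05-01-01-01-01.wav")
```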


Author(s): Gudipally Chandrashakar

In this article, we use historical time series data up to the current-day gold price. In this study of gold price prediction, we consider a few correlated factors: the silver price, the copper price, the Standard and Poor's 500 value, the dollar-rupee exchange rate, and the Dow Jones Industrial Average. The prices of every correlated factor and the gold price data cover dates ranging from January 2008 to February 2021. Several machine learning algorithms are used to analyze the time-series data: Random Forest Regression, Support Vector Regression, Linear Regression, Extra Trees Regression, and Gradient Boosting Regression. The results show that the Extra Trees Regressor algorithm predicts gold prices the most accurately.
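A minimal sketch of the best-performing model the abstract reports, an Extra Trees regressor over the correlated factors. The CSV file and column names are hypothetical placeholders for the authors' dataset.

```python
# Hedged sketch: Extra Trees regression of the gold price on correlated market factors.
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("gold_factors.csv")   # assumed file: one row per trading day, 2008-01 to 2021-02
features = ["silver", "copper", "sp500", "usd_inr", "djia"]   # hypothetical column names
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["gold"], test_size=0.2, shuffle=False)

model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("MAE on held-out period:", mean_absolute_error(y_te, model.predict(X_te)))
```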


2017 · Vol 71 (1) · pp. 169-188
Author(s): E. Shafiee, M. R. Mosavi, M. Moazedi

The importance of the Global Positioning System (GPS) and related electronic systems continues to increase in a range of environmental, engineering and navigation applications. However, civilian GPS signals are vulnerable to Radio Frequency (RF) interference. Spoofing is an intentional intervention that aims to force a GPS receiver to acquire and track invalid navigation data. Analysis of spoofed and authentic signal patterns reveals differences in the phase, energy and imaginary components of the signal. In this paper, the early-late phase, delta, and signal level are extracted as the three main features from the correlation output of the tracking loop. Using these features, spoofing detection can be performed with conventional machine learning algorithms such as K-Nearest Neighbours (KNN) and the naive Bayes classifier. A Neural Network (NN), as a learning machine, is a modern computational method for capturing the required knowledge and predicting output values in complicated systems. This paper presents a new approach for GPS spoofing detection based on a multi-layer NN whose inputs are the feature indices. Simulation results on a software GPS receiver showed that adequate detection accuracy was obtained from the NN with a short detection time.
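A hedged sketch of a multi-layer NN spoofing detector over the three features the paper names (early-late phase, delta, signal level). The data, labels, and layer sizes are illustrative placeholders.

```python
# Hedged sketch: multi-layer perceptron classifier for GPS spoofing detection.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))        # columns: early-late phase, delta, signal level (synthetic)
y = rng.integers(0, 2, size=1000)     # 0 = authentic, 1 = spoofed (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("detection accuracy:", clf.score(X_te, y_te))
```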


2020
Author(s): Florian Dupuy, Olivier Mestre, Léo Pfitzner

Cloud cover is crucial information for many applications, such as planning land observation missions from space. However, cloud cover remains a challenging variable to forecast, and Numerical Weather Prediction (NWP) models suffer from significant biases, justifying the use of statistical post-processing techniques. In our application, the ground truth is a gridded cloud cover product derived from satellite observations over Europe, and the predictors are spatial fields of various variables produced by ARPEGE (Météo-France's global NWP model) at the corresponding lead time.

In this study, ARPEGE cloud cover is post-processed using a convolutional neural network (CNN). CNNs are the most popular machine learning tool for images; in our case, they allow the spatial information contained in the NWP outputs to be integrated. We show that a simple U-Net architecture produces significant improvements over Europe. Compared to the raw ARPEGE forecasts, the MAE drops from 25.1% to 17.8% and the RMSE decreases from 37.0% to 31.6%. Considering the specific needs of earth observation, special attention was paid to forecasts of low cloud cover conditions (< 10%). For this nebulosity class, we show that the hit rate jumps from 40.6 to 70.7 (the order of magnitude of what can be achieved using classical machine learning algorithms such as random forests), while the false alarm rate decreases from 38.2 to 29.9. This is an excellent result, since improving hit rates by means of random forests usually also results in a slight increase in false alarms.
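A hedged sketch of a small U-Net for gridded cloud-cover post-processing, in the spirit of the architecture the abstract names. The grid size, number of predictor fields, depth, and loss are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: compact U-Net mapping stacked NWP predictor fields to a cloud-cover grid.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def small_unet(grid=(128, 128), n_predictors=8):
    inputs = layers.Input(shape=(*grid, n_predictors))       # stacked ARPEGE predictor fields
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)                                   # bottleneck
    u2 = layers.UpSampling2D()(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)       # skip connection
    u1 = layers.UpSampling2D()(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)       # skip connection
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)   # cloud cover fraction per grid cell
    return tf.keras.Model(inputs, outputs)

model = small_unet()
model.compile(optimizer="adam", loss="mae")
model.summary()
```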

