Semantic Information G Theory and Logical Bayesian Inference for Machine Learning

Information ◽  
2019 ◽  
Vol 10 (8) ◽  
pp. 261 ◽  
Author(s):  
Chenguang Lu

An important problem in machine learning is that, when using more than two labels, it is very difficult to construct and optimize a group of learning functions that remain useful when the prior distribution of instances is changed. To resolve this problem, semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms are combined to form a systematic solution. A semantic channel in G theory consists of a group of truth functions or membership functions. In comparison with the likelihood functions, Bayesian posteriors, and logistic functions typically used in popular methods, membership functions are more convenient to use, providing learning functions that do not suffer from the above problem. In LBI, every label is learned independently. For multilabel learning, we can directly obtain a group of optimized membership functions from a large enough sample with labels, without preparing different samples for different labels. Furthermore, a group of CM algorithms are developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions in a two-dimensional feature space, only 2–3 iterations are required for the mutual information between three classes and three labels to surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved to form the CM-EM algorithm, which can outperform the EM algorithm when the mixture ratios are imbalanced or when local convergence occurs. The CM iteration algorithm needs to be combined with neural networks for MMI classification in high-dimensional feature spaces. LBI needs further investigation for the unification of statistics and logic.

Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 1050 ◽  
Author(s):  
Chenguang Lu

In the rate-distortion function and the Maximum Entropy (ME) method, Minimum Mutual Information (MMI) distributions and ME distributions are expressed by Bayes-like formulas that include Negative Exponential Functions (NEFs) and partition functions. Why do these non-probability functions appear in Bayes-like formulas? On the other hand, the rate-distortion function has three disadvantages: (1) the distortion function is subjectively defined; (2) the distortion function between instances and labels is often difficult to define; (3) it cannot be used for data compression according to the labels’ semantic meanings. The author previously proposed the semantic information G measure, which uses both statistical probability and logical probability. We can now explain NEFs as truth functions, partition functions as logical probabilities, Bayes-like formulas as semantic Bayes’ formulas, MMI as Semantic Mutual Information (SMI), and ME as extreme ME minus SMI. To overcome the above disadvantages, this paper sets up the relationship between truth functions and distortion functions, obtains truth functions from samples by machine learning, and constructs constraint conditions with truth functions to extend rate-distortion functions. Two examples are used to help readers understand the MMI iteration and to support the theoretical results. Using truth functions and the semantic information G measure, we can combine machine learning and data compression, including semantic compression. Further studies are needed to explore general data compression and recovery according to semantic meaning.
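One concrete reading of the "semantic Bayes' formula" named above: a truth function T(θ|x) and a prior P(x) yield a posterior P(x|θ) = P(x)T(θ|x)/T(θ), with the logical probability T(θ) = Σ_x P(x)T(θ|x) playing the partition-function role. The numbers below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative prior P(x) over five instances
p_x = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
# A NEF-like truth (membership) function T(theta|x), peaking at x = 3
t = np.exp(-0.5 * (np.arange(5) - 3.0) ** 2)

# Logical probability T(theta) = sum_x P(x) T(theta|x): the partition-function role
t_logical = (p_x * t).sum()
# Semantic Bayes' formula: P(x|theta) = P(x) T(theta|x) / T(theta)
p_post = p_x * t / t_logical
```

Note that the posterior can peak at x = 2 rather than at the truth function's peak x = 3, because the prior reweights the truth function; this is the sense in which the formula mixes statistical and logical probability.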


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jin-Woong Lee ◽  
Chaewon Park ◽  
Byung Do Lee ◽  
Joonseo Park ◽  
Nam Hoon Goo ◽  
...  

Abstract
Predicting mechanical properties such as yield strength (YS) and ultimate tensile strength (UTS) is an intricate undertaking in practice, notwithstanding a plethora of well-established theoretical and empirical models. A data-driven approach should be a fundamental exercise when making YS/UTS predictions. For this study, we collected 16 descriptors (attributes) that implicate the compositional and processing information and the corresponding YS/UTS values for 5473 thermo-mechanically controlled processed (TMCP) steel alloys. We set up an integrated machine-learning (ML) platform consisting of 16 ML algorithms to predict the YS/UTS based on the descriptors. The integrated ML platform involved regularization-based linear regression algorithms, ensemble ML algorithms, and some non-linear ML algorithms. Despite the dirty nature of most real-world industry data, we obtained acceptable holdout dataset test results such as R2 > 0.6 and MSE < 0.01 for seven non-linear ML algorithms. The seven fully trained non-linear ML models were used for the ensuing ‘inverse design (prediction)’ based on an elitist-reinforced, non-dominated sorting genetic algorithm (NSGA-II). The NSGA-II enabled us to predict solutions that exhibit desirable YS/UTS values for each ML algorithm. In addition, the NSGA-II-driven solutions in the 16-dimensional input feature space were visualized using holographic research strategy (HRS) in order to systematically compare and analyze the inverse-predicted solutions for each ML algorithm.
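The holdout criteria quoted above (R2 > 0.6, MSE < 0.01) are easy to reproduce in miniature. The sketch below uses closed-form ridge regression, one of the regularization-based linear baselines the platform includes; the 16-descriptor shape mirrors the paper, but the data are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in: 16 descriptors -> one scaled target (e.g. normalised YS)
X = rng.normal(size=(500, 16))
y = X @ rng.normal(size=16) * 0.1 + rng.normal(scale=0.05, size=500)

# Holdout split: 400 training rows, 100 test rows
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

# Ridge regression in closed form: w = (X'X + lam*I)^(-1) X'y
lam = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(16), X_tr.T @ y_tr)
pred = X_te @ w

mse = np.mean((y_te - pred) ** 2)
r2 = 1.0 - mse / np.var(y_te)
```

The same holdout R2/MSE yardsticks apply unchanged to the ensemble and non-linear models the platform swaps in for this ridge baseline.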


2019 ◽  
Vol 29 (07) ◽  
pp. 1850058 ◽  
Author(s):  
Juan M. Górriz ◽  
Javier Ramírez ◽  
F. Segovia ◽  
Francisco J. Martínez ◽  
Meng-Chuan Lai ◽  
...  

Although much research has been undertaken, the spatial patterns, developmental course, and sexual dimorphism of brain structure associated with autism remain enigmatic. One of the difficulties in investigating differences between the sexes in autism is the small sample sizes of available imaging datasets with mixed sex. Thus, the majority of the investigations have involved male samples, with females somewhat overlooked. This paper deploys machine learning on partial least squares feature extraction to reveal differences in regional brain structure between individuals with autism and typically developing participants. A four-class classification problem (sex and condition) is specified, with theoretical restrictions based on the evaluation of a novel upper bound in the resubstitution estimate. These conditions were imposed on the classifier complexity and feature space dimension to assure generalizable results from the training set to test samples. Accuracies above [Formula: see text] on gray and white matter tissues estimated from voxel-based morphometry (VBM) features are obtained in a sample of equal-sized high-functioning male and female adults with and without autism ([Formula: see text], [Formula: see text]/group). The proposed learning machine revealed how autism is modulated by biological sex using a low-dimensional feature space extracted from VBM. In addition, a spatial overlap analysis on reference maps partially corroborated predictions of the “extreme male brain” theory of autism, in sexual dimorphic areas.
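The pipeline described here, machine learning on top of partial least squares (PLS) feature extraction, can be sketched with the first PLS direction and a nearest-class-mean rule. The group sizes, feature count, and effect size below are invented stand-ins for the VBM features, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented stand-in for voxel-based morphometry features, two balanced groups
n, p = 120, 200
y = np.repeat([0.0, 1.0], n // 2)
X = rng.normal(size=(n, p))
X[y == 1, :10] += 0.8              # group difference confined to a few features

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# First PLS direction: the feature-label covariance, normalised
w = Xc.T @ yc
w /= np.linalg.norm(w)
score = Xc @ w                     # one-dimensional extracted feature

# Nearest-class-mean classifier in the extracted feature (resubstitution estimate)
m0, m1 = score[y == 0].mean(), score[y == 1].mean()
pred = (np.abs(score - m1) < np.abs(score - m0)).astype(float)
acc = (pred == y).mean()
```

Because this accuracy is a resubstitution estimate it is optimistic; that is exactly why the paper bounds classifier complexity and feature dimension before trusting such numbers.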


2021 ◽  
Author(s):  
Rogini Runghen ◽  
Daniel B Stouffer ◽  
Giulio Valentino Dalla Riva

Collecting network interaction data is difficult. Non-exhaustive sampling and complex hidden processes often result in an incomplete data set. Thus, identifying potentially present but unobserved interactions is crucial both for understanding the structure of large-scale data and for predicting how previously unseen elements will interact. Recent studies in network analysis have shown that accounting for metadata (such as node attributes) can improve both our understanding of how nodes interact with one another and the accuracy of link prediction. However, the dimension of the object we need to learn to predict interactions in a network grows quickly with the number of nodes, so the problem becomes computationally and conceptually challenging for large networks. Here, we present a new predictive procedure combining a graph embedding method with machine learning techniques to predict interactions on the basis of nodes' metadata. Graph embedding methods project the nodes of a network onto a low-dimensional latent feature space. The positions of the nodes in the latent feature space can then be used to predict interactions between nodes. Learning a mapping from the nodes' metadata to their positions in the latent feature space corresponds to a classic, low-dimensional machine learning problem. In the current study we used the Random Dot Product Graph (RDPG) model to estimate the embedding of an observed network, and we tested different neural network architectures for predicting the positions of nodes in the latent feature space. Flexible machine learning techniques for mapping the nodes onto their latent positions allow us to account for multivariate and possibly complex node metadata. To illustrate the utility of the proposed procedure, we apply it to a large dataset of tourist visits to destinations across New Zealand.
We found that our procedure accurately predicts interactions for both existing nodes and nodes newly added to the network, while being computationally feasible even for very large networks. Overall, our study highlights that by exploiting the properties of a well understood statistical model for complex networks and combining it with standard machine learning techniques, we can simplify the link prediction problem when incorporating multivariate node metadata. Our procedure can be immediately applied to different types of networks, and to a wide variety of data from different systems. As such, both from a network science and data science perspective, our work offers a flexible and generalisable procedure for link prediction.
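A minimal sketch of the embedding step, assuming the adjacency spectral embedding commonly used to fit Random Dot Product Graphs. The two-group network below is invented; the real study then maps node metadata to these latent positions with neural networks:

```python
import numpy as np

rng = np.random.default_rng(3)
# Invented two-group network as a stand-in for an observed interaction network
n = 200
z = rng.integers(0, 2, n)                          # hidden group per node
P = np.where(z[:, None] == z[None, :], 0.3, 0.05)  # within- vs between-group rates
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                        # undirected, no self-loops

# Adjacency spectral embedding: top-d eigenpairs of A give latent positions
d = 2
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:d]
Xhat = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

# Under the RDPG, the probability of an edge between i and j is the dot
# product of their latent positions
p_hat = Xhat @ Xhat.T
```

Predicting the rows of `Xhat` from node metadata is then the "classic, low-dimensional" regression problem the abstract describes.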


2021 ◽  
Author(s):  
Rammohan Shukla ◽  
Nicholas D Henkel ◽  
Marissa A Smail ◽  
Xiaojun Wu ◽  
Heather A Enright ◽  
...  

We probed a transcriptomic dataset of pilocarpine-induced temporal lobe epilepsy (TLE) using various ontological, machine-learning, and systems-biology approaches. We showed that, underneath the complex and penetrant changes, moderate-to-subtle upregulated homeostatic and downregulated synaptic changes associated with the dentate gyrus and hippocampal subfields could predict not only TLE but also various other forms of epilepsy. At the cellular level, pyramidal neurons and interneurons showed disparate changes, whereas the proportion of non-neuronal cells increased steadily. A probabilistic Bayesian network demonstrated an aberrant and oscillating physiological interaction between oligodendrocytes and interneurons in driving seizures. Validating the Bayesian inference, we showed that the cell types driving the seizures were associated with known antiepileptic and epileptic drugs. These findings provide predictive biomarkers of epilepsy, insights into the cellular connections and causal changes associated with TLE, and a drug discovery method focusing on these events.


2020 ◽  
Author(s):  
Trang T. Le ◽  
Jason H. Moore

Abstract
Summary: treeheatr is an R package for creating interpretable decision tree visualizations with the data represented as a heatmap at the tree’s leaf nodes. The integrated presentation of the tree structure along with an overview of the data efficiently illustrates how the tree nodes split up the feature space and how well the tree model performs. This visualization can also be examined in depth to uncover the correlation structure in the data and the importance of each feature in predicting the outcome. Implemented in an easily installed package with a detailed vignette, treeheatr can be a useful teaching tool to enhance students’ understanding of a simple decision tree model before diving into more complex tree-based machine learning methods.
Availability: The treeheatr package is freely available under the permissive MIT license at https://trang1618.github.io/treeheatr and https://cran.r-project.org/package=treeheatr. It comes with a detailed vignette that is automatically built with GitHub Actions continuous integration.
Contact: [email protected]


2021 ◽  
Author(s):  
David Dempsey ◽  
Shane Cronin ◽  
Andreas Kempa-Liehr ◽  
Martin Letourneur

Sudden steam-driven eruptions at tourist volcanoes were the cause of 63 deaths at Mt Ontake (Japan) in 2014, and 22 deaths at Whakaari (New Zealand) in 2019. Warning systems that can anticipate these eruptions could provide crucial hours for evacuation or sheltering, but these require reliable forecasting. Recently, machine learning has been used to extract eruption precursors from observational data and train forecasting models. However, a weakness of this data-driven approach is its reliance on long observational records that span multiple eruptions. As many volcano datasets may record only one eruption or none, there is a need to extend these techniques to data-poor locales.

Transfer machine learning is one approach for generalising lessons learned at data-rich volcanoes and applying them to data-poor ones. Here, we tackle two problems: (1) generalising time series features between seismic stations at Whakaari to address recording gaps, and (2) training a forecasting model for Mt Ruapehu augmented using data from Whakaari. This required that we standardise data records at different stations for direct comparison, devise an interpolation scheme to fill in missing eruption data, and combine volcano-specific feature matrices prior to model training.

We trained a forecast model for Whakaari using tremor data from three eruptions recorded at one seismic station (WSRZ) and augmented by data from two other eruptions recorded at a second station (WIZ). First, the training data from both stations were standardised to a unit normal distribution in log space. Then, linear interpolation in feature space was used to infer missing eruption features at WSRZ. Under pseudo-prospective testing, the augmented model had similar forecasting skill to one trained using all five eruptions recorded at a single station (WIZ). However, extending this approach to Ruapehu, we saw reduced performance, indicating that more work is needed in standardisation and feature selection.
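The standardisation step described above, mapping each station's tremor features to a unit normal distribution in log space before stacking, can be sketched as follows. The feature matrices are invented stand-ins for the WIZ and WSRZ records:

```python
import numpy as np

rng = np.random.default_rng(4)
# Invented stand-ins for tremor-feature matrices from two seismic stations
feat_wiz = rng.lognormal(mean=2.0, sigma=0.8, size=(300, 5))
feat_wsrz = rng.lognormal(mean=0.5, sigma=0.5, size=(300, 5))

def standardise(f):
    """Map each feature column to zero mean and unit variance in log space."""
    g = np.log(f)
    return (g - g.mean(axis=0)) / g.std(axis=0)

z_wiz, z_wsrz = standardise(feat_wiz), standardise(feat_wsrz)

# Once both stations live on the same scale, their feature matrices can be
# stacked into a single training set for the augmented forecast model
combined = np.vstack([z_wiz, z_wsrz])
```

This matches the first two moments rather than the full distribution; a quantile-based transform is another way to realise the same "unit normal in log space" description.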


2021 ◽  
Author(s):  
Mikhail Kanevski

Nowadays a wide range of methods and tools to study and forecast time series is available. An important problem in forecasting concerns the embedding of time series, i.e. the construction of a high-dimensional space in which the forecasting problem is considered as a regression task. There are several basic linear and nonlinear approaches to constructing such a space by defining an optimal delay vector using different theoretical concepts. Another way is to consider this space as an input feature space (IFS) and to apply machine learning feature selection (FS) algorithms to optimize the IFS according to the problem under study (analysis, modelling or forecasting). Such an approach is an empirical one: it is based on data and depends on the FS algorithms applied. In machine learning, features are generally classified as relevant, redundant and irrelevant. This gives a rich possibility to perform advanced multivariate time series exploration and to develop interpretable predictive models.

Therefore, in the present research different FS algorithms are used to analyze fundamental properties of time series from an empirical point of view. Linear and nonlinear simulated time series are studied in detail to understand the advantages and drawbacks of the proposed approach. Real data case studies deal with air pollution and wind speed time series. Preliminary results are quite promising and more research is in progress.
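The embedding-as-IFS idea can be made concrete with a time-delay embedding followed by ordinary least squares; the sinusoidal series and the (dim, tau) choice below are illustrative assumptions:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Rows are delay vectors [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])

x = np.sin(0.1 * np.arange(500))      # illustrative, perfectly regular series
dim, tau = 3, 5
E = delay_embed(x, dim, tau)          # the input feature space (IFS)
y = x[(dim - 1) * tau + 1 :]          # one-step-ahead forecasting targets
E = E[: len(y)]                       # align inputs with targets

# Forecasting as regression on the IFS; for this series least squares suffices
w, *_ = np.linalg.lstsq(E, y, rcond=None)
mse = np.mean((E @ w - y) ** 2)
```

On real air pollution or wind speed data, the columns of E (and further derived features) would then be screened by FS algorithms into relevant, redundant and irrelevant groups.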


2021 ◽  
Vol 163 (A3) ◽  
Author(s):  
B Shabani ◽  
J Ali-Lavroff ◽  
D S Holloway ◽  
S Penev ◽  
D Dessi ◽  
...  

An onboard monitoring system can measure features such as stress cycle counts and provide warnings due to slamming. Considering current technology trends, there is an opportunity to incorporate machine learning methods into monitoring systems. A hull monitoring system has been developed and installed on a 111 m wave piercing catamaran (Hull 091) to remotely monitor the ship kinematics and hull structural responses. In parallel, an existing dataset from a similar vessel (Hull 061) was analysed using unsupervised and supervised learning models; these were found to be beneficial for the classification of bow entry events according to key kinematic parameters. A comparison of different algorithms, including linear support vector machines, naïve Bayes and decision trees, was conducted for the bow entry classification. In addition, using empirical probability distributions, the likelihood of wet-deck slamming was estimated given a vertical bow acceleration threshold of 1 in head seas, clustering the feature space with approximate probabilities of 0.001, 0.030 and 0.25.
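The exceedance-probability estimate described in the last sentence reduces to counting events above the threshold in the empirical distribution; the acceleration sample below is invented, not Hull 061 data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Invented stand-in for measured vertical bow accelerations over many events
accel = 0.4 * rng.weibull(2.0, size=5000)

threshold = 1.0
# Empirical likelihood that an event exceeds the slamming threshold
p_slam = (accel > threshold).mean()
```

Clustering the feature space first, as the paper does, yields one such estimate per cluster rather than a single pooled value.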

