Machine learning of higher order programs

Author(s):  
Ganesh Baliga ◽  
John Case ◽  
Sanjay Jain ◽  
Mandayam Suraj

2019 ◽  
Vol 20 (1) ◽  
pp. 221-256 ◽  
Author(s):  
Helen Nissenbaum

According to the theory of contextual integrity (CI), privacy norms prescribe information flows with reference to five parameters — sender, recipient, subject, information type, and transmission principle. Because privacy is grasped contextually (e.g., health, education, civic life, etc.), the values of these parameters range over contextually meaningful ontologies — of information types (or topics) and actors (subjects, senders, and recipients), in contextually defined capacities. As an alternative to predominant approaches to privacy, which were ineffective against novel information practices enabled by IT, CI was able both to pinpoint sources of disruption and provide grounds for either accepting or rejecting them. Mounting challenges from a burgeoning array of networked, sensor-enabled devices (IoT) and data-ravenous machine learning systems, similar in form though magnified in scope, call for renewed attention to theory. This Article introduces the metaphor of a data (food) chain to capture the nature of these challenges. With motion up the chain, where data of higher order is inferred from lower-order data, the crucial question is whether privacy norms governing lower-order data are sufficient for the inferred higher-order data. While CI has a response to this question, a greater challenge comes from data primitives, such as digital impulses of mouse clicks, motion detectors, and bare GPS coordinates, because they appear to have no meaning. Absent a semantics, they escape CI’s privacy norms entirely.
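The five-parameter structure of a CI norm lends itself to a direct encoding. The sketch below is illustrative only, not from the Article: it models an information flow as a five-field record and a contextual norm as a predicate over flows, so that a disrupted flow (e.g., a new recipient or transmission principle) is flagged as a norm violation. All names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """An information flow described by CI's five parameters."""
    sender: str
    recipient: str
    subject: str
    info_type: str
    transmission_principle: str

# A contextual norm is a predicate over flows. Here, a toy health-context
# norm permitting physician-to-physician flows of medical information
# under a confidentiality principle.
def health_norm(f: Flow) -> bool:
    return (f.info_type == "medical"
            and f.sender == "physician"
            and f.recipient == "physician"
            and f.transmission_principle == "confidentiality")

# A novel practice (physician sends medical data to an insurer under
# consent) changes two parameters at once and violates the sketched norm.
flow = Flow("physician", "insurer", "patient", "medical", "consent")
print(health_norm(flow))
```

The point of the sketch is CI's diagnostic move: a disruption is located by comparing parameter values against the entrenched norm, rather than by asking whether data is "public" or "private" in the abstract.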



2020 ◽  
Vol 25 (3) ◽  
pp. 58
Author(s):  
Minh Nguyen ◽  
Mehmet Aktas ◽  
Esra Akbas

The growth of social media in recent years has contributed to an ever-increasing network of user data in every aspect of life. This volume of generated data is becoming a vital asset for companies and organizations, as a powerful tool for gaining insights and making crucial decisions. However, the data are not always reliable, since they can be manipulated and disseminated from unreliable sources. In social network analysis, this problem can be tackled with machine learning models that learn to distinguish humans from bots, the mostly harmful computer programs exploited to shape public opinion and circulate false information on social media. In this paper, we propose a novel topological feature extraction method for bot detection on social networks. We first create a weighted ego network for each user. We then encode the higher-order topological features of these ego networks using persistent homology. Finally, we use the extracted features to train a machine learning model that classifies users as bots or humans. Our experimental results suggest that higher-order topological features derived from persistent homology are promising for bot detection and more effective than classical graph-theoretic structural features.
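The abstract does not specify the persistence computation, so the following is only a minimal sketch of the simplest ingredient: degree-0 persistent homology of a weighted graph under the edge-weight filtration, computed with union-find. Each (birth, death) pair records when a connected component of the ego network appears and merges; the lifetimes can serve as features for a downstream classifier. All names and the toy network are hypothetical.

```python
def persistence0(nodes, weighted_edges):
    """0-dimensional persistence of a graph under the edge-weight
    filtration: every node is born at 0; a component dies at weight w
    when the edge of weight w merges it into another component."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    diagram = []
    for u, v, w in sorted(weighted_edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv            # merge: one component dies at w
            diagram.append((0.0, w))   # (birth, death) pair
    # components that never merge persist to infinity
    roots = {find(v) for v in nodes}
    diagram += [(0.0, float("inf"))] * len(roots)
    return diagram

# toy ego network: ego "e" with three alters, weights = interaction strength
edges = [("e", "a", 1.0), ("e", "b", 2.0), ("a", "b", 0.5), ("e", "c", 3.0)]
print(persistence0(["e", "a", "b", "c"], edges))
```

In practice one would also compute degree-1 persistence (cycles) with a library such as GUDHI or Ripser, and vectorize the diagrams (e.g., via persistence images or landscapes) before training the classifier.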



Cortex ◽  
2019 ◽  
Vol 121 ◽  
pp. 308-321 ◽  
Author(s):  
Christoph Sperber ◽  
Daniel Wiesen ◽  
Georg Goldenberg ◽  
Hans-Otto Karnath


2020 ◽  
Vol 34 (04) ◽  
pp. 4527-4534
Author(s):  
Sören Laue ◽  
Matthias Mitterreiter ◽  
Joachim Giesen

Computing derivatives of tensor expressions, also known as tensor calculus, is a fundamental task in machine learning. A key concern is the efficiency of evaluating the expressions and their derivatives, which hinges on how these expressions are represented. Recently, an algorithm for computing higher-order derivatives of tensor expressions, such as Jacobians or Hessians, was introduced that is a few orders of magnitude faster than previous state-of-the-art approaches. Unfortunately, that approach is based on Ricci notation and hence cannot be incorporated into automatic differentiation frameworks like TensorFlow, PyTorch, autograd, or JAX, which use the simpler Einstein notation. This leaves two options: either change the underlying tensor representation in these frameworks, or develop a new, provably correct algorithm based on Einstein notation. Since the first option is impractical, we pursue the second. Here, we show that Ricci notation is not necessary for an efficient tensor calculus and develop an equally efficient method for the simpler Einstein notation. It turns out that turning to Einstein notation enables further improvements that lead to even better efficiency.
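To make the notational point concrete, here is a small example (not the authors' algorithm) of a tensor expression and its derivative both written in Einstein notation via `numpy.einsum`, with a finite-difference check. The gradient formula is the standard one for this quadratic form; numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# f(x) = x_i A_ij x_j, written in Einstein notation
def f(x):
    return np.einsum("i,ij,j->", x, A, x)

# Its gradient is itself an Einstein-notation expression:
#   g_k = A_kj x_j + A_ik x_i
grad = np.einsum("kj,j->k", A, x) + np.einsum("ik,i->k", A, x)

# central finite-difference check of the analytic gradient
eps = 1e-6
fd = np.array([(f(x + eps * np.eye(4)[k]) - f(x - eps * np.eye(4)[k])) / (2 * eps)
               for k in range(4)])
print(np.max(np.abs(grad - fd)))
```

For this f, the second derivative is also expressible without leaving Einstein notation: the Hessian is H_kl = A_kl + A_lk, i.e., `A + A.T`.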



Entropy ◽  
2018 ◽  
Vol 20 (11) ◽  
pp. 840 ◽  
Author(s):  
Frédéric Barbaresco

We introduce a poly-symplectic extension of Souriau's Lie groups thermodynamics, based on the higher-order model of statistical physics introduced by Ingarden. This extended model could be used for small-data analytics and machine learning on Lie groups. Souriau's geometric theory of heat is well adapted to describing probability densities (maximum-entropy Gibbs densities) of data living on groups or on homogeneous manifolds. For small-data analytics (rarefied gases, sparse statistical surveys, etc.), the maximum-entropy density should incorporate higher-order moment constraints (the Gibbs density is not defined by the first moment alone; fluctuations require second- and higher-order moments), as introduced by Ingarden. We use a poly-symplectic model introduced by Christian Günther, replacing the symplectic form by a vector-valued form. The poly-symplectic approach generalizes the Noether theorem, the existence of moment mappings, the Lie algebra structure of the space of currents, (non-)equivariant cohomology, and the classification of G-homogeneous systems. The formalism is covariant, i.e., no special coordinates or coordinate systems on the parameter space are used to construct the Hamiltonian equations. We underline the contextures of these models and the process of building these generic structures. We also introduce a more synthetic Koszul definition of the Fisher metric, based on the Souriau model, which we name the Souriau-Fisher metric. This Lie groups thermodynamics is the bedrock for Lie group machine learning, providing a fully covariant maximum-entropy Gibbs density based on representation theory (the symplectic structure of coadjoint orbits for the Souriau non-equivariant model associated with a class of cohomology).
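The maximum-entropy construction referenced here can be written schematically. The notation below is ours (a standard exponential-family form on a measure space), not taken from the paper: U is the (co)moment map pairing with the temperature parameter beta, and the Ingarden-style extension adds higher-order moment constraints U_k.

```latex
% Souriau-type Gibbs density: maximum entropy under a single moment
% constraint U, with log-partition (Massieu potential) \Phi:
p_\beta(\xi) = \exp\!\bigl(-\langle \beta,\, U(\xi)\rangle - \Phi(\beta)\bigr),
\qquad
\Phi(\beta) = \log \int_M \exp\!\bigl(-\langle \beta,\, U(\xi)\rangle\bigr)\, d\lambda(\xi).

% Ingarden-style higher-order extension: the single pairing is replaced
% by a sum of pairings with moment constraints of increasing order:
p_{\beta_1,\dots,\beta_n}(\xi) =
\exp\!\Bigl(-\textstyle\sum_{k=1}^{n} \langle \beta_k,\, U_k(\xi)\rangle
 - \Phi(\beta_1,\dots,\beta_n)\Bigr).
```

With only the first two moments constrained on Euclidean space this recovers the Gaussian; the paper's contribution is the poly-symplectic geometry that makes the construction covariant on Lie groups.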



2020 ◽  
Vol 23 (3) ◽  
pp. S25-S58 ◽  
Author(s):  
Jeppe Druedahl ◽  
Anders Munk-Nielsen

We propose a novel method for modelling income processes using machine learning. Our method links age-specific regression trees and returns a discrete state process, which can easily be included in consumption-saving models without further discretization. A central advantage of our approach is that it does not rely on any parametric assumptions, and because we build on existing machine learning tools, it is easy to apply in practice. Using a 30-year panel of Danish males, we document rich higher-order income dynamics, including substantial skewness and high kurtosis of income levels and growth rates. We also find important changes in income risk over the life cycle and across the income distribution. Our estimated process matches these dynamics closely. In a consumption-saving model, the implied welfare cost of income risk is more than 10% of income.
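The abstract's output object, a discrete state process over ages, can be illustrated without the authors' tree-based estimator. The sketch below (toy data, hypothetical names, not the paper's method) discretizes a simulated two-age income panel into quantile states per age and estimates the state-to-state transition matrix, which is the kind of object a consumption-saving model can consume directly.

```python
import random

random.seed(1)

# toy panel: incomes for N individuals at two ages, with persistence
N = 1000
y0 = [random.lognormvariate(0.0, 0.5) for _ in range(N)]
y1 = [y * random.lognormvariate(0.02, 0.3) for y in y0]

def quantile_states(y, n_states=3):
    """Assign each observation to an age-specific quantile bin."""
    cuts = sorted(y)
    edges = [cuts[int(len(y) * k / n_states)] for k in range(1, n_states)]
    return [sum(v >= e for e in edges) for v in y]

s0, s1 = quantile_states(y0), quantile_states(y1)

# estimated transition matrix between age-0 and age-1 states
K = 3
T = [[0.0] * K for _ in range(K)]
for a, b in zip(s0, s1):
    T[a][b] += 1
for row in T:
    tot = sum(row)
    for j in range(K):
        row[j] /= tot
print([[round(p, 2) for p in row] for row in T])
```

In the paper's setup, the discretization at each age comes from a fitted regression tree rather than fixed quantile cuts, which is what lets the state space adapt to skewness and kurtosis in the data.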



Author(s):  
C. Parroni ◽  
E. Tollet ◽  
V. F. Cardone ◽  
R. Maoli ◽  
R. Scaramella

