Machine learning of higher-order programs

1994 ◽ Vol 59 (2) ◽ pp. 486-500
Author(s): Ganesh Baliga, John Case, Sanjay Jain, Mandayam Suraj

Abstract: A generator program for a computable function, by definition, generates an infinite sequence of programs, all but finitely many of which compute that function. Machine learning of generator programs for computable functions is studied. As partial motivation for these studies, it is shown that, in some cases, interesting global properties of computable functions can be proved from suitable generator programs that cannot be proved from any ordinary programs for them. The power (for variants of various learning criteria from the literature) of learning generator programs is compared with the power of learning ordinary programs. The learning power in these cases is also compared to that of learning limiting programs, i.e., programs allowed finitely many mind changes about their correct outputs.
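
For readers who prefer symbols, here is one standard way to state the definition, writing $\varphi_p$ for the function computed by program $p$ (this notation is assumed, not quoted from the paper): a program $g$ outputting the sequence $g(0), g(1), g(2), \ldots$ is a generator program for a computable $f$ just in case

\[
\varphi_{g(n)} = f \quad \text{for all but finitely many } n \in \mathbb{N}.
\]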

1992 ◽ Vol 03 (01) ◽ pp. 93-115
Author(s): John Case, Sanjay Jain, Arun Sharma

Machine learning of limit programs (i.e., programs allowed finitely many mind changes about their legitimate outputs) for computable functions is studied. Learning of iterated limit programs is also studied. As partial motivation for these studies, it is shown that, in some cases, interesting global properties of computable functions can be proved from suitable (n+1)-iterated limit programs for them that cannot be proved from any n-iterated limit programs for them. It is shown that learning power increases when (n+1)-iterated limit programs, rather than n-iterated limit programs, are to be learned. Many trade-off results are obtained regarding learning power, the number (possibly zero) of limits taken, program size constraints and information, and the number of errors tolerated in the final programs learned.
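
In the usual limiting-computation notation (a sketch under standard conventions; $\varphi_p$ again denotes the function computed by $p$), a limit program $p$ for $f$ satisfies

\[
f(x) \;=\; \lim_{t \to \infty} \varphi_p(x, t) \quad \text{for all } x,
\]

so on each input $x$ the guesses $\varphi_p(x,0), \varphi_p(x,1), \ldots$ change only finitely often. An $n$-iterated limit program replaces the single limit with $n$ nested limits, $f(x) = \lim_{t_1} \cdots \lim_{t_n} \varphi_p(x, t_1, \ldots, t_n)$.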


2019 ◽ Vol 20 (1) ◽ pp. 221-256
Author(s): Helen Nissenbaum

Abstract: According to the theory of contextual integrity (CI), privacy norms prescribe information flows with reference to five parameters: sender, recipient, subject, information type, and transmission principle. Because privacy is grasped contextually (e.g., health, education, civic life, etc.), the values of these parameters range over contextually meaningful ontologies of information types (or topics) and actors (subjects, senders, and recipients) in contextually defined capacities. As an alternative to predominant approaches to privacy, which proved ineffective against novel information practices enabled by IT, CI was able both to pinpoint sources of disruption and to provide grounds for either accepting or rejecting them. Mounting challenges from a burgeoning array of networked, sensor-enabled devices (IoT) and data-ravenous machine learning systems, similar in form though magnified in scope, call for renewed attention to the theory. This Article introduces the metaphor of a data (food) chain to capture the nature of these challenges. With motion up the chain, where higher-order data is inferred from lower-order data, the crucial question is whether the privacy norms governing the lower-order data are sufficient for the inferred higher-order data. While CI has a response to this question, a greater challenge comes from data primitives, such as the digital impulses of mouse clicks, motion detectors, and bare GPS coordinates, because they appear to have no meaning. Absent a semantics, they escape CI’s privacy norms entirely.
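
The five-parameter scheme can be pictured as a simple record type. The following Python sketch is purely illustrative; the field names follow the parameters listed above, while the type name and example values are hypothetical and not taken from the Article:

    # Illustrative sketch only: a CI information-flow norm as a record.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FlowNorm:
        sender: str                  # actor sending the information
        recipient: str               # actor receiving it
        subject: str                 # actor the information is about
        information_type: str        # contextually meaningful topic/type
        transmission_principle: str  # constraint under which the flow occurs

    # A flow conforming to a hypothetical health-context norm:
    norm = FlowNorm(
        sender="physician",
        recipient="specialist",
        subject="patient",
        information_type="medical history",
        transmission_principle="with patient consent",
    )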


2020 ◽ Vol 25 (3) ◽ pp. 58
Author(s): Minh Nguyen, Mehmet Aktas, Esra Akbas

The growth of social media in recent years has produced an ever-increasing volume of user data touching every aspect of life. This data has become a vital asset for companies and organizations, a powerful tool for gaining insights and making crucial decisions. However, the data is not always reliable: it can be manipulated and disseminated from unreliable sources. In social network analysis, this problem can be tackled with machine learning models that learn to distinguish humans from bots, automated accounts that are widely exploited to shape public opinion and circulate false information on social media. In this paper, we propose a novel topological feature extraction method for bot detection on social networks. We first create a weighted ego network for each user. We then encode the higher-order topological features of these ego networks using persistent homology. Finally, we use the extracted features to train a machine learning model that classifies users as bots or humans. Our experimental results suggest that the higher-order topological features derived from persistent homology are promising for bot detection and more effective than classical graph-theoretic structural features.
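
A minimal sketch of the pipeline outlined in the abstract, assuming a weighted NetworkX graph; the ripser and scikit-learn calls, the shortest-path metric, and the crude persistence summaries are assumptions made here for illustration, not necessarily the authors' choices:

    import networkx as nx
    import numpy as np
    from ripser import ripser
    from sklearn.ensemble import RandomForestClassifier

    def ego_persistence_features(G, node, radius=1, maxdim=1):
        """Summarize the persistent homology of a node's weighted ego network."""
        ego = nx.ego_graph(G, node, radius=radius)
        nodes = list(ego.nodes)
        n = len(nodes)
        # Weighted shortest-path distances give a metric on the ego network.
        lengths = dict(nx.all_pairs_dijkstra_path_length(ego, weight="weight"))
        D = np.zeros((n, n))
        for i, u in enumerate(nodes):
            for j, v in enumerate(nodes):
                D[i, j] = lengths[u].get(v, np.inf)
        D[np.isinf(D)] = 2.0 * D[np.isfinite(D)].max()  # cap unreachable pairs
        # Vietoris-Rips persistence diagrams up to homology dimension `maxdim`.
        dgms = ripser(D, distance_matrix=True, maxdim=maxdim)["dgms"]
        # Crude summaries per dimension: total and maximum finite persistence.
        feats = []
        for dgm in dgms:
            pers = dgm[:, 1] - dgm[:, 0]
            pers = pers[np.isfinite(pers)]
            feats += [pers.sum(), pers.max() if pers.size else 0.0]
        return feats

    # Tiny demo: random edge weights on a standard graph, placeholder labels.
    rng = np.random.default_rng(0)
    G = nx.karate_club_graph()
    nx.set_edge_attributes(
        G, {e: float(rng.uniform(0.1, 1.0)) for e in G.edges}, "weight"
    )
    X = np.array([ego_persistence_features(G, u) for u in G.nodes])
    y = rng.integers(0, 2, size=len(X))  # placeholder bot/human labels
    clf = RandomForestClassifier(random_state=0).fit(X, y)

In practice the per-dimension summaries would be replaced by richer vectorizations (e.g., persistence images or landscapes), but the shape of the pipeline (ego network, filtration, diagram, feature vector, classifier) is the same.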


1975 ◽ Vol 56 ◽ pp. 29-44
Author(s): Luis A. Cordero

In this paper we describe a canonical procedure for constructing the extension of a G-foliation on a differentiable manifold X to its tangent bundles of higher order; by applying the Bott–Haefliger construction of characteristic classes of G-foliations ([2], [3]), we obtain an infinite sequence of characteristic classes for those foliations (Theorem 4.8).


Cortex ◽ 2019 ◽ Vol 121 ◽ pp. 308-321
Author(s): Christoph Sperber, Daniel Wiesen, Georg Goldenberg, Hans-Otto Karnath

2020 ◽ Vol 34 (04) ◽ pp. 4527-4534
Author(s): Sören Laue, Matthias Mitterreiter, Joachim Giesen

Computing derivatives of tensor expressions, also known as tensor calculus, is a fundamental task in machine learning. A key concern is the efficiency of evaluating the expressions and their derivatives, which hinges on how these expressions are represented. Recently, an algorithm for computing higher-order derivatives of tensor expressions, such as Jacobians or Hessians, was introduced that is a few orders of magnitude faster than previous state-of-the-art approaches. Unfortunately, that approach is based on Ricci notation and hence cannot be incorporated into automatic differentiation frameworks like TensorFlow, PyTorch, autograd, or JAX, which use the simpler Einstein notation. This leaves two options: either change the underlying tensor representation in these frameworks, or develop a new, provably correct algorithm based on Einstein notation. Since the first option is clearly impractical, we pursue the second. Here, we show that Ricci notation is not necessary for an efficient tensor calculus and develop an equally efficient method for the simpler Einstein notation. It turns out that the switch to Einstein notation enables further improvements that lead to even better efficiency.
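
As a concrete illustration of the setting, here is a small sketch of an Einstein-notation expression and its higher-order derivatives in JAX, one of the frameworks named above. This shows the framework's generic autodiff on an einsum expression; it is not the algorithm proposed in the paper:

    # Sketch: Einstein-notation expression with first- and second-order
    # derivatives via JAX autodiff.
    import jax
    import jax.numpy as jnp

    def f(x, A):
        # Quadratic form x_i A_ij x_j written in Einstein notation.
        return jnp.einsum("i,ij,j->", x, A, x)

    A = jnp.array([[2.0, 1.0], [1.0, 3.0]])
    x = jnp.array([1.0, -1.0])

    grad_f = jax.grad(f)(x, A)     # first derivative:  (A + A^T) x
    hess_f = jax.hessian(f)(x, A)  # second derivative: A + A^T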


Entropy ◽ 2018 ◽ Vol 20 (11) ◽ pp. 840
Author(s): Frédéric Barbaresco

We introduce a poly-symplectic extension of Souriau's Lie groups thermodynamics based on the higher-order model of statistical physics introduced by Ingarden. This extended model could be used for small-data analytics and machine learning on Lie groups. Souriau's geometric theory of heat is well adapted to describing the probability density (maximum-entropy Gibbs density) of data living on groups or on homogeneous manifolds. For small-data analytics (rarefied gases, sparse statistical surveys, …), the maximum-entropy density should incorporate higher-order moment constraints (the Gibbs density is not defined by the first moment alone; fluctuations require second- and higher-order moments), as introduced by Ingarden. We use a poly-symplectic model introduced by Christian Günther, replacing the symplectic form by a vector-valued form. The poly-symplectic approach generalizes the Noether theorem, the existence of moment mappings, the Lie algebra structure of the space of currents, the (non-)equivariant cohomology, and the classification of G-homogeneous systems. The formalism is covariant, i.e., no special coordinates or coordinate systems on the parameter space are used to construct the Hamiltonian equations. We underline the structure of these models and the process by which these generic structures are built. We also introduce a more synthetic Koszul definition of the Fisher metric, based on the Souriau model, which we name the Souriau-Fisher metric. This Lie groups thermodynamics is the bedrock for Lie group machine learning, providing a fully covariant maximum-entropy Gibbs density based on representation theory (the symplectic structure of coadjoint orbits for the Souriau non-equivariant model associated with a class of cohomology).
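
For orientation, the first-order Souriau Gibbs density that the higher-order model extends can be written (in one common convention; signs and pairings vary across references) as

\[
p_\beta(\xi) \;=\; \exp\big(-\langle \beta, U(\xi)\rangle - \Phi(\beta)\big),
\qquad
\Phi(\beta) \;=\; \log \int_M \exp\big(-\langle \beta, U(\xi)\rangle\big)\, d\lambda(\xi),
\]

where $U$ is the moment map of the group action, $\beta$ the geometric temperature in the Lie algebra, and $\Phi$ the log-partition function; Ingarden's higher-order model adds constraints on moments of $U$ beyond the first.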


2014 ◽ Vol 14 (01) ◽ pp. 1450004
Author(s): Laurent Bienvenu, Rupert Hölzl, Joseph S. Miller, André Nies

We consider effective versions of two classical theorems, the Lebesgue density theorem and the Denjoy–Young–Saks theorem. For the first, we show that a Martin-Löf random real z ∈ [0, 1] is Turing incomplete if and only if every effectively closed class 𝒞 ⊆ [0, 1] containing z has positive density at z. Under the stronger assumption that z is not LR-hard, we show that every such class has density one at z. These results have since been applied to solve two open problems on the interaction between the Turing degrees of Martin-Löf random reals and K-trivial sets: the noncupping and covering problems. We say that f : [0, 1] → ℝ satisfies the Denjoy alternative at z ∈ [0, 1] if either the derivative f′(z) exists, or the upper and lower derivatives at z are +∞ and -∞, respectively. The Denjoy–Young–Saks theorem states that every function f : [0, 1] → ℝ satisfies the Denjoy alternative at almost every z ∈ [0, 1]. We answer a question posed by Kučera in 2004 by showing that a real z is computably random if and only if every computable function f satisfies the Denjoy alternative at z. For Markov computable functions, which are only defined on computable reals, we can formulate the Denjoy alternative using pseudo-derivatives. Call a real z DA-random if every Markov computable function satisfies the Denjoy alternative at z. We considerably strengthen a result of Demuth (Comment. Math. Univ. Carolin. 24(3) (1983) 391–406) by showing that every Turing incomplete Martin-Löf random real is DA-random. The proof involves the notion of nonporosity, a variant of density, which is the bridge between the two themes of this paper. We finish by showing that DA-randomness is incomparable with Martin-Löf randomness.
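
For reference, "positive density at z" is meant in the sense of lower Lebesgue density; in one standard formulation (not quoted from the paper), with λ denoting Lebesgue measure,

\[
\underline{\rho}(\mathcal{C} \mid z) \;=\; \liminf_{h \to 0^{+}} \frac{\lambda\big(\mathcal{C} \cap [z-h,\, z+h]\big)}{2h} \;>\; 0,
\]

and "density one at z" means this lower limit equals 1.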

