UNDERSTANDING FEATURE DISCOVERY IN WEBSITE FINGERPRINTING ATTACKS

Author(s):  
Nate Mathews ◽  
Payap Sirinam ◽  
Matthew Wright
2021 ◽  
Vol 48 ◽  
pp. 101261
Author(s):  
Gokula Vasantha ◽  
David Purves ◽  
John Quigley ◽  
Jonathan Corney ◽  
Andrew Sherlock ◽  
...  

2009 ◽  
Vol 25 (11) ◽  
pp. 1461-1462 ◽  
Author(s):  
Xuefeng Bruce Ling ◽  
Harvey Cohen ◽  
Joseph Jin ◽  
Irwin Lau ◽  
James Schilling

2021 ◽  
Author(s):  
David Leong

Entrepreneurship concerns action under uncertainty. Situated within that uncertainty are the opportunities that entrepreneurs seek. How are these opportunities seen? Entrepreneurial opportunities contain seeds of potentiality, potentialities for profit, and these are the reasons entrepreneurs act to exploit them and set entrepreneurial emergence in motion. Intentionality follows, with the construction of a coherent set of activities, or incoherent intuitive moves, to pursue the opportunity, including injecting resources and mobilizing social and material networks. How are opportunities discovered and perceived? The current academic debate features discovery and creation. Do opportunities exist independently, as a pre-existing reality, even without being observed? Or, as some argue, are they not pre-existing in space and time with an objective existence, but subjectively and socially constructed? On contact with such opportunities, what spurs entrepreneurs to act, and what forces are at work? Are the opportunities real or artificial? Can they be holographic representations that provide cues and signals for entrepreneurs to act? Can opportunity-as-hologram explain how entrepreneurs become inspired and motivated to pursue opportunities?

This paper explores, revisits, and recasts perspectives on opportunities and addresses the subtle conceptual issues at the core of entrepreneurship theories, holding the two views, discovery and creation of opportunities, to be both valid and mutually non-exclusive on holographic terms. In the discussion, the paper draws on the implicate and explicate orders, quantum-theoretic concepts proposed by the physicist David Bohm to explain the bizarre and unpredictable behaviour of subatomic particles, which bears a strong resemblance to the free-spirited, free-will self-organizing behaviour of entrepreneurs.

Our theorization has implications for entrepreneurs and for entrepreneurship research that draws on quantum science.


Author(s):  
Céline Hocquette ◽  
Stephen H. Muggleton

Predicate Invention in Meta-Interpretive Learning (MIL) is generally based on a top-down approach, and the search for a consistent hypothesis is carried out starting from the positive examples as goals. We consider augmenting top-down MIL systems with a bottom-up step during which the background knowledge is generalised with an extension of the immediate consequence operator for second-order logic programs. This new method provides a way to perform extensive predicate invention useful for feature discovery. We demonstrate this method is complete with respect to a fragment of dyadic datalog. We theoretically prove this method reduces the number of clauses to be learned for the top-down learner, which in turn can reduce the sample complexity. We formalise an equivalence relation for predicates which is used to eliminate redundant predicates. Our experimental results suggest pairing the state-of-the-art MIL system Metagol with an initial bottom-up step can significantly improve learning performance.
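As a rough illustration of the bottom-up step described in the abstract, the sketch below applies a plain first-order immediate consequence operator to ground datalog facts until a fixpoint is reached. It is a deliberately simplified Python sketch: the data structures, predicate names, and helper functions are invented for illustration, and it omits the second-order, predicate-inventing extension that the paper actually develops.

# Minimal sketch of a bottom-up immediate-consequence (T_P) step over ground
# datalog facts. First-order simplification for illustration only; all names
# and representations are assumptions, not the paper's formulation.

def immediate_consequences(rules, facts):
    """Apply every rule once to the current fact set and return the new facts.

    rules: list of (head, body) pairs, where head is an atom such as
           ('ancestor', ('X', 'Y')) and body is a list of atoms; variables
           are capitalised strings, constants are lowercase strings.
    facts: set of ground atoms, e.g. ('parent', ('alice', 'bob')).
    """
    derived = set()
    for head, body in rules:
        for binding in _match_body(body, facts, {}):
            derived.add(_substitute(head, binding))
    return derived - facts

def _match_body(body, facts, binding):
    # Enumerate variable bindings that make every body atom a known fact.
    if not body:
        yield binding
        return
    pred, args = body[0]
    for fact_pred, fact_args in facts:
        if fact_pred != pred or len(fact_args) != len(args):
            continue
        new_binding, ok = dict(binding), True
        for a, v in zip(args, fact_args):
            if a[0].isupper():                      # variable
                if new_binding.get(a, v) != v:
                    ok = False
                    break
                new_binding[a] = v
            elif a != v:                            # constant mismatch
                ok = False
                break
        if ok:
            yield from _match_body(body[1:], facts, new_binding)

def _substitute(atom, binding):
    pred, args = atom
    return (pred, tuple(binding.get(a, a) for a in args))

def least_model(rules, facts):
    # Iterate the operator to a fixpoint: the least model of the program.
    facts = set(facts)
    while True:
        new = immediate_consequences(rules, facts)
        if not new:
            return facts
        facts |= new

Saturating the background knowledge in this way before search is the general idea behind the bottom-up step; how the resulting facts feed a top-down learner such as Metagol, and how invented predicates enter the picture, is specific to the paper's second-order setting and is not captured by this sketch.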


2020 ◽  
Vol 32 (5) ◽  
pp. 1018-1032 ◽  
Author(s):  
Noah Frazier-Logue ◽  
Stephen José Hanson

Multilayer neural networks have led to remarkable performance on many kinds of benchmark tasks in text, speech, and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting and misspecification. One approach to these estimation and related problems (e.g., saddle points, collinearity, feature discovery) is called Dropout. The Dropout algorithm removes hidden units according to a binomial random variable with probability p prior to each update, creating random “shocks” to the network that are averaged over updates (thus creating weight sharing). In this letter, we reestablish an older parameter search method and show that Dropout is a special case of this more general model, stochastic delta rule (SDR), published originally in 1990. Unlike Dropout, SDR redefines each weight in the network as a random variable with mean μ_w and standard deviation σ_w. Each weight random variable is sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights (accumulated in the mean values). Both parameters are updated according to prediction error, thus resulting in weight noise injections that reflect a local history of prediction error and local model averaging. SDR therefore implements a more sensitive local gradient-dependent simulated annealing per weight converging in the limit to a Bayes optimal network. We run tests on standard benchmarks (CIFAR and ImageNet) using a modified version of DenseNet and show that SDR outperforms standard Dropout in top-5 validation error by approximately 13% with DenseNet-BC 121 on ImageNet and find various validation error improvements in smaller networks. We also show that SDR reaches the same accuracy that Dropout attains in 100 epochs in as few as 40 epochs, as well as improvements in training error by as much as 80%.
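To make the weight-as-random-variable idea concrete, below is a minimal NumPy sketch of an SDR-style update on a single linear layer: each weight has a mean and a standard deviation, one network is sampled per forward pass, and both parameters are moved according to the prediction-error gradient while the noise is decayed toward zero. The learning rates, the decay factor, and the toy regression data are assumptions for illustration, not values or architecture from the paper.

# Minimal NumPy sketch of a stochastic-delta-rule-style update on one linear
# layer. Hyperparameters and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 1
mu = rng.normal(0.0, 0.1, size=(n_in, n_out))       # weight means
sigma = np.full((n_in, n_out), 0.05)                 # weight standard deviations
lr_mu, lr_sigma, zeta = 0.05, 0.01, 0.95             # assumed learning rates and decay

X = rng.normal(size=(64, n_in))
y = X @ np.array([[1.0], [-2.0], [0.5], [0.0]])      # toy regression target

for step in range(200):
    eps = rng.standard_normal(mu.shape)
    W = mu + sigma * eps              # sample one network from the weight distribution
    err = X @ W - y
    grad_W = X.T @ err / len(X)       # gradient of mean squared error w.r.t. W
    mu -= lr_mu * grad_W              # move the means against the error gradient
    sigma += lr_sigma * np.abs(grad_W)  # inject more noise where error is large
    sigma *= zeta                     # then decay, annealing toward a deterministic net
    sigma = np.clip(sigma, 1e-6, None)

The decay factor plays the annealing role the abstract describes: early in training the per-weight noise reflects recent prediction error, and as errors shrink the sampled networks collapse toward the mean weights.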


Author(s):  
Zhung-Xun Liao ◽  
Shou-Chung Li ◽  
Wen-Chih Peng ◽  
Philip S. Yu ◽  
Te-Chuan Liu
