Effectiveness of the One-Minute Preceptor Model for Diagnosing the Patient and the Learner: Proof of Concept

2004 ◽  
Vol 79 (1) ◽  
pp. 42-49 ◽  
Author(s):  
Eva Aagaard ◽  
Arianne Teherani ◽  
David M. Irby
Author(s):  
Giovanni Camurati ◽  
Aurélien Francillon ◽  
François-Xavier Standaert

Recently, some wireless devices have been found vulnerable to a novel class of side-channel attacks, called Screaming Channels. These leaks appear when sensitive signals from the processor are unintentionally broadcast by a radio transmitter placed on the same chip. Previous work focused on identifying the root causes and on mounting an attack at a distance considerably larger than the one achievable with conventional electromagnetic side channels, demonstrated in the low-noise environment of an anechoic chamber. However, a detailed understanding of the leak, attacks that take full advantage of the novel vector, and security evaluations in more practical scenarios are still missing. In this paper, we conduct a thorough experimental analysis of the peculiar properties of Screaming Channels. For example, we learn about the coexistence of intended and unintended data, the role of distance and other parameters in the strength of the leak, the distortion of the leak model, and the portability of the profiles. With such insights, we build better attacks. We profile a device connected via cable with 10000·500 traces. Then, 5 months later, we attack a different instance at 15 m in an office environment. We recover the AES-128 key with 5000·1000 traces and key enumeration up to 2^23. Leveraging spatial diversity, we mount attacks in the presence of obstacles. As a first example of application to a real system, we show a proof-of-concept attack against the authentication method of Google Eddystone beacons. On the one hand, this work lowers the bar for more realistic attacks, highlighting the importance of the novel attack vector. On the other hand, it provides a broader security evaluation of the leaks, helping defenders and radio designers evaluate the risk and the need for countermeasures.
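As a rough illustration of the statistics behind such key-recovery attacks, the sketch below ranks the 256 candidates for one AES key byte by Pearson correlation against captured traces. The trace format and the simplified HW(plaintext XOR key) leakage model are assumptions for illustration; the paper's attack uses profiled templates on the radio leak.

```python
import numpy as np

def hamming_weight(x):
    # Number of set bits in a byte value.
    return int(bin(int(x) & 0xFF).count("1"))

def rank_key_byte(traces, plaintexts, byte_idx):
    """Rank the 256 candidates for one key byte by max correlation.

    traces:     (n_traces, n_samples) float array of demodulated traces
    plaintexts: (n_traces, 16) uint8 array of known plaintexts
    """
    centered = traces - traces.mean(axis=0)
    scores = np.zeros(256)
    for k in range(256):
        # Hypothetical leakage: Hamming weight of plaintext XOR key guess.
        hyp = np.array([hamming_weight(p[byte_idx] ^ k) for p in plaintexts],
                       dtype=float)
        hyp -= hyp.mean()
        # Pearson correlation of the hypothesis against every time sample.
        num = hyp @ centered
        den = np.sqrt((hyp ** 2).sum() * (centered ** 2).sum(axis=0))
        scores[k] = np.abs(num / den).max()
    return np.argsort(scores)[::-1]  # most likely key-byte values first
```

Key enumeration up to 2^23 then searches combinations of the top-ranked candidates across the 16 byte positions instead of requiring each byte's best guess to be correct.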


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Charlotte Martial ◽  
Armand Mensen ◽  
Vanessa Charland-Verville ◽  
Audrey Vanhaudenhuyse ◽  
Daniel Rentmeister ◽  
...  

Abstract The neurobiological basis of near-death experiences (NDEs) is unknown, but a few studies have attempted to investigate it by reproducing, in laboratory settings, phenomenological experiences that closely resemble NDEs. So far, no study has induced NDE-like features via hypnotic modulation while simultaneously measuring changes in brain activity using high-density EEG. Five volunteers who had previously experienced a pleasant NDE were invited to re-experience the NDE memory and another pleasant autobiographical memory (dating to the same time period), in normal consciousness and under hypnosis. We compared the hypnosis-induced subjective experience with that of the memory of the genuine experience. Continuous high-density EEG was recorded throughout. At a phenomenological level, we succeeded in recreating NDE-like features without any adverse effects. Absorption and dissociation levels were reported as higher during all hypnosis conditions compared to normal consciousness conditions, suggesting that our hypnosis-based protocol intensified the felt subjective experience in the recall of both memories. The recall of NDE phenomenology was related to an increase of alpha activity in frontal and posterior regions. This study provides a proof-of-concept methodology for studying the phenomenon, enabling prospective exploration of NDE-like features and the associated EEG changes in controlled settings.
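For a sense of the kind of measure behind the reported alpha-band effect, the sketch below computes relative alpha power (8-12 Hz) per channel with Welch's method. The data arrays, sampling rate, and band limits are illustrative assumptions, not the study's analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs, band=(8.0, 12.0)):
    """eeg: (n_channels, n_samples) array; returns relative alpha power."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # Alpha power as a fraction of total broadband power, per channel.
    return psd[:, in_band].sum(axis=1) / psd.sum(axis=1)

# Hypothetical usage: channel-wise contrast between recall conditions.
# delta = alpha_power(nde_recall, fs=500) - alpha_power(control_recall, fs=500)
```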


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0256901
Author(s):  
James W. A. Strachan ◽  
Arianna Curioni ◽  
Merryn D. Constable ◽  
Günther Knoblich ◽  
Mathieu Charbonneau

The ability to transmit information between individuals through social learning is a foundational component of cultural evolution. However, how this transmission occurs is still debated. On the one hand, the copying account draws parallels with biological mechanisms for genetic inheritance, arguing that learners copy what they observe and that novel variations occur through random copying errors. On the other hand, the reconstruction account claims that, rather than directly copying behaviour, learners reconstruct the information that they believe to be most relevant on the basis of pragmatic inference and environmental and contextual cues. Distinguishing these two accounts empirically is difficult with data from typical transmission chain studies, because the predictions they generate frequently overlap. In this study we present a methodological approach that generates divergent predictions from these accounts by manipulating the task context between model and learner in a transmission episode. We then report an empirical proof-of-concept that applies this approach. The results show that, when a model introduces context-dependent embedded signals into their actions that are not intended to be transmitted, it is possible to empirically distinguish between the competing predictions of the two accounts. Our approach can therefore serve to clarify the cognitive mechanisms at play in cultural transmission and can make important contributions to the debate between preservative and reconstructive schools of thought.
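The contrast can be made concrete with a toy simulation: under copying, incidental "embedded signals" in the model's behaviour tend to survive transmission apart from random error, while under reconstruction they are filtered out. Everything in the sketch (the action vocabulary, error rate, and relevance filter) is an invented stand-in for the experimental task.

```python
import random

ACTIONS = ["lift", "rotate", "tap", "slide"]

def copy_account(observed, error=0.05):
    # Copying: reproduce each observed action, with rare random errors.
    return [a if random.random() > error else random.choice(ACTIONS)
            for a in observed]

def reconstruction_account(observed, relevant):
    # Reconstruction: keep only actions inferred to matter for the goal,
    # dropping context-dependent signals not intended for transmission.
    return [a for a in observed if a in relevant]

model_behaviour = ["lift", "tap", "tap", "rotate"]  # "tap" = embedded signal
goal_relevant = {"lift", "rotate"}

print(copy_account(model_behaviour))                           # signals usually survive
print(reconstruction_account(model_behaviour, goal_relevant))  # signals dropped
```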


2015 ◽  
Author(s):  
Greg Jensen ◽  
Drew Altschul

In this opinion piece, we outline two shortcomings in experimental design that limit the claims that can be made about concept learning in animals. On the one hand, most studies of concept learning train too few concepts in parallel to support general claims about a capacity for subsequent abstraction. On the other hand, even studies that train many categories of stimulus in parallel test only one or two stimuli at a time, allowing even a simplistic learning rule to succeed by making informed guesses. To demonstrate these shortcomings, we include simulations performed using an off-the-shelf image classifier. These simulations demonstrate that, when either training or testing is overly simplistic, a classification algorithm that is incapable of abstraction nevertheless yields levels of performance that have been described in the literature as proof of concept learning in animals.
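The "informed guessing" concern can be illustrated with a toy simulation: a pure memorizer with no abstraction, tested two alternatives at a time, scores far above chance simply by choosing the more familiar item. The random feature vectors below are stand-ins for classifier embeddings; none of this reproduces the authors' actual simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
center_a, center_b = rng.standard_normal((2, 8))

# "Training" is pure memorization of category-A exemplars.
train_a = center_a + 0.3 * rng.standard_normal((50, 8))

def familiarity(x):
    # Nearest-neighbour similarity to memorized items; no category concept.
    return -np.linalg.norm(train_a - x, axis=1).min()

hits = 0
for _ in range(1000):
    test_a = center_a + 0.3 * rng.standard_normal(8)  # trained category
    test_b = center_b + 0.3 * rng.standard_normal(8)  # novel category
    hits += familiarity(test_a) > familiarity(test_b)
print(f"two-alternative accuracy without any concept: {hits / 1000:.2%}")
```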


2021 ◽  
Author(s):  
An Su ◽  
Ling Wang ◽  
Xinqiao Wang ◽  
Chengyun Zhang ◽  
Hongliang Duan

This study offers a proof of concept that the human invention of a named reaction can be reproduced by a zero-shot learning version of the transformer. While state-of-the-art reaction prediction machine learning models can predict chemical reactions through transfer learning from thousands of training samples of the same reaction types as those to be predicted, how to prepare models to predict truly "unseen" reactions remains an open question. We aim to equip the transformer model with the ability to predict unseen reactions following the concept of "zero-shot learning". To find out what kind of auxiliary information is needed, we reproduce the human invention of the Chan-Lam coupling reaction, whose inventors were inspired by two existing reactions: the Suzuki reaction and Barton's bismuth arylation reaction. After training with samples from these two reactions as well as the USPTO dataset, the transformer model can predict the Chan-Lam coupling reaction with 55.7% top-1 accuracy, a large improvement over the 17.2% achieved by the model trained on the USPTO dataset alone. Our model also mimics the later stage of this history, in which the initial case of the Chan-Lam coupling reaction was generalized to a wide range of reactants and reagents, via a "one-shot learning" approach. The results of this study show that providing existing reactions as auxiliary information helps the transformer predict unseen reactions, and that just one or a few samples of the unseen reaction can boost the model's generalization ability.
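For concreteness, the snippet below shows one common way such predictions are scored: top-1 accuracy under canonical-SMILES matching with RDKit. The prediction and reference lists are placeholders; the model itself (a transformer seq2seq over reaction SMILES) is not shown.

```python
from rdkit import Chem

def canonical(smiles):
    # Canonical SMILES, or None if the string does not parse.
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def top1_accuracy(predictions, references):
    hits = 0
    for pred, ref in zip(predictions, references):
        c_pred, c_ref = canonical(pred), canonical(ref)
        if c_pred is not None and c_pred == c_ref:
            hits += 1
    return hits / len(references)

# Placeholder predictions for two hypothetical test reactions.
print(top1_accuracy(["CC(=O)Oc1ccccc1C(=O)O", "C1=CC=CC=C1"],
                    ["CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1O"]))  # -> 0.5
```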


2019 ◽  
Vol 6 (5) ◽  
pp. 955-961 ◽  
Author(s):  
Hongfei Cheng ◽  
Nailiang Yang ◽  
Xiaozhi Liu ◽  
Qinbai Yun ◽  
Min Hao Goh ◽  
...  

ABSTRACT Phase engineering is emerging as an attractive strategy for tuning the properties and functionalities of nanomaterials. In particular, amorphous/crystalline heterophase nanostructures have exhibited some intriguing properties. Herein, the one-pot wet-chemical synthesis of two types of amorphous/crystalline heterophase PdCu nanosheets is reported, one amorphous-phase-dominant and the other crystalline-phase-dominant. The aging process of the synthesized PdCu nanosheets is then studied, during which their crystallinity increases, accompanied by changes in some physicochemical properties. As a proof-of-concept application, the effect of aging on the catalytic hydrogenation of 4-nitrostyrene is investigated. The amorphous-phase-dominant nanosheets initially show excellent chemoselectivity; after aging for 14 days, their catalytic activity exceeds that of the crystalline-phase-dominant nanosheets. This work demonstrates the intriguing properties of heterophase nanostructures, providing a new platform for future studies on the regulation of the functionalities and applications of nanomaterials by phase engineering.


Algorithms ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 216
Author(s):  
Matteo Ceccarello ◽  
Andrea Pietracaprina ◽  
Geppino Pucci ◽  
Eli Upfal

We present an algorithm for approximating the diameter of massive weighted undirected graphs on distributed platforms supporting a MapReduce-like abstraction. In order to be efficient in terms of both time and space, our algorithm is based on a decomposition strategy which partitions the graph into disjoint clusters of bounded radius. Theoretically, our algorithm uses linear space and yields a polylogarithmic approximation guarantee; most importantly, for a large family of graphs, it features a round complexity asymptotically smaller than the one exhibited by a natural approximation algorithm based on the state-of-the-art Δ-stepping SSSP algorithm, which is its only practical, linear-space competitor in the distributed setting. We complement our theoretical findings with a proof-of-concept experimental analysis on large benchmark graphs, which suggests that our algorithm may attain substantial improvements in terms of running time compared to the aforementioned competitor, while featuring, in practice, a similar approximation ratio.
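The decomposition idea can be sketched sequentially (the actual algorithm runs on a MapReduce-like platform): grow disjoint bounded-radius clusters around sampled centers, then bound the diameter through the quotient graph of clusters. The networkx-based sketch below and its parameter choices are illustrative assumptions, not the paper's implementation or its exact guarantee; it assumes a connected graph.

```python
import random
import networkx as nx

def approx_diameter(g, n_centers, weight="weight"):
    """Upper-bound the diameter of a connected weighted graph g."""
    centers = random.sample(list(g.nodes), n_centers)
    # Distances from every sampled center to all nodes.
    dists = {c: nx.single_source_dijkstra_path_length(g, c, weight=weight)
             for c in centers}
    # Assign each node to its nearest center; track the largest cluster radius.
    owner, radius = {}, 0.0
    for v in g.nodes:
        c = min(centers, key=lambda c: dists[c][v])
        owner[v], radius = c, max(radius, dists[c][v])
    # Quotient graph over clusters; inflate each inter-cluster edge so that
    # one quotient hop always dominates a real path through both clusters.
    q = nx.Graph()
    q.add_nodes_from(centers)
    for u, v, d in g.edges(data=True):
        if owner[u] != owner[v]:
            w = d.get(weight, 1) + 2 * radius
            if (not q.has_edge(owner[u], owner[v])
                    or q[owner[u]][owner[v]][weight] > w):
                q.add_edge(owner[u], owner[v], **{weight: w})
    diam_q = max((d for _, lengths in
                  nx.all_pairs_dijkstra_path_length(q, weight=weight)
                  for d in lengths.values()), default=0.0)
    return diam_q + 2 * radius  # reaching/leaving the endpoint clusters
```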




This article proposes an innovative approach, fully based on logic, to determine the relative positions and orientations of objects in a scene photographed from different points of view, as well as those of the cameras used to take the pictures. The proposal is not based on 2D feature extraction, projective geometry, or least-squares adjustment, but on a logical approach built on an enumeration of simple relationships between the objects visible in the photos. It imitates the natural, unconscious reasoning each of us performs when observing a scene: is this object more to the right than that one? And is this other one further away from me than the one that partially hides it? The problem is thus approached by identifying and recognizing objects in photographs, not by measuring millions of points in space without any idea of the object to which they belong. This article presents a "proof of concept" based on virtual experimentation: in a discrete 3D space, a simple scene, composed of spheres of different colors and cameras, is modelled in a 3D format. In this work the positioning of the spheres and cameras is limited to a plane. Cameras are placed in the scene so as to see the spheres, and an image is then generated for each camera. The application reads each image and deduces relationships between objects and cameras. These relationships, based on the visible occlusions between the projections of the objects onto the photographs, are formalized as Allen relations. A knowledge base is implemented to allow an iterative process of SPARQL queries for qualitative spatial reasoning, leading to a set of possible solutions. Finally, the system deduces the relative positions of objects and cameras, and the result can be imported and used within several photogrammetry software suites.
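A minimal sketch of the knowledge-base step: ordering and occlusion facts extracted from one image are stored as RDF triples and queried with SPARQL via rdflib. The vocabulary (ex:leftOf, ex:occludes) is an invented stand-in for the paper's Allen-relation formalization.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/scene#")
g = Graph()
g.bind("ex", EX)

# Facts observed in one photo: red is left of blue, and blue partially
# occludes green (so blue is closer to this camera than green).
g.add((EX.red, EX.leftOf, EX.blue))
g.add((EX.blue, EX.occludes, EX.green))

# Query the depth ordering implied by occlusion.
results = g.query("""
    PREFIX ex: <http://example.org/scene#>
    SELECT ?near ?far WHERE { ?near ex:occludes ?far . }
""")
for near, far in results:
    print(f"{near} is closer to the camera than {far}")
```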


2019 ◽  
Vol 142 (7) ◽  
Author(s):  
E. Thalmann ◽  
M. H. Kahrobaiyan ◽  
I. Vardi ◽  
S. Henein

Abstract The most important property for accurate mechanical time bases is isochronism: the independence of the period from the oscillation amplitude. This paper develops a new concept in isochronism adjustment for flexure-based watch oscillators. Flexure pivot oscillators, which could advantageously replace the traditional balance wheel-spiral spring oscillator used in mechanical watches thanks to their significantly lower friction, exhibit nonlinear elastic properties that introduce an isochronism defect. Rather than minimizing this defect, we are interested in controlling it to compensate for external defects such as the one introduced by escapements. We show that this can be done by deriving a formula that expresses the change of frequency of the oscillator with amplitude, i.e., the isochronism defect, caused by elastic nonlinearity. To adjust the isochronism, we present a new method that takes advantage of the second-order parasitic motion of flexures and embody it in a new architecture we call the co-RCC flexure pivot oscillator. In this realization, the isochronism defect of the oscillator is controlled by adjusting the stiffness of parallel flexures before fabrication through their length L_p, which has no effect on any other crucial property, including the nominal frequency. We show that this method is also compatible with post-fabrication tuning by laser ablation. The advantage of our design is that isochronism tuning is an intrinsic part of the oscillator, whereas previous isochronism correctors were mechanisms added to the oscillator. The results of our previous research are also implemented in this mechanism to achieve gravity insensitivity, an essential property for mechanical watch time bases. We derive analytical models for the isochronism and gravity sensitivity of the oscillator and validate them by finite element simulation. We give an example of dimensioning this oscillator to reach typical practical watch specifications and show that we can tune the isochronism defect with a resolution of 1 s/day within an operating range of 10% of the amplitude. We present a mock-up of the oscillator serving as a preliminary proof of concept.
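As a back-of-the-envelope illustration of an isochronism defect, the sketch below uses the standard Duffing approximation, in which a cubic elastic nonlinearity shifts the frequency by roughly (3/8)*mu*a^2 relative to nominal, and converts that shift into a daily rate error. The coefficient mu and the amplitudes are invented numbers; the paper derives its own closed-form expression for the co-RCC pivot.

```python
import numpy as np

def rate_error_s_per_day(a, mu):
    """Relative frequency shift f(a)/f0 - 1 ~ (3/8) * mu * a^2 for a
    Duffing-type restoring torque, expressed in seconds per day."""
    return 86400.0 * (3.0 / 8.0) * mu * a ** 2

a_nom = np.radians(15.0)            # nominal oscillation amplitude (assumed)
for da in (-0.10, 0.0, 0.10):       # sweep a 10% amplitude operating range
    a = a_nom * (1.0 + da)
    print(f"amplitude {np.degrees(a):5.1f} deg -> "
          f"{rate_error_s_per_day(a, mu=1e-3):+.2f} s/day")
```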

