baseline experiment
Recently Published Documents

TOTAL DOCUMENTS: 49 (FIVE YEARS: 6)
H-INDEX: 12 (FIVE YEARS: 1)

2021
Author(s): Rebecca Peretz-Lange, Paul Muentener

Children hold rich essentialist beliefs about natural and social categories, representing them as discrete (mutually exclusive with sharp boundaries) and stable (with membership remaining constant over an individual’s lifespan). Children use essential categories to make inductive inferences about individuals. How do children determine what categories to consider essential and to use as an inductive base? Although much research has demonstrated children’s use of labels to form categories, here we explore whether children might also use the observed discreteness or stability of a trait to form categories based on that trait. In the present study, we taught children about novel creatures and provided them with a cue (discreteness, stability, labels, or no cue) to form texture categories rather than shape or color categories. Experiment 1 found that children (4–6 years, n = 140) used labels but not discreteness or stability cues to form texture categories more often than at baseline. Experiment 2 (5–6 years, n = 152) found that children who later recognized the stability and discreteness cues used them to form categories more often than those who did not later recognize the cues, but were still overall less likely to use these cues than to use label cues. Results underscore the unique importance of labels as a cue for category formation and suggest that children do not readily rely on the stability and discreteness of a trait to form animate categories, despite readily inferring that such categories are stable and discrete. Implications for natural and social category representations are discussed.


2020, Vol 7 (1)
Author(s): Joël Legrand, Romain Gogdemir, Cédric Bousquet, Kevin Dalleau, Marie-Dominique Devignes, ...

Pharmacogenomics (PGx) studies how individual gene variations impact drug response phenotypes, which makes PGx-related knowledge a key component towards precision medicine. A significant part of the state-of-the-art knowledge in PGx is accumulated in scientific publications, where it is hardly reusable by humans or software. Natural language processing techniques have been developed to guide the experts who curate this body of knowledge. However, existing work is limited by the absence of a high-quality annotated corpus focused on the PGx domain. In particular, this absence restricts the use of supervised machine learning. This article introduces PGxCorpus, a manually annotated corpus designed to fill this gap and to enable the automatic extraction of PGx relationships from text. It comprises 945 sentences from 911 PubMed abstracts, annotated with PGx entities of interest (mainly gene variations, genes, drugs and phenotypes) and relationships between those. In this article, we present the corpus itself, its construction and a baseline experiment that illustrates how it may be leveraged to synthesize and summarize PGx knowledge.
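To make the annotation scheme concrete, a corpus of this kind can be represented as sentences carrying character-offset entity spans plus typed relations between them. The schema and the sample sentence below are illustrative assumptions, not the actual PGxCorpus file format or type inventory:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    label: str   # e.g. "GeneVariation", "Gene", "Drug", "Phenotype" (assumed labels)
    start: int   # character offset of the span start in the sentence
    end: int     # character offset one past the span end

@dataclass
class Relation:
    rel_type: str  # hypothetical relation type, e.g. "influences"
    head: int      # index of the head entity in the entities list
    tail: int      # index of the tail entity

@dataclass
class AnnotatedSentence:
    text: str
    entities: list = field(default_factory=list)
    relations: list = field(default_factory=list)

# Hypothetical annotated sentence in the spirit of the corpus
s = AnnotatedSentence("CYP2D6 variants alter tamoxifen response.")
s.entities += [Entity("GeneVariation", 0, 15), Entity("Drug", 22, 31)]
s.relations.append(Relation("influences", 0, 1))
```

Offsets rather than token indices keep the annotation robust to tokenizer changes, which is a common design choice in entity-relation corpora.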


2019, Vol 2019, pp. 1-13
Author(s): Surender Verma, Shankita Bhardwaj

Future long baseline experiments such as DUNE and T2HKK have promising prospects for determining the neutrino mass hierarchy and measuring the standard CP phase δ. However, the presence of possible nonstandard interactions (NSI) of neutrinos with matter may complicate this picture and is the subject of the present work. We have studied the standard parameter degeneracies in the presence of NSI with the DUNE and T2HKK experiments. We examine the mass hierarchy degeneracy assuming (i) all NSI parameters to be nonzero and (ii) only one NSI parameter (ϵeμ) and its corresponding CP phase (δeμ) to be nonzero. We find that the latter case is more appropriate for resolving the mass hierarchy degeneracy with the DUNE and T2HKK experiments, due to the relatively small uncertainties emanating from the NSI sector. We have also investigated the octant degeneracy with the neutrino (νμ→νe) and antineutrino (ν¯μ→ν¯e) modes separately. We find that a long baseline experiment combining neutrino and antineutrino modes is essential to resolve this degeneracy. Furthermore, we have considered DUNE in conjunction with the T2HKK experiment to study the CP phase degeneracy due to the standard (δ) and nonstandard (δeμ) CP phases. We find that DUNE and T2HKK, in conjunction, have greater sensitivity to CP violation effects (10σ for true NH and 8.2σ for true IH).
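For context, the NSI parameters ϵeμ and δeμ discussed above enter the effective matter Hamiltonian in its standard parameterization; the form below is the conventional one from the oscillation literature, not quoted from this abstract:

```latex
H = \frac{1}{2E}\, U
\begin{pmatrix} 0 & 0 & 0 \\ 0 & \Delta m^2_{21} & 0 \\ 0 & 0 & \Delta m^2_{31} \end{pmatrix}
U^{\dagger}
+ \sqrt{2}\, G_F N_e
\begin{pmatrix}
1+\epsilon_{ee} & \epsilon_{e\mu} & \epsilon_{e\tau} \\
\epsilon_{e\mu}^{*} & \epsilon_{\mu\mu} & \epsilon_{\mu\tau} \\
\epsilon_{e\tau}^{*} & \epsilon_{\mu\tau}^{*} & \epsilon_{\tau\tau}
\end{pmatrix},
\qquad
\epsilon_{e\mu} = |\epsilon_{e\mu}|\, e^{i\delta_{e\mu}}
```

Case (ii) of the abstract corresponds to keeping only the off-diagonal ϵeμ term (and its phase) nonzero in the second matrix.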


2019
Author(s): N. Agafonova, A. Alexandrov, A. Anokhina, S. Aoki, A. Ariga, ...

OPERA is a long-baseline experiment designed to search for νμ→ντ oscillations in appearance mode. It was based at the INFN Gran Sasso laboratory (LNGS) and took data from 2008 to 2012 with the CNGS neutrino beam from CERN. After the discovery of ντ appearance in 2015, with 5.1σ significance, the criteria to select ντ candidates have been extended and a multivariate approach has been used for event identification. In this way the statistical uncertainty in the measurement of the oscillation parameters and of ντ properties has been improved. Results are reported.
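The rarity of ντ appearance at OPERA follows from the two-flavor vacuum oscillation formula. The sketch below evaluates it with assumed representative values (baseline ~730 km CERN→LNGS, mean CNGS energy ~17 GeV, Δm² ≈ 2.5×10⁻³ eV², maximal mixing); these numbers are illustrative, not taken from the abstract:

```python
import math

def appearance_prob(L_km, E_GeV, sin2_2theta=1.0, dm2_eV2=2.5e-3):
    """Two-flavor vacuum approximation for P(nu_mu -> nu_tau).

    The factor 1.27 absorbs the unit conversion so the oscillation
    phase 1.27 * dm2[eV^2] * L[km] / E[GeV] is dimensionless.
    """
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return sin2_2theta * math.sin(phase) ** 2

# CERN-to-Gran-Sasso baseline at the assumed CNGS mean energy
p = appearance_prob(730, 17)  # ~0.018, i.e. a percent-level effect
```

The percent-level probability at the relatively high CNGS energy is why OPERA collected only a handful of ντ candidates over its full run, and why extending the selection criteria mattered for the statistical uncertainty.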


2019, Vol 207, pp. 04002
Author(s): V. Garkusha, S. Ivanov, A. Maksimov, F. Novoskoltsev, Y. Pimbursky, ...

The report gives an overview of preliminary results on the feasibility of producing a neutrino beam based on the U-70 proton synchrotron at Protvino for a very long baseline experiment with the deep-water ORCA detector in the Mediterranean Sea.


2018
Author(s): Alessandra S. Souza, Klaus Oberauer

Articulatory rehearsal is assumed to benefit verbal working memory. Yet, there is no experimental evidence supporting a causal link between rehearsal and serial-order memory, which is one of the hallmarks of working memory functioning. Across four experiments, we tested the hypothesis that rehearsal improves working memory by asking participants to rehearse overtly and by instructing different rehearsal schedules. In Experiments 1a, 1b, and 2, we compared an instructed cumulative-rehearsal condition against a free-rehearsal condition. The instruction increased the prevalence of cumulative rehearsal, but recall performance remained unchanged or decreased compared to the free-rehearsal baseline. Experiment 2 also tested the impact of a fixed rehearsal instruction; this condition yielded substantial performance costs compared to the baseline. Experiment 3 tested whether rehearsals (according to an experimenter-controlled protocol) are beneficial compared to a matched articulatory suppression condition that blocked rehearsals of the memoranda. Again, rehearsing the memoranda yielded no benefit compared to articulatory suppression. In sum, our results are incompatible with the notion that rehearsal is beneficial to working memory.


2018, Vol 174, pp. 01009
Author(s): Thorsten Lux

For the next generation of long baseline neutrino oscillation experiments, detectors with several tens of kilotons of detection medium will be needed. Liquid argon (LAr) is ideal for this application since it allows the construction of fully sensitive time projection chambers (TPCs). A mandatory step between the current setups and the final detector for a long baseline experiment is the construction of a large prototype to prove the technical feasibility of the scaling. WA105, with a size of 6×6×6 m³, is this intermediate step for the LAr double-phase TPC detector concept and is presented in these proceedings.
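One reason scaling a LAr TPC is nontrivial is the electron drift time across the full detector. The back-of-envelope calculation below assumes a 6 m drift length and a typical drift velocity of ~1.6 mm/µs at a ~500 V/cm field; both values are assumptions for illustration, not figures from the proceedings:

```python
# Back-of-envelope drift time for a large dual-phase LAr TPC.
drift_length_mm = 6000.0          # assumed full 6 m drift
drift_velocity_mm_per_us = 1.6    # typical for LAr at ~500 V/cm (assumed)

drift_time_us = drift_length_mm / drift_velocity_mm_per_us
drift_time_ms = drift_time_us / 1000.0  # 3.75 ms
```

A millisecond-scale drift time is what drives the stringent argon-purity requirements of large prototypes: ionization electrons must survive attachment to electronegative impurities for the whole drift.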


2017, Vol 42 (3), pp. 261-292
Author(s): Nestar Russell

After Stanley Milgram published his first official Obedience to Authority baseline experiment, some scholars drew parallels between his findings and the Holocaust. These comparisons are now termed the Milgram-Holocaust linkage. However, because the Obedience studies have been shown to differ in many ways from the Holocaust’s finer historical details, more recent literature has challenged the linkage. In this article I argue that the Obedience studies and the Holocaust share two commonalities that are so significant that they may negate the importance others have attributed to the differences. These commonalities are (1) an end-goal of maximising “ordinary” people’s participation in harm infliction and (2) a reliance on Weberian formal rational techniques of discovery to achieve this end-goal. Using documents obtained from Milgram’s personal archive at Yale University, this article reveals the means-to-end learning processes Milgram utilised during his pilot studies in order to maximise ordinary people’s participation in harm infliction in his official baseline experiment. This article then illustrates how certain Nazi innovators relied on the same techniques of discovery during the invention of the Holocaust, more specifically the so-called Holocaust by bullets. In effect, during both the Obedience studies and the Holocaust, processes were developed that made, in each case, the undoable doable.

