rigorous test
Recently Published Documents

TOTAL DOCUMENTS: 89 (five years: 16)
H-INDEX: 17 (five years: 2)

Author(s):  
Steven M. Swift ◽  
Karen Sauve ◽  
Cara Cassino ◽  
Raymond Schuch

Exebacase (CF-301) is a novel antistaphylococcal lysin (cell wall hydrolase) in Phase 3 of clinical development for the treatment of Staphylococcus aureus bacteremia, including right-sided endocarditis, used in addition to standard-of-care antibiotics. In the current study, the potential for exebacase to treat S. aureus pneumonia was explored in vitro using bovine pulmonary surfactant (Survanta®) and in vivo using a lethal murine pneumonia model. Exebacase was active against a set of methicillin-sensitive and methicillin-resistant S. aureus strains (MSSA and MRSA, respectively), with an MIC90 of 2 μg/mL (n = 18 strains), in the presence of a surfactant concentration (7.5%) inhibitory to the antistaphylococcal antibiotic daptomycin, which is inactive in pulmonary environments due to specific inhibition by surfactant. In a rigorous test of the ability of exebacase to synergize with antistaphylococcal antibiotics, exebacase synergized with daptomycin in the presence of surfactant in vitro, resulting in daptomycin MIC reductions of up to 64-fold against 9 MRSA and 9 MSSA strains. Exebacase was also observed to facilitate binding of daptomycin to S. aureus and elimination of biofilm-like structures formed in the presence of surfactant. Exebacase (5 mg/kg, q24d, administered intravenously for 3 days) was efficacious in a murine model of staphylococcal pneumonia, resulting in 50% survival compared to 0% survival in the vehicle control; exebacase in addition to daptomycin (50 mg/kg, q24d, 3 days) resulted in 70% survival, compared to 0% survival in the daptomycin-alone control. Overall, exebacase is active in pulmonary environments and may be appropriate for development as a treatment for staphylococcal pneumonia.
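The synergy result above (daptomycin MIC reductions of up to 64-fold in combination with exebacase) is the kind of finding typically summarized with a fractional inhibitory concentration (FIC) index from a checkerboard assay. The sketch below uses hypothetical MIC values, not the study's data, to illustrate the calculation:

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional inhibitory concentration index for a two-drug checkerboard.
    An FIC index <= 0.5 is conventionally interpreted as synergy."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical example: daptomycin MIC drops 64-fold (64 -> 1 ug/mL) when
# combined with a sub-MIC amount of exebacase (0.25 ug/mL vs. 2 ug/mL alone).
fic = fic_index(1, 64, 0.25, 2)
fold_reduction = 64 / 1
print(f"FIC index = {fic:.3f}, daptomycin fold-reduction = {fold_reduction:.0f}x")
```

Here the FIC index (about 0.14) falls well under the 0.5 synergy threshold, consistent with how a 64-fold MIC shift would be read.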


2021 ◽  
Vol 140 ◽  
pp. 106471
Author(s):  
Pauline P. Kruiver ◽  
Ger de Lange ◽  
Fred Kloosterman ◽  
Mandy Korff ◽  
Jan van Elk ◽  
...  

2020 ◽  
Author(s):  
Violet Aurora Brown ◽  
Naseem Dillman-Hasso ◽  
Zhaobin Li ◽  
Lucia Ray ◽  
Ellen Mamantov ◽  
...  

The linguistic similarity hypothesis states that it is more difficult to segregate target and masker speech when they are linguistically similar (Brouwer et al., 2012). This may be the result of energetic masking (interference at the auditory periphery) and/or informational masking (cognitive interference). To provide a rigorous test of the hypothesis and investigate how informational masking interferes with speech identification in the absence of energetic masking, we presented target speech visually and masking babble auditorily. Participants completed an English lipreading task in silence and in the presence of speech-shaped noise or of two-talker babble in semantically anomalous English, semantically meaningful English, Dutch, or Mandarin. Results showed that speech maskers interfere with lipreading more than stationary noise, and that maskers in the same language as the target speech provide more interference than different-language maskers. However, the study found no evidence that a masker similar to the English target speech (Dutch) provides more masking than one that is less similar (Mandarin). These results provide some cross-modal support for the linguistic similarity hypothesis, but suggest that the theory should be further specified to address the conditions under which languages that differ in their similarity to the target speech should provide different levels of masking.


2020 ◽  
Author(s):  
Alexander O. Crenshaw ◽  
Karena Leo ◽  
Andrew Christensen ◽  
Jasara Hogan ◽  
Katherine Baucom ◽  
...  

Researchers commonly employ observational methods, in which partners discuss topics of concern to them, to test gender differences and other within-couple differences in couple conflict behavior. We describe a previously unidentified assumption upon which statistical tests in these observational studies frequently rely: that each partner is more concerned or dissatisfied with the topic selected for them than the other partner is. We term this the relative importance assumption and show that common procedures for selecting conflict discussion topics can lead to widespread violations of the assumption in empirical studies. Study 1 conducts a systematic review of the literature and finds that few existing studies ensure that relative importance is met. Study 2 uses two empirical samples to estimate how often relative importance is violated when not ensured, finding that it is violated in one-third of interaction tasks. Study 3 examines the potential consequences of violating the relative importance assumption when testing within-couple differences in observed behavior, focusing on gender differences in the demand/withdraw pattern. Results show that these tests were profoundly impacted by violations of relative importance. In light of these violations, we conduct a more rigorous test of demand/withdraw theories and clarify previously inconsistent results in the literature. We recommend explicit consideration of relative importance for studies testing within-couple effects, provide methodological recommendations for selecting topics in future studies, and discuss implications for clinical practice.
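The relative importance assumption described above can, in principle, be checked directly from pre-discussion ratings. A minimal sketch of such a check follows; the rating scheme and data are hypothetical, not the authors' procedure:

```python
def relative_importance_holds(rating_by_selector, rating_by_partner):
    """The assumption holds for a topic when the partner who selected it
    rates it as more important/dissatisfying than the other partner does."""
    return rating_by_selector > rating_by_partner

# Hypothetical couple: each partner rated both selected topics on a 1-9 scale.
topics = [
    {"topic": "finances", "selector_rating": 8, "partner_rating": 4},  # holds
    {"topic": "chores",   "selector_rating": 5, "partner_rating": 7},  # violated
]
violations = [t["topic"] for t in topics
              if not relative_importance_holds(t["selector_rating"],
                                               t["partner_rating"])]
print(f"Topics violating relative importance: {violations}")
```

A screen like this, applied per couple before analysis, is one way a study could ensure the assumption rather than leave it implicit.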


2020 ◽  
Author(s):  
Robert Calin-Jageman ◽  
Irina Calin-Jageman ◽  
Tania Rosiles ◽  
Melissa Nguyen ◽  
Annette Garcia ◽  
...  

[[This is a Stage 2 Registered Report manuscript now accepted for publication at eNeuro. The accepted Stage 1 manuscript is posted here: https://psyarxiv.com/s7dft, and the pre-registration for the project is available here (https://osf.io/fqh8j, 9/11/2019). A link to the final Stage 2 manuscript will be posted after peer review and publication.]] There is fundamental debate about the nature of forgetting: some have argued that it represents the decay of the memory trace, others that the memory trace persists but becomes inaccessible due to retrieval failure. These different accounts of forgetting lead to different predictions about savings memory, the rapid re-learning of seemingly forgotten information. If forgetting is due to decay, then savings requires re-encoding and should thus involve the same mechanisms as initial learning. If forgetting is due to retrieval failure, then savings should be mechanistically distinct from encoding. In this registered report we conducted a pre-registered and rigorous test between these accounts of forgetting. Specifically, we used microarray to characterize the transcriptional correlates of a new memory (1 day after training), a forgotten memory (8 days after training), and a savings memory (8 days after training but with a reminder on day 7 to evoke a long-term savings memory) for sensitization in Aplysia californica (n = 8 samples/group). We found that the re-activation of sensitization during savings does not involve a substantial transcriptional response. Thus, savings is transcriptionally distinct relative to a newer (1-day-old) memory, with no co-regulated transcripts, negligible similarity in regulation-ranked ordering of transcripts, and a negligible correlation in training-induced changes in gene expression (r = .04, 95% CI [-.12, .20]). Overall, our results suggest that forgetting of sensitization memory represents retrieval failure.
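The interval reported for the negligible correlation (r = .04, 95% CI [-.12, .20]) has the form of a standard Fisher z confidence interval for a Pearson correlation. The sketch below reproduces that form; the sample size is illustrative, not taken from the study:

```python
import math

def pearson_ci(r, n, z_crit=1.96):
    """95% CI for a Pearson correlation via the Fisher z transform."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Illustrative: a near-zero correlation across ~150 paired expression changes.
lo, hi = pearson_ci(0.04, 150)
print(f"r = 0.04, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Because the interval spans zero, a correlation of this size gives no evidence of an association between the two sets of expression changes, which is the sense in which the abstract calls it negligible.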


2020 ◽  
Author(s):  
Robert Calin-Jageman ◽  
Irina Calin-Jageman ◽  
Tania Rosiles ◽  
Melissa Nguyen ◽  
Annette Garcia ◽  
...  

[[This is a Stage 1 Registered Report manuscript. The project was submitted for review to eNeuro. Upon revision and acceptance, this version of the manuscript was pre-registered on the OSF (9/11/2019, https://osf.io/fqh8j) (but due to an oversight not posted as a preprint until July 2020). A Stage 2 manuscript is now posted as a pre-print (https://psyarxiv.com/h59jv) and is under review at eNeuro. A link to the final Stage 2 manuscript will be added when available.]] There is fundamental debate about the nature of forgetting: some have argued that it represents the decay of the memory trace, others that the memory trace persists but becomes inaccessible due to retrieval failure. These different accounts of forgetting make different predictions about savings memory, the rapid re-learning of seemingly forgotten information. If forgetting is due to decay, then savings requires re-encoding and should thus involve the same mechanisms as initial learning. If forgetting is due to retrieval failure, then savings should be mechanistically distinct from encoding. In this registered report we conducted a pre-registered and rigorous test between these accounts of forgetting. Specifically, we used microarray to characterize the transcriptional correlates of a new memory (1 day from training), a forgotten memory (8 days from training), and a savings memory (8 days from training but with a reminder on day 7 to evoke a long-term savings memory) for sensitization in Aplysia californica (n = 8 samples/group). We find that the transcriptional correlates of savings are [highly similar / somewhat similar / unique] relative to new (1-day-old) memories. Specifically, savings memory and a new memory share [X] of [Y] regulated transcripts, show [strong / moderate / weak] similarity in sets of regulated transcripts, and show [r] correlation in regulated gene expression, which is [substantially / somewhat / not at all] stronger than at forgetting. Overall, our results suggest that forgetting represents [decay / retrieval-failure / mixed mechanisms].


2020 ◽  
Author(s):  
N Schmitt ◽  
Paul Nation ◽  
B Kremmel

Copyright © Cambridge University Press 2019. Recently, a large number of vocabulary tests have been made available to language teachers, testers, and researchers. Unfortunately, most of them have been launched with inadequate validation evidence. The field of language testing has become increasingly rigorous in the area of test validation, but developers of vocabulary tests have generally not given validation sufficient attention in the past. This paper argues for more rigorous and systematic procedures for test development, starting from a more precise specification of the test's purpose, intended testees and educational context, the particular aspects of vocabulary knowledge being measured, and the way in which the test scores should be interpreted. It also calls for greater assessment literacy among vocabulary test developers, and greater support for the end users of the tests, for instance with the provision of detailed users' manuals. Overall, the authors present what they feel are the minimum requirements for vocabulary test development and validation. They argue that the field should police itself more rigorously to ensure that these requirements are met or exceeded, and made explicit for those using vocabulary tests.



2020 ◽  
Vol 8 (1) ◽  
pp. 6-18 ◽  
Author(s):  
Hermann Schmitt ◽  
Alberto Sanz ◽  
Daniela Braun ◽  
Eftichia Teperoglou

The second-order election (SOE) model as originally formulated by Reif and Schmitt (1980) suggests that, relative to the preceding first-order election result, turnout is lower in SOEs, government and big parties lose, and small and ideologically extreme parties win. These regularities are not static but dynamic and related to the first-order electoral cycle. These predictions of the SOE model have often been tested using aggregate data; the fact that they are based on individual-level hypotheses has received less attention. The main aim of this article is to restate the micro-level hypotheses of the SOE model and run a rigorous test for the 2004 and 2014 European elections. Using data from the European Election Studies voter surveys, our analysis reveals signs of sincere as well as strategic abstention in European Parliament elections. Both strategic and sincere motivations also lead to defection in SOEs, and these processes operate simultaneously.

