Patient Expectations of Assigned Treatments Impact Strength of Randomised Control Trials

2021 ◽  
Vol 8 ◽  
Author(s):  
Roberto Truzoli ◽  
Phil Reed ◽  
Lisa A. Osborne

Patient engagement with treatments potentially poses problems for interpreting the results and meaning of Randomised Control Trials (RCTs). If patients are assigned to treatments that do, or do not, match their expectations, and this impacts their motivation to engage with the treatment, it will affect the distribution of outcomes. In turn, this will impact the obtained power and error rates of RCTs. Simple Monte Carlo simulations demonstrate that these patient variables affect sample variance and sample kurtosis. These effects reduce the power of RCTs and may lead to false negatives, even when the randomisation process works and distributes those with positive and negative views about a treatment equally across the trial arms.
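As an illustration of the mechanism only (not the authors' code), a minimal Monte Carlo sketch in Python: half the treated patients are assumed to hold negative views of their assigned treatment, a mismatch dampens engagement and therefore the realised effect, and the resulting mixture of responders inflates variance, shifts kurtosis, and lowers empirical power. All parameter values below are illustrative assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def simulate_trial(n_per_arm=50, effect=0.5, engagement_penalty=0.8):
        """One simulated two-arm RCT in which expectation mismatch
        dilutes the effect realised by disengaged patients."""
        control = rng.normal(0.0, 1.0, n_per_arm)
        # half the treated patients hold negative views of their assigned
        # treatment and disengage, shrinking the effect they experience
        positive = rng.random(n_per_arm) < 0.5
        realised = np.where(positive, effect, effect * (1 - engagement_penalty))
        treated = rng.normal(realised, 1.0)
        return control, treated

    rejections, n_sims = 0, 5000
    for _ in range(n_sims):
        control, treated = simulate_trial()
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            rejections += 1
    print(f"empirical power with expectation effects: {rejections / n_sims:.2f}")

    control, treated = simulate_trial(n_per_arm=100_000)
    print(f"treated-arm variance: {np.var(treated):.3f}")          # exceeds 1.0
    print(f"treated-arm excess kurtosis: {stats.kurtosis(treated):.3f}")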

Genes ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1847
Author(s):  
Jie Xia ◽  
Lequn Wang ◽  
Guijun Zhang ◽  
ZuoChun Man ◽  
Luonan Chen

Rapid advances in single-cell genomics sequencing (SCGS) have allowed researchers to characterize tumor heterogeneity with unprecedented resolution and to reveal the phylogenetic relationships between tumor cells or clones. However, the high sequencing error rates of current SCGS data, i.e., false positives, false negatives, and missing bases, severely limit its application. Here, we present a deep learning framework, RDAClone, to recover genotype matrices from noisy data with an extended robust deep autoencoder, cluster cells into subclones by the Louvain-Jaccard method, and further infer evolutionary relationships between subclones via a minimum spanning tree. Studies on both simulated and real datasets demonstrate its robustness and superiority in data denoising, cell clustering, and evolutionary tree reconstruction, particularly for large datasets.
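To make the pipeline's final step concrete, here is a hedged sketch (not RDAClone's actual code) of linking subclones by a minimum spanning tree, assuming a denoised binary genotype matrix and per-cell cluster labels are already in hand; the L1 centroid distance is an illustrative choice.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def subclone_tree(genotypes, labels):
        """Link subclones by an MST over distances between mean-genotype centroids.

        genotypes: (cells x sites) denoised 0/1 matrix
        labels:    per-cell subclone assignment, e.g. from Louvain-Jaccard clustering
        """
        clones = np.unique(labels)
        centroids = np.array([genotypes[labels == c].mean(axis=0) for c in clones])
        # pairwise L1 distance between centroid mutation profiles
        dists = np.abs(centroids[:, None, :] - centroids[None, :, :]).sum(axis=2)
        mst = minimum_spanning_tree(dists).toarray()
        return [(int(clones[i]), int(clones[j])) for i, j in zip(*np.nonzero(mst))]

    # toy data: two cells per outer subclone, one intermediate cell
    g = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [1, 1, 1, 1],
                  [1, 1, 1, 1]])
    print(subclone_tree(g, np.array([0, 0, 1, 2, 2])))   # [(0, 1), (1, 2)]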


2005 ◽  
Vol 03 (01) ◽  
pp. 79-98
Author(s):  
HON-WAI LEONG ◽  
FRANCO P. PREPARATA ◽  
WING-KIN SUNG ◽  
HUGO WILLY

We consider the problem of sequence reconstruction in sequencing-by-hybridization in the presence of spectrum errors. As intuition suggests, and the literature reports, false negatives (i.e., missing spectrum probes) are by far the leading cause of reconstruction failures. In a recent paper we described an algorithm, called "threshold-θ", designed to recover from false negatives by overcompensating for missing extensions with larger reconstruction subtrees. We demonstrated, both analytically and with simulations, the increasing effectiveness of the approach as the parameter θ grows, but also pointed out that for larger error rates the size of the extension trees translates into an unacceptable computational burden. To obviate this shortcoming, in this paper we propose an adaptive approach that is both effective and efficient: effective because, for a fixed value of θ, it performs as well as its single-threshold counterpart, and efficient because it exhibits substantial speed-ups over it. The idea is that, for moderate error rates, only a small fraction of the target sequence is involved in error recovery; the remainder is therefore expected to be reconstructible by the standard noiseless algorithm, with the provision to switch to increasingly higher thresholds once a failure is detected. This policy generates interesting and complex interplays between fooling probes and false negatives. These phenomena are carefully analyzed for random sequences, and the results are in excellent agreement with the simulations. In addition, the experimental speed-ups of the multithreshold approach are explained in terms of the interaction among the different threshold regimes.
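A toy Python sketch of the multithreshold policy may help fix ideas: it reconstructs a sequence from its k-mer spectrum by depth-first extension, tolerating up to θ missing probes, and escalates θ only after the cheaper regime fails. This simplification counts misses along a single path rather than bounding extension subtrees or modelling fooling probes, so it is schematic of the actual algorithm, not a reimplementation of it.

    ALPHABET = "ACGT"

    def kmers(seq, k):
        """The spectrum: the set of all k-length probes occurring in seq."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def reconstruct(prefix, spectrum, k, target_len, theta, misses=0):
        """Depth-first extension tolerating up to theta missing probes in total."""
        if len(prefix) == target_len:
            return prefix
        for base in ALPHABET:
            nxt = prefix + base
            supported = nxt[-k:] in spectrum
            if supported or misses < theta:
                out = reconstruct(nxt, spectrum, k, target_len, theta,
                                  misses if supported else misses + 1)
                if out is not None:
                    return out
        return None

    def adaptive_reconstruct(seed, spectrum, k, target_len, max_theta=3):
        """Run the noiseless algorithm first; escalate theta only on failure."""
        for theta in range(max_theta + 1):
            out = reconstruct(seed, spectrum, k, target_len, theta)
            if out is not None:
                return out, theta
        return None, max_theta

    target = "ACGTACGGATCA"
    spectrum = kmers(target, 4)
    spectrum.discard("GTAC")            # inject one false negative
    print(adaptive_reconstruct(target[:4], spectrum, 4, len(target)))
    # -> ('ACGTACGGATCA', 1): recovered at the first threshold that succeeds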


2019 ◽  
Vol 28 (4) ◽  
pp. 1411-1431 ◽  
Author(s):  
Lauren Bislick ◽  
William D. Hula

Purpose: This retrospective analysis examined group differences in error rate across four contextual variables (clusters vs. singletons, syllable position, number of syllables, and articulatory-phonetic features) in adults with apraxia of speech (AOS) and adults with aphasia only. Group differences in the distribution of error types across these variables were also examined.
Method: Ten individuals with acquired AOS and aphasia and 11 individuals with aphasia only participated. In a two-group design, the influence of the four contextual variables on error rate and error-type distribution was examined via repetition of 29 multisyllabic words. Error rates were analyzed using Bayesian methods, whereas the distribution of error types was examined via descriptive statistics.
Results: Robust group differences emerged for four variables: syllable position, number of syllables, manner of articulation, and voicing. Group differences were less robust for clusters versus singletons and for place of articulation. The error-type distributions showed a high proportion of distortion and substitution errors in speakers with AOS and a high proportion of substitution and omission errors in speakers with aphasia.
Conclusion: The findings add to the continued effort to improve the understanding and assessment of AOS and aphasia. Several contextual variables influenced breakdown more consistently in participants with AOS than in participants with aphasia and should be considered during the diagnostic process. Supplemental Material: https://doi.org/10.23641/asha.9701690
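The abstract does not spell out the Bayesian model used; one common approach to comparing two groups' error rates is a beta-binomial posterior simulation like the sketch below. The error counts are invented purely for illustration; only the trial totals (10 and 11 participants times 29 words) follow from the abstract.

    import numpy as np

    rng = np.random.default_rng(0)

    # invented counts, not the study's data: (errors, trials) per group,
    # where trials = participants x 29 repeated words
    aos_errors, aos_trials = 120, 290      # 10 participants x 29 words
    aph_errors, aph_trials = 60, 319       # 11 participants x 29 words

    # with a Beta(1, 1) prior, each group's posterior error rate is
    # Beta(errors + 1, correct responses + 1)
    aos_post = rng.beta(aos_errors + 1, aos_trials - aos_errors + 1, 100_000)
    aph_post = rng.beta(aph_errors + 1, aph_trials - aph_errors + 1, 100_000)

    diff = aos_post - aph_post
    print(f"P(AOS error rate > aphasia error rate) = {(diff > 0).mean():.3f}")
    print(f"95% credible interval for the difference: "
          f"[{np.quantile(diff, 0.025):.3f}, {np.quantile(diff, 0.975):.3f}]")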


2020 ◽  
Vol 29 (4) ◽  
pp. 1944-1955 ◽  
Author(s):  
Maria Schwarz ◽  
Elizabeth C. Ward ◽  
Petrea Cornwell ◽  
Anne Coccetti ◽  
Pamela D'Netto ◽  
...  

Purpose: The purpose of this study was to examine (a) the agreement between allied health assistants (AHAs) and speech-language pathologists (SLPs) when completing dysphagia screening for low-risk referrals and at-risk patients under a delegation model and (b) the operational impact of this delegation model.
Method: All AHAs worked in adult acute inpatient settings across three hospitals and completed training and competency evaluation before conducting independent screening. Screening (pass/fail) was based on pre-screening exclusionary questions combined with a water swallow test and the Eating Assessment Tool. To examine agreement with SLPs, AHAs (n = 7) and SLPs (n = 8) conducted independent, simultaneous dysphagia screenings of 51 adult inpatients classified as low-risk/at-risk referrals. To examine operational impact, AHAs independently screened 48 low-risk/at-risk patients, with a subsequent clinical swallow evaluation conducted by an SLP for patients who failed screening.
Results: Exact agreement between AHAs and SLPs on the overall pass/fail screening criteria for the first 51 patients was 100%. Exact agreement for the two component tools was 100% for the Eating Assessment Tool and 96% for the water swallow test. In the operational impact phase (n = 48), 58% of patients failed AHA screening, with only 10% false positives on subjective SLP assessment and no identified false negatives.
Conclusion: AHAs demonstrated the ability to reliably conduct dysphagia screening in a cohort of low-risk patients, with a low rate of false negatives. The data support a high level of agreement and a positive operational impact of using trained AHAs to perform dysphagia screening in low-risk patients.
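Exact agreement here is simply the share of patients on whom the two raters reach the same pass/fail decision; a minimal sketch with invented outcomes:

    def exact_agreement(rater_a, rater_b):
        """Percentage of patients on whom two raters give the same
        pass/fail screening outcome."""
        matches = sum(a == b for a, b in zip(rater_a, rater_b))
        return 100 * matches / len(rater_a)

    # invented outcomes, not the study's data: 1 = pass, 0 = fail
    aha_results = [1, 0, 1, 1, 0, 0, 1, 1]
    slp_results = [1, 0, 1, 1, 0, 0, 1, 1]
    print(f"exact agreement: {exact_agreement(aha_results, slp_results):.0f}%")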


2020 ◽  
Vol 36 (2) ◽  
pp. 296-302 ◽  
Author(s):  
Luke J. Hearne ◽  
Damian P. Birney ◽  
Luca Cocchi ◽  
Jason B. Mattingley

Abstract. The Latin Square Task (LST) is a relational reasoning paradigm developed by Birney, Halford, and Andrews (2006). Previous work has shown that the LST elicits typical reasoning complexity effects, such that increases in complexity are associated with decrements in task accuracy and increases in response times. Here we modified the LST for use in functional brain imaging experiments, in which presentation durations must be strictly controlled, and assessed its validity and reliability. Modifications included presenting the components within each trial serially, such that the reasoning and response periods were separated. In addition, the inspection time for each LST problem was constrained to five seconds. We replicated previous findings of higher error rates and slower response times with increasing relational complexity and observed relatively large effect sizes (ηp² > 0.70, r > 0.50). Moreover, measures of internal consistency and test-retest reliability confirmed the stability of the LST within and across separate testing sessions. Interestingly, we found that limiting the inspection time for individual problems in the LST had little effect on accuracy relative to the unconstrained times used in previous work, a finding that is important for future brain imaging experiments aimed at investigating the neural correlates of relational reasoning.


Methodology ◽  
2019 ◽  
Vol 15 (3) ◽  
pp. 97-105
Author(s):  
Rodrigo Ferrer ◽  
Antonio Pardo

Abstract. In a recent paper, Ferrer and Pardo (2014) tested several distribution-based methods designed to assess when test scores obtained before and after an intervention reflect a statistically reliable change. However, we still do not know how these methods perform with respect to false negatives. For this purpose, we simulated change scenarios (different effect sizes in a pre-post-test design) with distributions of different shapes and with different sample sizes. For each simulated scenario, we generated 1,000 samples, and in each sample we recorded the false-negative rate of the five distribution-based methods that had performed best with respect to false positives. Our results reveal unacceptable false-negative rates even for very large effects, ranging from 31.8% in an optimistic scenario (effect size of 2.0 and a normal distribution) to 99.9% in the worst scenario (effect size of 0.2 and a highly skewed distribution). Our results therefore suggest that these widely used distribution-based methods must be applied with caution in clinical contexts, because they require huge effect sizes to detect a true change. However, we offer some considerations regarding effect sizes and the commonly used cut-off points that allow these estimates to be made more precise.
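A simplified sketch of this kind of simulation, using the Jacobson-Truax reliable change index as the distribution-based method (the paper tested several; the parameters and the measurement model here are illustrative, so the numbers will not match its scenarios):

    import numpy as np

    rng = np.random.default_rng(7)

    def rci_false_negative_rate(effect_size, n_samples=1000, n=50,
                                reliability=0.8, sd=1.0):
        """Share of truly changed cases that the Jacobson-Truax RCI
        ((post - pre) / SE_diff > 1.96) fails to flag as changed."""
        se_diff = sd * np.sqrt(2 * (1 - reliability))   # SE of a difference score
        misses, total = 0, 0
        for _ in range(n_samples):
            pre = rng.normal(0.0, sd, n)
            # every case truly improves by effect_size * sd, plus measurement noise
            post = pre + effect_size * sd + rng.normal(0.0, se_diff, n)
            rci = (post - pre) / se_diff
            misses += int((rci <= 1.96).sum())          # true change not detected
            total += n
        return misses / total

    for d in (0.2, 0.5, 0.8, 2.0):
        print(f"effect size {d}: false-negative rate = "
              f"{rci_false_negative_rate(d):.1%}")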


Author(s):  
Manuel Perea ◽  
Victoria Panadero

The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities, so a word’s overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords such as viotín (same shape as its base word, violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) for consistent-shape and inconsistent-shape pseudowords in both adult skilled readers and normally reading children, a pattern consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors on viotín-like pseudowords than on viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word’s visual cues, presumably because of poor letter representations.

