test order
Recently Published Documents


TOTAL DOCUMENTS: 132 (five years: 38)

H-INDEX: 13 (five years: 3)

2021 · Vol 8 (Supplement_1) · pp. S440-S440 · Author(s): Akshay M Khatri, Rehana Rasul, Molly McCann-Pineo, Rebecca Schwartz, Aradhana Khameraj, ...

Abstract Background In 2017, the multiplex respiratory viral panel (RVP) test was the only test available for patients (pts) with respiratory symptoms in our emergency department (ED). In 2018, the more rapid influenza/respiratory syncytial virus (Flu/RSV) test was incorporated into a stratified testing algorithm (STA): depending on clinical features and physician discretion, pts underwent either the Flu/RSV test or the RVP. We analyzed the impact of the STA by comparing data between the winters of 2017 and 2018. Methods In a retrospective, single-center cohort study in suburban NY, admitted pts ≥18 years diagnosed with viral infections (by either test) were included. We excluded pts diagnosed at another hospital, pts admitted to intensive care or observation (< 24 hours) units, and pts with missing data. Data were collected through electronic medical chart review. Primary outcomes were clinical evaluation time (between triage and test order), laboratory turnaround (LTA) time (between order and result), and ED length of stay (EDLOS; between admit order and bed assignment). Secondary outcomes included isolation time (between result and start of isolation precautions) and treatment time (between result and influenza treatment). Outcome differences were assessed using chi-square and Mann-Whitney rank sum tests for categorical and continuous variables, respectively. Results 734 pts were included in the study (368 in 2017; 366 in 2018). Median age was 75 years and 55.9% were female. After implementing the STA, EDLOS was significantly lower (Table 1), with no significant differences in the other parameters. LTA times were slightly higher after implementation (25 minutes in 2017 vs. 29 minutes in 2018). Table 1. Differences in clinical and laboratory turnaround times among patients admitted with viral infections in the winters of 2017 and 2018. Conclusion A stratified diagnostic algorithm may have reduced EDLOS, but without significant differences in other outcomes. The higher LTA time might have been due to testing constraints, heterogeneous pt populations, or other confounders. Prospective studies will help assess the real-world impact of such algorithms. Disclosures Prashant Malhotra, MBBS, MD, FACP, FIDSA, Gilead Sciences (Scientific Research Study Investigator; Other Financial or Material Support; Site PI for an industry-funded, multicenter research study)
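The Methods above name chi-square and Mann-Whitney rank sum tests for the between-winter comparisons. As a point of reference only, the sketch below shows how such a comparison might be run in Python with SciPy; the variable names and values are hypothetical illustrations, not the study's data.

```python
# Hypothetical illustration (not the study's data): comparing a continuous
# and a categorical outcome between the 2017 and 2018 winters, using the
# tests named in the Methods (Mann-Whitney for continuous, chi-square for categorical).
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

rng = np.random.default_rng(0)

# Continuous outcome, e.g. ED length of stay in hours (simulated values)
edlos_2017 = rng.gamma(shape=4.0, scale=3.0, size=368)
edlos_2018 = rng.gamma(shape=3.5, scale=3.0, size=366)
u_stat, p_cont = mannwhitneyu(edlos_2017, edlos_2018, alternative="two-sided")

# Categorical outcome, e.g. counts of treated vs. untreated patients per year
contingency = np.array([[120, 248],   # 2017: treated, not treated
                        [140, 226]])  # 2018: treated, not treated
chi2, p_cat, dof, _ = chi2_contingency(contingency)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_cont:.3f}")
print(f"Chi-square = {chi2:.2f} (df = {dof}), p = {p_cat:.3f}")
```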


2021 · Vol 39 (28_suppl) · pp. 16-16 · Author(s): Scott D. Goldfarb, Kimmie K. McLaurin, Barbara L. McAneny, Veena Shetty, Julia Engstrom-Melnyk, ...

Background: Opportunities have increased for diagnostic test results to affect treatment choice in tumor types with homologous recombination deficiencies. According to NCCN Guidelines, biomarker testing has the potential to identify patients eligible for targeted treatment such as PARP inhibitors. Methods: We conducted a noninterventional, mixed-methods cohort study to evaluate biomarker testing concordant with NCCN guidelines in 2018-19. Starting with an abstraction of structured and unstructured data from electronic health records, a cohort of 300 patients newly diagnosed with advanced ovarian cancer (aOC), HER2-negative metastatic breast cancer (MBC), metastatic pancreatic cancer (mPaC), or metastatic prostate cancer (mPC) was selected in reverse chronological order of diagnosis date, proportionately distributed across the NCCA (National Cancer Care Alliance, LLC) network. Outcomes included: the proportion of patients who completed biomarker testing (defined as at least BRCA1/2 testing), time from diagnosis to test order, and time from test order to results. For patients who did not receive a biomarker test, the treating physicians were sent questionnaires to capture reasons for not ordering or for non-completion of biomarker testing. Results: Patients were identified at 10 practices from 8 states (CA, ME, NM, NY, OH, TX, UT, VT) in 2018 (N=86) and 2019 (N=214). The most commonly used tests were germline only (47%-66%), followed by tissue tests covering multiple genes (18%-40%). Ovarian cancer had the highest completion of biomarker testing (Table). For HER2-negative MBC, the completion rate of BRCA testing was 85% for triple-negative disease and 55% for hormone receptor-positive disease. All questionnaires were completed (N=85); the most common barriers to testing were no perceived need or clinical benefit (42%), biomarker testing deferred to a later date depending on the patient's response to treatment (14%), lack of a standard practice or guidance for biomarker testing at the practice (14%), and reimbursement for genetic counseling (12%). Conclusions: Biomarker workup was completed for the majority of patients. Given that the first FDA approval of a PARP inhibitor was for aOC in 2014, biomarker testing rates and timing may naturally improve for more recent approvals such as mPC (2020). Evaluation of long-term trends in adherence to NCCN biomarker testing recommendations, and of their impact on patient outcomes, is warranted. [Table: see text]


Author(s): Christy Wynn Moland, Janna B. Oetting

Purpose We compared the Risk subtest of the Diagnostic Evaluation of Language Variation–Screening Test (DELV–Screening Test Risk) with two other screeners when administered to low-income prekindergartners (pre-K) who spoke African American English (AAE) in the urban South. Method Participants were 73 children (six with a communication disorder and 67 without) enrolled in Head Start or a publicly funded pre-K in an urban Southern city. All children completed the DELV–Screening Test Risk, the Fluharty Preschool Speech and Language Screening Test–Second Edition (FLUHARTY-2), and the Washington and Craig Language Screener (WCLS). Test order was counterbalanced across participants. Results DELV–Screening Test Risk error scores were higher than those reported for its standardization sample, and scores on the other screeners were lower than those of their respective standardization/testing samples. The 52% fail rate of the DELV–Screening Test Risk did not differ significantly from the 48% rate of the WCLS. Fail rates of the FLUHARTY-2 ranged from 34% to 75%, depending on the quotient considered and whether scoring was modified for dialect. Although items and subtests assumed to measure similar constructs were correlated with each other, the three screeners led to inconsistent pass/fail outcomes for 44% of the children. Conclusions Like other screeners, the DELV–Screening Test Risk subtest may lead to high fail rates for low-income pre-K children who speak AAE in the urban South. Inconsistent outcomes across screeners underscore the critical need for further study and development of screeners within the field.


2021 · Author(s): Peter Lush

Seeing a fake hand brushed in synchrony with brushstrokes to a participant's hand (the rubber hand illusion; RHI) prompts reports of referred touch, illusory ownership, and of the real hand having drifted toward the fake hand (proprioceptive drift). According to one theory, RHI effects are attributable to multisensory integration mechanisms, but they may alternatively (or additionally) reflect the generation of experience to meet expectancies arising from demand characteristics (phenomenological control). Multisensory integration accounts are supported by contrasting synchronous and asynchronous brush-stroking conditions, typically presented in counterbalanced order. This contrast is known to be confounded by demand characteristics, but to date there has been no exploration of the role of demand characteristics relating to condition order. In an exploratory study, existing data from a rubber hand study (n = 124) were analysed to test order effects. Illusion report in the synchronous condition, and the difference between synchronous and asynchronous conditions in both report and proprioceptive drift, were greater when the asynchronous condition was performed first (and participants had therefore already been exposed to the questionnaire materials). These order effects have implications for the interpretation of reports of ownership experience: in particular, there was no mean ownership agreement in the synchronous-first group. These data support the theory that reports of ownership of a rubber hand are at least partially attributable to phenomenological control in response to demand characteristics.


2021 · Vol 132 · pp. 106507 · Author(s): Miao Zhang, Jacky Wai Keung, Tsong Yueh Chen, Yan Xiao

2021 · Vol 30 (1) · pp. 160-169 · Author(s): Yang-Soo Yoon, Callie Michelle Boren, Brianna Diaz

Purpose To measure the effect of testing condition (soundproof booth vs. quiet room), test order, and number of test sessions on spectral and temporal processing in normal-hearing (NH) listeners. Method Thirty-two adult NH listeners participated in the three experiments. For all three experiments, the stimuli were presented to the left ear at the subjects' most comfortable level through headphones. All tests were administered in an adaptive three-alternative forced-choice paradigm. Experiment 1 compared the effect of the soundproof booth and quiet room test conditions on amplitude modulation detection threshold and modulation frequency discrimination threshold at each of the five modulation frequencies. Experiment 2 compared the effect of two test orders on frequency discrimination thresholds under the quiet room condition: thresholds were first measured with the four pure tones in ascending and descending order, and then with the order counterbalanced. In Experiment 3, the amplitude discrimination threshold under the quiet room condition was assessed three times to determine the effect of the number of test sessions, and thresholds were compared across sessions. Results Results showed no significant effect of test environment. Test order was an important variable for frequency discrimination, particularly between piano tones and pure tones. Results also showed no significant difference across test sessions. Conclusions These results suggest that a controlled test environment may not be required for spectral and temporal assessment of NH listeners. Under the quiet test environment, a single outcome measure is sufficient, but test orders should be counterbalanced.
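The abstract describes an adaptive three-alternative forced-choice (3AFC) paradigm without specifying the adaptation rule. The sketch below is a generic illustration of such a staircase, assuming a 2-down/1-up rule, a fixed step size, and a crudely simulated listener; none of these details are taken from the study.

```python
# Generic adaptive 3AFC staircase (illustrative only).
# Assumptions not taken from the study: 2-down/1-up rule, fixed step size,
# and a crude simulated listener with a 1/3 guessing floor.
import random

def run_3afc_staircase(true_threshold, start_level=20.0, step=2.0, n_trials=60, seed=1):
    random.seed(seed)
    level = start_level
    n_correct_in_a_row = 0
    last_direction = 0          # -1 = task just got harder, +1 = just got easier
    reversal_levels = []

    for _ in range(n_trials):
        # Simulated listener: near-certain correct above threshold,
        # close to the 1/3 chance level of a 3AFC task below it.
        p_correct = 0.95 if level >= true_threshold else 0.40
        correct = random.random() < p_correct

        if correct:
            n_correct_in_a_row += 1
            if n_correct_in_a_row == 2:          # 2-down: make the task harder
                n_correct_in_a_row = 0
                if last_direction == +1:
                    reversal_levels.append(level)
                last_direction = -1
                level -= step
        else:                                     # 1-up: make the task easier
            n_correct_in_a_row = 0
            if last_direction == -1:
                reversal_levels.append(level)
            last_direction = +1
            level += step

    # Threshold estimate: mean of the last few reversal points (if any)
    tail = reversal_levels[-6:]
    return sum(tail) / len(tail) if tail else level

print(run_3afc_staircase(true_threshold=10.0))
```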


2021 · Vol 27 (Supplement_1) · pp. S50-S50 · Author(s): Abhishek Verma, Sanskriti Varma, Daniel Freedberg, David Hudesman, Shannon Chang, ...

Abstract Background Guidelines recommend testing inflammatory bowel disease (IBD) patients hospitalized with flare for Clostridioides difficile infection (CDI), though little is known about whether a delay in testing for CDI is related to adverse outcomes. We examined the relationship between time to C. difficile PCR test order, collection, and result and adverse IBD outcomes. Methods We performed a retrospective cohort study of IBD patients hospitalized with flare through the emergency department (ED) between 2013 and 2020 at an urban academic medical center. The times from ED presentation to C. difficile test order (time-to-order), sample collection (time-to-collection), and test result (time-to-result) were collected. Time-to-result was stratified as within 6 hours, 6–24 hours, and 24 hours or longer. The primary outcome was length of stay (LOS). Secondary outcomes were inpatient anti-TNF administration and surgery. We used hemodynamic and laboratory values at presentation to evaluate disease severity as a confounding variable between length of stay and the time-dependent variables. Results We identified 122 IBD patients hospitalized with flare. There were no significant differences in baseline characteristics among the time-to-result groups. Despite a shorter time-to-result, the average LOS in the within-6-hours group was 7.3 days, longer than in the 6–24 hours group (4.3 days, p=0.018) and the ≥24 hours group (4.2 days, p=0.035; Table 1). There were no differences in inpatient anti-TNF administration (p=0.10) or surgery (p=0.08) among the time-to-result groups. The markers of disease severity that correlated with longer LOS were C-reactive protein (CRP) (0.28 days, p=0.003), heart rate (0.478 days, p<0.001), diastolic hypotension (0.228 days, p=0.01), and hypoalbuminemia (0.215 days, p=0.02). Higher CRP correlated with earlier time-to-result (-0.218 hours, p=0.02). Patients with more markers of disease severity had earlier times-to-result (12.8 hours vs. 32.2 hours, p=0.014) and a longer LOS (7.9 vs. 3.4 days, p=0.007) (Table 2). Patients with more severe disease had an earlier time-to-order (4.48 hours) compared with those with less severe disease (17.4 hours), though this difference did not reach statistical significance (p=0.09; Table 2). Conclusion Earlier time-to-result for CDI is associated with longer LOS in IBD patients hospitalized with flare. This inverse relationship is confounded by disease severity at presentation: patients with more severe disease have a shorter time-to-result and a longer LOS. It may be that these patients produce a stool sample more readily; however, the near-significance of the difference in time-to-order among severity groups suggests a role for provider bias, which must be studied further. Delay in testing was not associated with higher rates of inpatient anti-TNF administration or surgery.


2021 · Vol 129 · pp. 106438 · Author(s): Miao Zhang, Jacky Wai Keung, Yan Xiao, Md Alamgir Kabir

2020 · Vol 11 · Author(s): Anett Wolgast, Nico Schmidt, Jochen Ranger

Different types of tasks exist, including tasks administered for research purposes and exams assessing knowledge. According to expectancy-value theory, different tests are associated with different levels of effort and importance within a test taker. In research on test-taking motivation, students' test-taking effort and importance have been found to decrease over the course of both high-stakes and low-stakes tests. However, whether changes in test order affect effort, importance, and response processes in education students has seldom been examined experimentally. We aimed to examine changes in effort and importance resulting from variations in test battery order and their relations to response processes. We employed an experimental design assessing N = 320 education students' test-taking effort and importance three times, as well as their performance on cognitive ability tasks and a mock exam. Further relevant covariates, such as expectancies, test anxiety, and concentration, were assessed once. We randomly varied the order of the cognitive ability test and the mock exam. The assumption of intraindividual changes in education students' effort and importance over the course of test taking was tested with a latent growth curve model with the data separated by condition. In contrast to previous studies, responses and response times were included in diffusion models to examine education students' response processes within the test-taking context. The results indicated intraindividual changes in education students' effort or importance depending on test order, but similar mock-exam response processes. In particular, effort did not decrease when the cognitive ability test came first and the mock exam second, but decreased significantly when the mock exam came first and the cognitive ability test second. Diffusion modeling indicated differences in response processes (boundary separation and estimated latent trait) on the cognitive ability tasks, suggesting higher motivation when the cognitive ability test came first than vice versa. The response processes on the mock exam tasks were not related to condition.
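For readers unfamiliar with the diffusion-model parameters mentioned above, the following is a minimal simulation of a drift-diffusion process showing how drift rate and boundary separation jointly shape accuracy and decision time. The parameter values are arbitrary illustrations, not the authors' fitted estimates.

```python
# Minimal drift-diffusion simulation (illustrative; not the authors' fitted model).
import numpy as np

def simulate_ddm(drift, boundary, n_trials=2000, dt=0.001, noise_sd=1.0, seed=0):
    """Return accuracy and mean decision time for an unbiased diffusion process."""
    rng = np.random.default_rng(seed)
    correct = np.empty(n_trials, dtype=bool)
    decision_time = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        # Evidence starts midway between the boundaries and accumulates
        # toward +boundary/2 (correct response) or -boundary/2 (error).
        while abs(x) < boundary / 2:
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        correct[i] = x > 0
        decision_time[i] = t
    return correct.mean(), decision_time.mean()

# Higher drift (more efficient processing) -> faster and more accurate;
# wider boundaries (more caution) -> slower but more accurate.
for drift, boundary in [(1.0, 1.5), (2.0, 1.5), (1.0, 2.5)]:
    acc, rt = simulate_ddm(drift, boundary)
    print(f"drift={drift}, boundary={boundary}: accuracy={acc:.2f}, mean decision time={rt:.2f}s")
```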

