Half of purposeful sandbaggers undetected by ImPACT's embedded invalidity indicators

Neurology ◽  
2018 ◽  
Vol 91 (23 Supplement 1) ◽  
pp. S4.3-S5
Author(s):  
Courtney Raab ◽  
Amy Peak

Objective: The primary objective of this study is to determine the ability of embedded invalidity indicators (EIIs) within the Immediate Post-concussion Assessment and Cognitive Test (ImPACT) to accurately identify individuals purposefully underperforming (sandbagging) on the baseline assessment. The secondary objective is to determine whether any of the 5 specific EIIs are more or less likely to identify purposeful sandbaggers. Background: Sandbagging baseline neuropsychological tests is a growing problem with significant potential consequences, including premature post-concussion clearance. Design/Methods: Volunteers were recruited to complete a baseline ImPACT assessment. Participants were randomized to either a control group or a coached sandbagging group. Primary outcome measures were the number of participants identified as invalid via any EII, as well as mean raw composite scores and percentiles for each sub-section within the ImPACT assessment. Results: Seventy-seven participants (37 control and 40 sandbaggers) completed the study. Only half (50%, n = 20) of the purposeful sandbaggers were identified via any EII. Appropriately, no participants in the control group were identified as invalid. The Word Memory EII correctly identified 40% of the purposeful sandbaggers, and the Three Letters EII identified 35%. All other EIIs identified 15% of purposeful sandbaggers. Twenty-six purposeful sandbaggers achieved at least 1 composite sub-score ≤ 1st percentile; 27% of those were not identified via any EII. One participant scored ≤ 1st percentile in every composite category and was not identified via any EII. Conclusion: Sandbagging baseline ImPACT assessments without detection likely occurs more often than previous literature suggests. Half of purposeful sandbaggers were not identified via current EIIs, and 3 of 5 EIIs identified only 15% of purposeful sandbaggers. Re-evaluation or recalibration of ImPACT's current EIIs may be appropriate. Disclosures: Dr. Raab has nothing to disclose. Dr. Peak has nothing to disclose.

2019 ◽  
Vol 35 (3) ◽  
pp. 283-290 ◽  
Author(s):  
Courtney A Raab ◽  
Amy Sutton Peak ◽  
Chad Knoderer

Abstract Objective The main objectives of this study were to determine how accurately the embedded invalidity indicators (EIIs) identify purposeful underperformers on the baseline Immediate Post-concussion Assessment and Cognitive Test (ImPACT), and to assess the effectiveness of each individual EII. Methods A randomized controlled trial was conducted in which all participants completed a baseline ImPACT assessment. Participants were randomized into a control or purposeful underperformance (sandbagging) group. The primary outcomes measured were the number of participants identified as invalid (via any EII), as well as the ability of each individual EII to detect purposeful sandbagging. Additionally, participants' mean raw composite scores and percentiles were evaluated. Results Seventy-seven participants completed the study (control n = 37, sandbag n = 40). None of the participants in the control group, and 50% of the purposeful sandbaggers, were identified as invalid via the current EIIs. Of the five EIIs, three were unable to identify more than 15% of purposeful sandbaggers. The best performing EIIs were Word Memory and Three Letters, identifying 40% and 35% of purposeful sandbaggers, respectively. Sixty-five percent of the purposeful sandbaggers had at least one composite score ≤1st percentile. Using a composite score ≤1st percentile as a potential marker of invalidity would have accurately identified more purposeful sandbaggers than all existing EIIs combined. Conclusion Half of purposeful sandbaggers were not identified by ImPACT's current EIIs. Multiple EIIs were only able to identify <15% of purposeful underperformers, suggesting that reevaluation and/or recalibration of EII cutoffs may be appropriate.
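
The combined screen suggested by these results — flag a baseline as potentially invalid if any existing EII fires or if any composite score falls at or below the 1st percentile — amounts to a simple decision rule. The sketch below is purely illustrative; the field names, EII labels, and data layout are assumptions, not ImPACT's actual scoring export.

```python
# Minimal sketch of the invalidity screen discussed above: a baseline attempt is
# flagged if any embedded invalidity indicator (EII) fires OR any composite
# score falls at or below the 1st percentile. Field names are hypothetical;
# real ImPACT exports use their own column labels and cutoffs.

def flag_baseline(record):
    """Return True if the baseline attempt looks invalid under the combined rule."""
    # Hypothetical booleans for the five EIIs (e.g. Word Memory, Design Memory,
    # X's and O's, Three Letters, Impulse Control).
    any_eii = any(record.get(k, False) for k in (
        "eii_word_memory", "eii_design_memory", "eii_xs_and_os",
        "eii_three_letters", "eii_impulse_control"))
    # Hypothetical composite percentiles (e.g. verbal memory, visual memory,
    # visual motor speed, reaction time).
    low_composite = any(p <= 1 for p in record.get("composite_percentiles", []))
    return any_eii or low_composite

# Example: a sandbagger who evades every EII but bottoms out one composite score.
print(flag_baseline({"composite_percentiles": [1, 34, 50, 45]}))  # True
```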


2021 ◽  
pp. 000486742110257
Author(s):  
Steve Kisely ◽  
Dante Dangelo-Kemp ◽  
Mark Taylor ◽  
Dennis Liu ◽  
Simon Graham ◽  
...  

Objective: To assess the impact, in the Australian setting, of the COVID-19 lockdown on antipsychotic supplies for patients with schizophrenia following a prescription from a new medical consultation when compared to the same periods in the previous 4 years. A secondary objective was to assess the volume of all antipsychotic supplies, from new and repeat prescriptions, over these same periods. Methods: A retrospective pharmaceutical claims database study was undertaken, using the Department of Human Services Pharmaceutical Benefits Scheme 10% sample. The study population included all adult patients with three or more supplies of oral or long-acting injectable antipsychotics for the treatment of schizophrenia at any time between 1 June 2015 and 31 May 2020. The primary outcome compared volumes of dispensed antipsychotics from new prescriptions (which require a medical consultation) between 1 April and 31 May each year from 2016 to 2020. This allowed the period during which the Australian Government imposed a lockdown due to COVID-19 (April to May 2020) to be compared to the same periods in previous years. Results: There was a small (5.7%) reduction in the number of antipsychotics dispensed from new prescriptions requiring a consultation, from 15,244 in April-May 2019 to 14,372 in the same period in 2020. However, this reduction was not statistically significant (p = 0.75) after adjusting for treatment class, age, gender, location and provider type. Conclusion: The COVID-19 restrictions during April and May 2020 had no significant impact on the volume of antipsychotics dispensed from new prescriptions for patients with schizophrenia when compared to the volume dispensed from new prescriptions during the same period in previous years.


Author(s):  
Sandhya Saisubramanian ◽  
Ece Kamar ◽  
Shlomo Zilberstein

Agents operating in unstructured environments often create negative side effects (NSE) that may not be easy to identify at design time. We examine how various forms of human feedback or autonomous exploration can be used to learn a penalty function associated with NSE during system deployment. We formulate the problem of mitigating the impact of NSE as a multi-objective Markov decision process with lexicographic reward preferences and slack. The slack denotes the maximum deviation from an optimal policy with respect to the agent's primary objective allowed in order to mitigate NSE as a secondary objective. Empirical evaluation of our approach shows that the proposed framework can successfully mitigate NSE and that different feedback mechanisms introduce different biases, which influence the identification of NSE.
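
The lexicographic preference with slack described above can be sketched as a two-stage planning procedure: optimize the primary objective, restrict the agent to actions whose primary value stays within the slack, then optimize the secondary (NSE-mitigation) objective within that restriction. The following is a minimal illustration under simplifying assumptions (tabular MDP, dict-of-dicts transitions, slack applied per state), not the authors' implementation; all names are hypothetical.

```python
# Illustrative lexicographic value iteration with slack, in the spirit of the
# formulation above. R1 is the agent's primary reward; R2 is the negated NSE
# penalty (so maximizing R2 mitigates side effects).
import numpy as np

def lexicographic_vi(S, A, P, R1, R2, gamma=0.95, slack=0.1, iters=500):
    """S: range of state indices; A: list of actions; P[s][a] -> list of (next_state, prob);
    R1[s][a], R2[s][a]: reward dictionaries."""
    def value_iteration(reward, allowed):
        V = np.zeros(len(S))
        for _ in range(iters):
            for s in S:
                V[s] = max(reward[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                           for a in allowed[s])
        return V

    def q_values(reward, V):
        return {s: {a: reward[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                    for a in A} for s in S}

    # 1. Optimize the primary objective over the full action sets.
    all_actions = {s: list(A) for s in S}
    V1 = value_iteration(R1, all_actions)
    Q1 = q_values(R1, V1)

    # 2. Keep only actions whose primary Q-value is within the slack of optimal.
    restricted = {s: [a for a in A if Q1[s][a] >= max(Q1[s].values()) - slack] for s in S}

    # 3. Optimize the secondary objective (NSE mitigation) over the restricted sets.
    V2 = value_iteration(R2, restricted)
    Q2 = q_values(R2, V2)
    policy = {s: max(restricted[s], key=lambda a: Q2[s][a]) for s in S}
    return policy
```

In this sketch the slack directly controls how much primary-objective value the agent may give up to avoid negative side effects; setting it to zero recovers the standard single-objective optimal policy.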


2019 ◽  
Vol 17 (3.5) ◽  
pp. BPI19-016
Author(s):  
Nancy Kassem ◽  
Halima El Omri ◽  
Mohamed Yassin ◽  
Shereen Elazzazy

Introduction: Rasburicase is a urate oxidase enzyme used for prophylaxis and treatment of hyperuricemia associated with tumor lysis syndrome (TLS). The recommended dose of rasburicase is 0.2 mg/kg/day for 5 days; however, recent studies have demonstrated the effectiveness of a single rasburicase dose in prophylaxis and management of hyperuricemia associated with TLS. Our institution's TLS guideline was updated to recommend the use of a single rasburicase dose (0.2 mg/kg). The primary objective of this study was to assess the efficacy of a single rasburicase dose in controlling uric acid (UA); the secondary objective was to evaluate the impact of the institutional TLS guidelines update on consumption and cost of rasburicase. Methods: This is a single-center retrospective cohort study including all patients who received rasburicase from August 2012 to March 2016 at the National Center for Cancer Care and Research (NCCCR) in Qatar. Patients were divided into 2 groups based on the prescribed number of rasburicase doses (single dose vs multiple doses). Collected data included patients' diagnosis, laboratory parameters, rasburicase dose, duration, and number of dispensed vials. UA levels within 24 hours and on day 5 of the initial rasburicase dose were evaluated. Risk stratification was determined according to institutional guidelines based on disease, white blood cell count, lactate dehydrogenase level, renal function, and UA level. Results: A total of 103 patients who received rasburicase were evaluated retrospectively; rasburicase was prescribed as a single dose for 65 patients (63%) and multiple doses for 38 patients (37%). The majority of patients who received rasburicase as single or multiple doses were at high risk of developing TLS, representing 68% and 84%, respectively. Baseline mean UA levels were similar in both groups: 5.4±2.9 mg/dL vs 4.7±3.2 mg/dL, respectively (P=.7). Normal or undetectable UA levels were observed within 24 hours in 98% of patients in the single-dose group and 100% of patients in the multiple-dose group. All patients in both groups had normal UA on day 5 of rasburicase, with relatively similar UA levels: 1.5±1.2 mg/dL vs 0.8±1 mg/dL (P=.18). Rasburicase consumption and cost were reduced by 42.5% after the guidelines update. Conclusion: A single rasburicase dose demonstrated efficacy in controlling serum UA levels. Updating the institutional TLS guidelines had a significant impact on rasburicase consumption and led to a significant cost reduction.


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. e16042-e16042
Author(s):  
Qirong Geng ◽  
Wenxiu Cheng ◽  
Zhiyu Chen ◽  
Wen Zhang ◽  
Xiaodong Zhu ◽  
...  

e16042 Background: Cetuximab provides a clear clinical benefit in the treatment of patients with RAS wild-type mCRC irrespective of treatment line, but the optimal sequence is still under investigation. Methods: Patients with RAS wild-type mCRC (2011-2019) who received cetuximab therapy were retrospectively analyzed. They were stratified by cetuximab treatment sequence into 1st-line, 2nd-line, and 3rd or later-line groups. The primary objective was to investigate the impact of cetuximab sequence (2nd vs 3rd or later-line) on PFS and OS. Patients who received 3rd or later-line cetuximab with irinotecan after becoming refractory to prior 1st- and 2nd-line combination chemotherapy with fluoropyrimidine, oxaliplatin, and irinotecan accrued an additional PFS from the 2nd-line chemotherapy (PFSchemo) in addition to the PFS of 3rd or later-line cetuximab. To evaluate the primary PFS objective, we therefore added PFSchemo to the PFS of 3rd or later-line cetuximab and compared the sum with the PFS of 2nd-line cetuximab. For the primary OS objective, OS was calculated from the start of 2nd-line treatment. The secondary objective was to compare the efficacy of cetuximab sequence (1st vs 2nd and later-lines) on OS calculated from the start of 1st-line treatment. Results: In total, 193 patients were included: 106 in the 1st-line, 41 in the 2nd-line, and 46 in the 3rd or later-line groups. No differences were observed in baseline characteristics (sex, age, site of primary tumour, number of metastatic sites) across the three groups. The median PFS of the 2nd-line and 3rd or later-line groups was 7.1 months (95% CI 6.39-7.80) and 13.87 months (95% CI 11.44-16.29), respectively. PFS of the 3rd or later-line group was significantly longer than that of the 2nd-line group (hazard ratio [HR], 0.552; 95% CI, 0.349 to 0.871; P = 0.01). Median OS was 17.8 months (95% CI 13.5-22.1) in the 2nd-line group and 27.4 months (95% CI 20.69-34.16) in the 3rd or later-line group (HR, 0.597; 95% CI 0.341 to 1.043; P = 0.07), calculated from the start of 2nd-line therapy. The median OS was 28.17 months (95% CI 22.11-34.22) in the 1st-line group and 33.10 months (95% CI 26.88-39.31) in the 2nd and later-lines group (HR, 0.724; 95% CI 0.507 to 1.304; P = 0.075), calculated from the start of 1st-line therapy. Conclusions: In a real-world cohort, later-line use of cetuximab, especially 3rd or later-line, may be more beneficial for patients with RAS wild-type mCRC, as reserving cetuximab for 3rd or later-line use gives patients one additional line of therapy.


2012 ◽  
Vol 33 (6) ◽  
pp. E2 ◽  
Author(s):  
Paul S. Echlin ◽  
Elaine N. Skopelja ◽  
Rachel Worsley ◽  
Shiroy B. Dadachanji ◽  
D. Rob Lloyd-Smith ◽  
...  

Object The primary objective of this study was to measure the incidence of concussion according to a relative number of athlete exposures among 25 male and 20 female varsity ice hockey players. The secondary objective was to present neuropsychological test results between preseason and postseason play and at 72 hours, 2 weeks, and 2 months after concussion. Methods Every player underwent baseline assessments using the Sport Concussion Assessment Tool-2 (SCAT2), Immediate Post-Concussion Assessment and Cognitive Test (ImPACT), and MRI. Each regular season and postseason game was observed by 2 independent observers (a physician and a nonphysician observer). Players with a diagnosed concussion were removed from the game, examined in the team physician's office using the SCAT2 and ImPACT, and sent to undergo MRI. Results Eleven concussions occurred during the 55 physician-observed games (20%). The incidence of concussion, expressed as number of concussions per 1000 athlete exposures, was 10.70 for men and women combined in regular season play, 11.76 for men and women combined across both the regular season and playoff season, 7.50 for men and 14.93 for women in regular season play, and 8.47 for men across both the regular season and playoff season. One male player experienced repeat concussions. No concussions were reported during practice sessions, and 1 concussion was observed and diagnosed in an exhibition game. Neuropsychological testing suggested no statistically significant preseason/postseason differences between athletes who sustained a physician-diagnosed concussion and athletes who did not sustain a physician-diagnosed concussion on either the ImPACT or SCAT2. The athletes who sustained a physician-diagnosed concussion demonstrated few reliable changes postinjury. Conclusions Although the incidence of game-related concussions per 1000 athlete exposures in this study was half the highest rate reported in the authors' previous research, it was 3 times higher than the incidence reported by other authors within the literature concerning men's collegiate ice hockey and 5 times higher than the highest rate previously reported for women's collegiate ice hockey. Interestingly, the present results suggest a substantively higher incidence of concussion among women (14.93) than men (7.50). The reproducible and significantly higher incidence of concussion among both men and women ice hockey players in physician-observed games, when compared with nonphysician-observed games, suggests a significant underestimation of sports concussion in the scientific literature.


2019 ◽  
Vol 6 ◽  
pp. 205435811988490
Author(s):  
Mark McIsaac ◽  
Gordon Kaban ◽  
Adam Clay ◽  
Warren Berry ◽  
Bhanu Prasad

Background: Obesity is recognized as an independent risk factor for chronic kidney disease through multiple direct and indirect biological pathways. Bariatric surgery is a proven, effective method for sustained weight loss. However, there is a relative paucity of data on the impact of bariatric surgery on renal outcomes. Objective: The primary objective was to evaluate the change in urine albumin/creatinine ratio (ACR) in patients undergoing bariatric surgery, at 12 months after the procedure. Secondary objectives were to determine the changes in ACR (6 and 24 months), estimated glomerular filtration rate (eGFR; 6, 12, and 24 months), and hemoglobin A1c (HbA1c; 12 and 24 months) after the procedure. Design: This observational retrospective cohort study included consecutive obese patients who underwent bariatric surgery. Setting: Provincial Bariatric Surgery Clinic at the Regina General Hospital, Saskatchewan. Patients: This study includes 471 consecutive obese adult patients who underwent bariatric surgery between 2008 and 2015. Measurements: We studied the impact of bariatric surgery on body mass index (BMI), renal outcomes (urine ACR and eGFR) and metabolic outcomes (fasting glucose, total cholesterol, low-density lipoprotein, triglycerides, and HbA1c) in 471 patients. Methods: Patients were followed for 2 years postsurgery in the bariatric clinic. Mixed linear models that accounted for the repeated nature of the data were used to assess changes in outcomes over time. Results: Patients were predominantly female (81%) with a mean age (±SD) of 46 ± 10 years. Most patients (87%) had a BMI > 40 kg/m2 and 81% of the patients underwent Roux-en-Y gastric bypass. The mean BMI decreased from 47.7 ± 7.8 kg/m2 at baseline to 37.1 ± 7.9 kg/m2 at 6 months and 34.8 ± 8.8 kg/m2 at 12 months. In a subcohort of patients with microalbuminuria, ACR improved from a median [interquartile range] value of 5.1 [3.7-7.5] mg/mmol at baseline to 2.3 [1.2-3.6] mg/mmol at 6 months (P = .007) and to 1.4 [0.9-3.7] mg/mmol at 2-year follow-up (P < .001). Similarly, eGFR increased in patients with microalbuminuria from 109 ± 10 mL/min/1.73 m2 at baseline to 120 ± 36 mL/min/1.73 m2 at 2-year follow-up (P = .013). There were statistically significant reductions in triglycerides, fasting glucose, and HbA1c. Limitations: This was a retrospective chart review, with the lack of a control group. Patients with eGFR less than 60 mL/min/1.73 m2 were not considered for surgery, and we had to measure renal outcomes predominantly on the presence of proteinuria. Conclusions: Our results suggest bariatric surgery significantly decreased weight and consequently improved renal and metabolic outcomes (eGFR, ACR, fasting glucose, cholesterol, and triglycerides) in patients with elevated BMI.
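
As a rough illustration of the repeated-measures analysis described in the Methods, a mixed linear model with a random intercept per patient could be specified as below. The column names, input file, and use of statsmodels are assumptions for illustration only, not the authors' analysis code.

```python
# Illustrative sketch of a mixed linear model for repeated measures: an outcome
# (e.g. urine ACR) modelled over follow-up time with a random intercept per
# patient, which accounts for the repeated nature of the data. Column names and
# the input file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: patient_id, months_post_surgery (0, 6, 12, 24), acr
df = pd.read_csv("bariatric_followup.csv")  # hypothetical file

# C(months_post_surgery) estimates the change at each follow-up relative to baseline;
# groups= gives each patient their own random intercept.
model = smf.mixedlm("acr ~ C(months_post_surgery)", data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())
```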


2015 ◽  
Vol 33 (2) ◽  
pp. 121-128 ◽  
Author(s):  
Hui Zheng ◽  
Wenjing Huang ◽  
Juan Li ◽  
Qianhua Zheng ◽  
Ying Li ◽  
...  

Objective To study whether a higher expectation of acupuncture measured at baseline and after acupuncture is associated with better outcome improvements in patients with migraine. Methods We performed a secondary analysis of a previously published trial in which 476 patients with migraine were randomly allocated to three real acupuncture groups and one sham acupuncture control group. All the participants received 20 sessions of acupuncture over a 4-week period. The primary outcome was the number of days with a migraine attack (NDMA) assessed at 5–8 weeks after randomisation. The secondary outcomes were visual analogue scale, headache intensity and quality of life assessed at 4, 8 and 16 weeks after randomisation. Expectations of the acupuncture effect were assessed at baseline and at the end of treatment and categorised into five levels, with 0% the lowest and 100% the highest. Outcome improvement was first compared among the participants with different expectation levels using an analysis of variance model. The association between expectations of treatment and outcome improvement was then estimated using a logistic regression model. Results Patients with 100% baseline expectations did not report significantly fewer NDMA than those with 0% baseline expectations after adjusting for the covariates (at 5–8 weeks, 1.7 vs 3.9 days, p=0.987). High baseline expectations had no significant impact on improvement of the primary outcome (100% vs 0%: OR 8.50, 95% CI 0.89 to 191.65, p=0.682). However, patients with 100% post-treatment expectations reported fewer NDMA than those with 0% expectations (primary outcome: 1.3 vs 5.0 days, p<0.001) and were more likely to have a favourable response (100% vs 0%: OR 68.87, 95% CI 6.26 to 1449.73, p=0.002). Similar results were found when analysing the impact of expectation on the secondary outcomes. Conclusions A high level of expectation after acupuncture treatment rather than at baseline was associated with better long-term outcome improvements in patients with migraine. Clinical Trial Number NCT00599586.


Author(s):  
Haneen Khreis ◽  
Kees de Hoogh ◽  
Josias Zietsman ◽  
Mark J. Nieuwenhuijsen

Many studies rely on air pollution modeling such as land use regression (LUR) or atmospheric dispersion (AD) modeling in epidemiological and health impact assessments. Generally, these models are validated using only one validation dataset, and their estimates at select receptor points are generalized to larger areas. The primary objective of this paper was to explore the effect of different validation datasets on the validation of air quality models. The secondary objective was to explore the effect of the spatial resolution of the model estimates on the models' validity at different locations. Annual NOx and NO2 concentrations were estimated using a LUR model and an AD model. These estimates were validated against four measurement datasets, once when estimates were made at the exact locations of the validation points and once when estimates were made at the centroid of the 100 m × 100 m grid cell in which the validation point fell. The validation results varied substantially based on the model and validation dataset used. The LUR models' R2 ranged between 21% and 58%, depending on the validation dataset. The AD models' R2 ranged between 13% and 56%, depending on the validation dataset and on whether constant or varying background NOx was used. The validation results based on model estimates at the exact validation site locations were much better than those based on the 100 m × 100 m grid. This paper demonstrates the value of validating modeled air quality against multiple datasets and suggests that the spatial resolution of the models' estimates has a significant influence on their validity at the point of application.
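
As an illustration of the validation exercise described above, the sketch below compares modelled NO2 against several independent monitoring datasets, once at the exact monitor locations and once at the containing grid-cell centroids, reporting a squared-correlation R2 for each. The function names and data layout are assumptions, not the paper's code.

```python
# Minimal sketch of validating modelled annual NO2 against multiple measurement
# datasets, at exact monitor locations vs 100 m x 100 m grid-cell centroids.
# model_at_point / model_at_centroid stand in for the LUR or AD model's
# prediction functions; all names are illustrative.
import numpy as np

def r_squared(observed, predicted):
    # Squared Pearson correlation, one common choice for LUR/AD validation.
    return float(np.corrcoef(observed, predicted)[0, 1] ** 2)

def validate(model_at_point, model_at_centroid, validation_datasets):
    """validation_datasets: name -> list of (x, y, measured_no2) monitor records."""
    results = {}
    for name, monitors in validation_datasets.items():
        measured = [m for _, _, m in monitors]
        exact = [model_at_point(x, y) for x, y, _ in monitors]
        grid = [model_at_centroid(x, y) for x, y, _ in monitors]
        results[name] = {"R2_exact": r_squared(measured, exact),
                         "R2_grid_centroid": r_squared(measured, grid)}
    return results
```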


2016 ◽  
Vol 30 (1) ◽  
pp. 37-41 ◽  
Author(s):  
Nora H. Sharaya ◽  
Megan F. Dorrell ◽  
Nick A. Sciacca

Purpose: The primary objective of this study was to determine the change in the adherence questionnaire score from the initial pharmacist intervention to 60 to 90 days of follow-up. The secondary objective of this study was to investigate the impact of the type of pharmacist intervention on questionnaire scores. Methods: Administration of an adherence questionnaire to guide interventions has become the standard of care for patients during appointments with clinical pharmacy specialists at 3 primary care clinics. Subjects who received a questionnaire between November 4, 2013, and January 15, 2014, were included. These subjects received a second questionnaire 60 to 90 days after the first questionnaire to identify changes resulting from the pharmacist's interventions. A scoring system was utilized to quantify patients' responses to both the preintervention and postintervention questionnaires. The type of intervention completed was determined at each pharmacist's clinical discretion. Results: Adherence scores increased significantly 60 to 90 days after administration of the questionnaire with a pharmacist's intervention. Medication reminders, simplifying medication regimens, discount program referrals, disease-state information, medication information, and therapeutic interchanges all increased adherence scores. Conclusion: A standardized tool to assess and address adherence was effectively utilized by 9 pharmacists at 3 clinics. The use of a standardized tool to guide adherence interventions is an effective way to increase adherence to medication therapy.

