Summaries of harms in systematic reviews are unreliable (Part 2 of 2): Given the same data sources, systematic reviews of gabapentin have different results for harms

2021 ◽  
Author(s):  
Riaz Qureshi ◽  
Evan Mayo-Wilson ◽  
Thanitsara Rittiphairoj ◽  
Mara McAdams-DeMarco ◽  
Eliseo Guallar ◽  
...  

Objective: In this methodologic study (Part 2 of 2), we examined the overlap in sources of evidence and the corresponding results for harms in systematic reviews of gabapentin. Study Design & Setting: We extracted all citations referenced as sources of evidence for harms of gabapentin from 70 systematic reviews, as well as the harms assessed and the numerical results. We assessed the consistency of harms between pairs of reviews with a high degree of overlap in sources of evidence (>50%), as determined by the corrected covered area (CCA). Whereas our focus in this paper is on the results for harms across reviews, Part 1 examines the methods used to assess harms. Results: We found 514 reports cited across the 70 included reviews. Most reports (244/514, 48%) were not cited in more than one review. Among 18 pairs of reviews, we found differences in which harms were assessed and in the choice to meta-analyze estimates or present qualitative summaries. When a specific harm was meta-analyzed in a pair of reviews, we found similar effect estimates. Conclusion: Differences in harms results across reviews can occur because the choice of harms is driven by reviewer preferences rather than by standardized approaches to selecting harms for assessment. A paradigm shift is needed in the current approach to synthesizing harms.
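The CCA metric used above can be computed directly from the citation matrix of reviews and reports. Below is a minimal sketch, assuming the standard definition CCA = (N − r)/(rc − r), where N is the total number of inclusion marks, r the number of distinct reports, and c the number of reviews; the function name and toy data are illustrative, not from the study:

```python
def corrected_covered_area(reviews):
    """CCA for a set of overlapping reviews.

    reviews: list of sets, one set of cited report IDs per review.
    Returns the overlap as a fraction (0 = no overlap, 1 = complete overlap).
    """
    c = len(reviews)                          # number of reviews (columns)
    r = len(set().union(*reviews))            # distinct reports (rows)
    n = sum(len(cited) for cited in reviews)  # total inclusion marks
    # CCA = (N - r) / (r*c - r): repeat citations beyond the first citation
    # of each report, as a share of the maximum possible repeat citations.
    return (n - r) / (r * c - r)

# Two reviews that share 2 of 4 distinct reports:
pair = [{"A", "B", "C"}, {"B", "C", "D"}]
cca = corrected_covered_area(pair)  # 0.5, i.e., a pair exceeding the 50% threshold
```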

2021 ◽  
Author(s):  
Riaz Qureshi ◽  
Evan Mayo-Wilson ◽  
Thanitsara Rittiphairoj ◽  
Mara McAdams-DeMarco ◽  
Eliseo Guallar ◽  
...  

Objective: In this methodologic study (Part 1 of 2), we examined the methods used to assess harms in systematic reviews and meta-analyses (SRMAs) of gabapentin. We compared the methods used with current recommendations for synthesizing harms. Study Design & Setting: We followed recommended systematic review practices. We selected reliable SRMAs of gabapentin (i.e., those meeting a pre-defined list of methodological criteria) that assessed at least one harm. We extracted and compared methods in four areas: pre-specification, searching, analysis, and reporting. Whereas our focus in this paper is on the methods used, Part 2 examines the results for harms across reviews. Results: We screened 4320 records and identified 157 SRMAs of gabapentin, 70 of which were reliable. Most reliable reviews (51/70; 73%) reported following a general guideline for SRMA conduct or reporting, but none reported following recommendations specifically for synthesizing harms. Across all domains assessed, review methods were designed to address questions of benefit and rarely included the additional methods recommended for evaluating harms. Conclusion: The approaches to assessing harms in the SRMAs we examined are tokenistic and unlikely to produce valid summaries of harms to guide decisions. A paradigm shift is needed. At a minimum, reviewers should describe any limitations to their assessment of harms and provide clearer descriptions of their methods for synthesizing harms.


Author(s):  
Birgitte Nørgaard ◽  
Eva Draborg ◽  
Jane Andreasen ◽  
Carsten Bogh Juhl ◽  
Jennifer Yost ◽  
...  

2018 ◽  
Vol 46 (3) ◽  
pp. 415-436 ◽  
Author(s):  
Bruce G. Taylor ◽  
Elizabeth A. Mumford ◽  
Weiwei Liu ◽  
Mark Berg ◽  
Maria Bohri

Little is known about the role of conflict management in explaining the victim–offender overlap. This article assesses the victim–offender overlap for adults (18-32) in intimate and nonintimate relationships, covering their relationships with their partner and with friends and acquaintances/strangers. Controlling for conceptually important variables, we explore whether different conflict management styles are associated with a respondent being in the victim-only, offender-only, both, or neither group (separately for verbal aggression, physical abuse for intimate and nonintimate relationships, and sexual abuse for intimate relationships). Data are from a nationally representative panel of U.S. households ( N = 2,284 respondents, of whom 871 women and 690 men reported being in an intimate partnership). We observed a high degree of overlap between victimization and offending across our abuse measures. We found a modestly consistent set of risk factors, for example, conflict management styles and self-control, for the victim–offender overlap across partner and nonpartner abuse experiences.


Author(s):  
Amruta Barhate ◽  
Prakash Bhatia

Background: The COVID-19 pandemic has brought the world to a standstill. What started on 16th March 2020 as 114 confirmed cases of COVID-19 in the country has now reached worrisome figures. The latest world scenario per WHO as of 30th November 2020 is as follows: World: 62,509,444 cases, 1,458,782 deaths; USA: 13,082,877 cases, 263,946 deaths; India: 9,431,691 cases, 137,139 deaths. It is evident that India ranks second worldwide in case load, and there is nothing to prevent India from becoming number one unless appropriate corrective steps are taken. Methods: The present study looked into various data sources available in the public domain. The study covered a period of almost nine months, from March 2020 to November 2020. It revealed a steady increase in the number of COVID-19 cases from March 2020, with the peak of the pandemic occurring in mid-September, followed by a steady decline of cases from October. Results: The data analysis shows that, after cases peaked in September, the epidemic will decline in a phased manner by the end of March 2021. Although a decline was seen from October, a spike of COVID-19 cases occurred in November in some states of India. Therefore, we cannot rule out the possibility of a second wave of the pandemic in December 2020 and January 2021. Conclusions: Appropriate and strict control measures have to be put in place for effective control of the pandemic and its resurgence.


Author(s):  
Abrar Alturkistani ◽  
Ching Lam ◽  
Kimberley Foley ◽  
Terese Stenfors ◽  
Elizabeth R Blum ◽  
...  

BACKGROUND Massive open online courses (MOOCs) have the potential to make a broad educational impact because many learners undertake these courses. Despite their reach, there is a lack of knowledge about which methods are used for evaluating these courses. OBJECTIVE The aim of this review was to identify current MOOC evaluation methods to inform future study designs. METHODS We systematically searched the following databases for studies published from January 2008 to October 2018: (1) Scopus, (2) Education Resources Information Center, (3) IEEE (Institute of Electrical and Electronics Engineers) Xplore, (4) PubMed, (5) Web of Science, (6) British Education Index, and (7) the Google Scholar search engine. Two reviewers independently screened the abstracts and titles of the studies. Published studies in the English language that evaluated MOOCs were included. The study design of the evaluations, the underlying motivation for the evaluation studies, data collection, and data analysis methods were quantitatively and qualitatively analyzed. The quality of the included studies was appraised using the Cochrane Collaboration Risk of Bias Tool for randomized controlled trials (RCTs) and the National Institutes of Health—National Heart, Lung, and Blood Institute quality assessment tool for cohort observational studies and for before-after (pre-post) studies with no control group. RESULTS The initial search resulted in 3275 studies, and 33 eligible studies were included in this review. In total, 16 studies used a quantitative study design, 11 used a qualitative design, and 6 used a mixed methods study design. In all, 16 studies evaluated learner characteristics and behavior, and 20 studies evaluated learning outcomes and experiences. A total of 12 studies used 1 data source, 11 used 2 data sources, 7 used 3 data sources, 2 used 4 data sources, and 1 used 5 data sources. Overall, 3 studies used more than 3 data sources in their evaluation.
In terms of data analysis methods, quantitative methods were most prominent, with descriptive and inferential statistics being the 2 most used. In all, 26 studies with a cross-sectional design had a low-quality assessment, whereas RCTs and quasi-experimental studies received a high-quality assessment. CONCLUSIONS MOOC evaluation data collection and data analysis methods should be determined carefully on the basis of the aim of the evaluation. MOOC evaluations are subject to bias, which could be reduced using pre-MOOC measures for comparison or by controlling for confounding variables. Future MOOC evaluations should consider using more diverse data sources and data analysis methods. INTERNATIONAL REGISTERED REPORT RR2-10.2196/12087


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
James E. Archer ◽  
Charles Baird ◽  
Adrian Gardner ◽  
Alison B. Rushton ◽  
Nicola R. Heneghan

Abstract Background Adult scoliosis represents a distinct subgroup of scoliosis patients for whom the diagnosis can have a large impact on health-related quality of life (HR-QOL). Therefore, HR-QOL patient-reported outcome measures (PROMs) are essential to assess disease progression and the impact of interventions. The objective of this systematic review is to evaluate the measurement properties of HR-QOL PROMs in adult scoliosis patients. Methods We will search multiple electronic databases, from their inception onwards, including AMED, CINAHL, EMBASE, MEDLINE, PsycINFO and PubMed. The searches will be performed in two stages. For both stages of the search, participants will be aged 18 and over with a diagnosis of scoliosis. The primary outcome of interest in the stage one searches will be studies which use PROMs to investigate HR-QOL as defined by the Core Outcome Measures in Effectiveness Trials (COMET) taxonomy; the secondary outcome will be to assess the frequency of use of the various PROMs. In stage two, the primary outcome of interest will be studies which assess the measurement properties of the HR-QOL PROMs identified in stage one. No specific measurement property will be given priority. No planned secondary outcomes have been identified, but any will be reported if discovered. In stage one, the only restriction on study design will be the exclusion of systematic reviews. In stage two, the only restriction will be the exclusion of full-text articles not available in the English language. Two reviewers will independently screen all citations and abstract data. Potential conflicts will be resolved through discussion. Study methodological quality (or risk of bias) will be appraised using the Consensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist.
The overall strength of the body of evidence will then be assessed using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach. A narrative synthesis will be provided, with information presented in the main text and tables to summarise and explain the characteristics and findings of the included studies. The narrative synthesis will explore the evidence for currently used PROMs in adult scoliosis patients and any areas that require further study. Discussion The review will help clinicians and researchers identify an HR-QOL PROM for use in patients with adult scoliosis. Findings from the review will be published and disseminated through a peer-reviewed journal and conference presentations. Systematic review registration This systematic review has been registered with the International Prospective Register of Systematic Reviews (PROSPERO), reference number: CRD42020219437


2021 ◽  
Author(s):  
Jason Meil

The data preparation process generally consumes up to 80% of a data scientist's time, with 60% of that attributed to cleaning and labeling data.[1] Our solution is to use automated pipelines to prepare, annotate, and catalog data. The first step upon ingestion, especially for real-world unstructured and unlabeled datasets, is to leverage Snorkel, a tool specifically designed around a paradigm to rapidly create, manage, and model training data. Configured properly, Snorkel can temper this labeling bottleneck through a process called weak supervision. Weak supervision uses programmatic labeling functions (heuristics, distant supervision, SME input, or a knowledge base) scripted in Python to generate "noisy" labels. Each function traverses the entirety of the dataset and feeds the labeled data into a generative (conditionally probabilistic) model. The function of this model is to output the distribution of each response variable and predict the conditional probability based on a joint probability distribution algorithm. This is done by comparing the various labeling functions and the degree to which their outputs are congruent with each other. A single labeling function that has a high degree of congruence with the other labeling functions will have a high learned accuracy, that is, the fraction of predictions that the model got right; conversely, a labeling function with a low degree of congruence with the other functions will have a low learned accuracy. Each prediction is then combined by the estimated weighted accuracy, whereby the predictions of the more accurate functions are counted multiple times. The result is a transformation from a binary classification of 0 or 1 to a fuzzy label between 0 and 1: there is probability "x" that, based on heuristic "n", the response variable is "y".

As data are added to this generative model, multi-class inference is made on the response variables positive, negative, or abstain, assigning probabilistic labels to potentially millions of data points. Thus, we have generated a discriminative ground truth for all further labeling efforts and have improved the scalability of our models. The labeling functions can then be applied to unlabeled data to further machine learning efforts.

Once our datasets are labeled and a ground truth is established, we persist the data into our delta lake, since it combines the most performant aspects of a warehouse with the low-cost storage of a data lake. In addition, the lake can accept unstructured, semi-structured, or structured data sources, and those sources can be further aggregated into raw-ingestion, cleaned, and feature-engineered data layers. By sectioning the data sources into these "layers", the data engineering portion is abstracted away from the data scientist, who can access model-ready data at any time. Data can be ingested via batch or stream.

The design of the entire ecosystem is to eliminate as much technical debt in machine learning paradigms as possible in terms of configuration, data collection, verification, governance, extraction, analytics, process management, resource management, infrastructure, monitoring, and post-verification.
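The accuracy-weighting idea described above can be sketched without Snorkel itself. The following is a simplified, illustrative stand-in, not Snorkel's actual API: the labeling functions are hypothetical heuristics, and a crude agreement-based weight takes the place of Snorkel's generative model.

```python
import math

ABSTAIN = 0  # labeling functions emit +1 (positive), -1 (negative), or abstain

def lf_contains_great(text):      # hypothetical heuristic labeling functions
    return 1 if "great" in text else ABSTAIN

def lf_contains_terrible(text):
    return -1 if "terrible" in text else ABSTAIN

def lf_exclamation(text):
    return 1 if text.endswith("!") else ABSTAIN

def apply_lfs(lfs, docs):
    """Traverse the dataset with every labeling function: docs x LFs matrix."""
    return [[lf(d) for lf in lfs] for d in docs]

def agreement_weights(label_matrix):
    """Weight each LF by how often it agrees with the other LFs when both
    voted; a stand-in for the learned accuracies of the generative model."""
    n_lfs = len(label_matrix[0])
    weights = []
    for j in range(n_lfs):
        agree = total = 0
        for row in label_matrix:
            for k in range(n_lfs):
                if k != j and row[j] != ABSTAIN and row[k] != ABSTAIN:
                    total += 1
                    agree += row[j] == row[k]
        weights.append(agree / total if total else 0.5)
    return weights

def fuzzy_labels(label_matrix, weights):
    """Combine the weighted votes into a fuzzy label between 0 and 1."""
    out = []
    for row in label_matrix:
        score = sum(w * v for w, v in zip(weights, row) if v != ABSTAIN)
        out.append(1 / (1 + math.exp(-score)))  # squash to (0, 1)
    return out

docs = ["great product!", "terrible service", "works great", "arrived today"]
L = apply_lfs([lf_contains_great, lf_contains_terrible, lf_exclamation], docs)
probs = fuzzy_labels(L, agreement_weights(L))
# probs[0] leans positive, probs[1] leans negative, probs[3] stays at 0.5
# because every function abstained on it.
```

In Snorkel proper, the `LabelModel` learns these accuracies jointly rather than from pairwise agreement, but the output is the same shape: one probabilistic label per data point.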


2020 ◽  
Vol 4 (Supplement_2) ◽  
pp. 1377-1377
Author(s):  
Karima Benkhedda ◽  
Stephen Brooks ◽  
Linda Greene-Finestone ◽  
Shannon Kelly ◽  
Amanda MacFarlane ◽  
...  

Abstract Objectives: To develop and validate a set of 3 quality assessment instruments (QAIs) for evaluating the quality of nutrition studies, one for each of the commonly used study designs: (1) randomized controlled trials (RCTs), (2) prospective cohort studies, and (3) case-control studies. Methods: The QAI development and validation process included 8 steps: 1) identify and evaluate existing general QAIs for adaptation with nutrition-specific quality appraisal items; 2) scan the literature to identify nutrition-specific quality appraisal issues; 3) generate nutrition-specific items to be added to each of the general QAIs, adapt existing guidance for general items for nutrition applications, and develop guidance for the added nutrition items; 4) review, by two experts in clinical and population nutrition, of the modified general QAIs with added nutrition-specific items and guidance; 5) assess the reliability and validity of the QAI for each study design; 6) improve the usability and feasibility of the QAIs by using feedback from the validation exercise to refine the wording of the guidance; 7) develop a worksheet to help evaluate, a priori, topic-specific methodology to address risk of bias; and 8) validate the final QAIs using five peer-reviewed studies identified from published systematic reviews with reported quality assessment. Agreement and reliability were determined for each QAI. Results: The validation showed good to perfect agreement among evaluators for the overall study rating and across domains. When compared to the study quality assessment reported in the systematic review, nutrition-specific items had the greatest impact on study ratings, generally resulting in a downgrade of the overall rating. Conclusions: A set of nutrition-specific QAIs was developed to assess the quality and robustness of nutrition studies. These tools incorporate general quality issues of study design and conduct, as well as recognised nutrition study-specific issues.
They will improve consistency in how nutrition studies are assessed, particularly in nutrition-related systematic reviews, and will contribute to the overall quality of assessment of diet and nutrition evidence. Funding Sources: This work was supported by Health Canada.

