quality checks
Recently Published Documents



2021 ◽  
Vol 9 (12) ◽  
pp. 2560
Author(s):  
Abdolrahman Khezri ◽  
Ekaterina Avershina ◽  
Rafi Ahmad

Emerging new sequencing technologies have provided researchers with a unique opportunity to study factors related to microbial pathogenicity, such as antimicrobial resistance (AMR) genes and virulence factors. However, the use of whole-genome sequence (WGS) data requires good knowledge of the bioinformatics involved, as well as the necessary techniques. In this study, a total of nine Escherichia coli and Klebsiella pneumoniae isolates from Norwegian clinical samples were sequenced using both MinION and Illumina platforms. Three out of nine samples were sequenced directly from blood culture, and one sample was sequenced from a mixed-blood culture. For genome assembly, several long-read (Canu, Flye, Unicycler, and Miniasm), short-read (ABySS, Unicycler, and SPAdes), and hybrid assemblers (Unicycler, hybridSPAdes, and MaSuRCA) were tested. Assembled genomes from the best-performing assemblers (according to quality checks using QUAST and BUSCO) were subjected to downstream analyses. Flye and Unicycler performed best for the assembly of long and short reads, respectively. For hybrid assembly, Unicycler was the top-performing assembler and produced more circularized and complete genome assemblies. Hybrid-assembled genomes performed substantially better in downstream analyses to predict putative plasmids, AMR genes, and β-lactamase gene variants than MinION or Illumina assemblies. Thus, hybrid assembly has the potential to reveal factors related to microbial pathogenicity in clinical and mixed samples.
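Assembly quality tools such as QUAST report contiguity metrics like N50, which can be used to rank candidate assemblies of the same isolate. A minimal sketch of that comparison, with hypothetical contig-length lists standing in for real assembler output:

```python
def n50(contig_lengths):
    """Return the N50: the contig length at which contigs of that
    length or longer contain at least half the total assembly size."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Hypothetical contig lengths for two assemblies of one isolate.
assemblies = {
    "flye": [4_800_000, 120_000, 60_000],
    "abyss": [900_000, 850_000, 700_000, 650_000, 500_000],
}
best = max(assemblies, key=lambda name: n50(assemblies[name]))
```

Here the long-read assembly wins on contiguity; in practice QUAST and BUSCO combine several such metrics (total length, misassemblies, gene completeness) rather than N50 alone.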


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 567
Author(s):  
Christina Vasilopoulou ◽  
Benjamin Wingfield ◽  
Andrew P. Morris ◽  
William Duddy

Quality control of genomic data is an essential but complicated multi-step procedure, often requiring separate installation and expert familiarity with a combination of different bioinformatics tools. Software incompatibilities, and inconsistencies across computing environments, are recurrent challenges, leading to poor reproducibility. Existing semi-automated or automated solutions lack comprehensive quality checks, flexible workflow architecture, and user control. To address these challenges, we have developed snpQT: a scalable, stand-alone software pipeline using Nextflow and BioContainers, for comprehensive, reproducible and interactive quality control of human genomic data. snpQT offers some 36 discrete quality filters or correction steps in a complete standardised pipeline, producing graphical reports to demonstrate the state of the data before and after each quality control procedure. This includes human genome build conversion, population stratification against data from the 1,000 Genomes Project, automated population outlier removal, and built-in imputation with its own pre- and post-quality controls. Common input formats are used, and a synthetic dataset and comprehensive online tutorial are provided for testing, educational purposes, and demonstration. The snpQT pipeline is designed to run with minimal user input and coding experience; quality control steps are implemented with numerous user-modifiable thresholds, and workflows can be flexibly combined in custom combinations. snpQT is open source and freely available at https://github.com/nebfield/snpQT. A comprehensive online tutorial and installation guide, covering the workflow through to GWAS, is provided (https://snpqt.readthedocs.io/en/latest/), introducing snpQT using a synthetic demonstration dataset and a real-world Amyotrophic Lateral Sclerosis SNP-array dataset.
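Pipelines like snpQT chain per-variant filters such as call-rate (missingness) and minor-allele-frequency thresholds. A minimal sketch of one such filter; the thresholds here are illustrative defaults, not snpQT's own:

```python
def pass_qc(genotypes, max_missing=0.02, min_maf=0.01):
    """Check one biallelic SNP against two common QC filters:
    per-variant missingness and minor allele frequency (MAF).
    `genotypes` holds 0/1/2 alternate-allele counts; None = missing."""
    n = len(genotypes)
    called = [g for g in genotypes if g is not None]
    missing_rate = 1 - len(called) / n
    if missing_rate > max_missing:
        return False          # too many missing calls
    if not called:
        return False
    af = sum(called) / (2 * len(called))   # alternate allele frequency
    maf = min(af, 1 - af)
    return maf >= min_maf     # drop (near-)monomorphic variants
```

The same logic is applied per sample (sample call rate, heterozygosity) in a full pipeline; the value of a tool like snpQT is standardising the order and thresholds of dozens of such steps.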


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Shengli Song ◽  
Miriam Manook ◽  
Jean Kwun ◽  
Annette M. Jackson ◽  
Stuart J. Knechtle ◽  
...  

Abstract Multiplex immunoassays with acellular antigens are well-established based on solid-phase platforms such as the Luminex® technology. Cell barcoding by amine-reactive fluorescent dyes enables analogous cell-based multiplex assays, but requires multiple labeling reactions and quality checks prior to every assay. Here we describe generation of stable, fluorescent protein-barcoded reporter cell lines suitable for multiplex screening of antibody to membrane proteins. The utility of this cell-based system, with the potential of a 256-plex cell panel, is demonstrated by flow cytometry deconvolution of barcoded cell panels expressing influenza A hemagglutinin trimers, or native human CCR2 or CCR5 multi-span proteins and their epitope-defining mutants. This platform will prove useful for characterizing immunity and discovering antibodies to membrane-associated proteins.
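A 256-plex panel follows from treating each of eight fluorescent proteins as one bit of a binary barcode (2^8 = 256 combinations). A minimal sketch of deconvolving a cell's marker on/off pattern into a barcode ID; the bit ordering is an assumption for illustration, not the authors' scheme:

```python
def decode_barcode(marker_positive):
    """Map a cell's fluorescent-marker on/off pattern to an integer
    barcode ID. With 8 markers, 2**8 = 256 distinct IDs are possible."""
    barcode = 0
    for bit, positive in enumerate(marker_positive):
        if positive:
            barcode |= 1 << bit   # marker i contributes bit i
    return barcode
```

In flow cytometry the per-marker positive/negative call would come from gating on each fluorescence channel before this lookup.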


Author(s):  
R. A. B. Rivera ◽  
E. N. B. Idago ◽  
A. C. Blanco ◽  
K. A. P. Vergara

Abstract. With the problem of informal settlements in the Philippines, mapping such areas is the first step towards improvement. Object-based image analysis (OBIA) has been a powerful tool for mapping and feature extraction, especially for high-resolution datasets. In this study, an informal settlement area in UP Diliman, Quezon City was chosen as the subject site, where individual informal settlement structures (ISS) were delineated and counted using OBIA. With the help of photogrammetry and image enhancement techniques, derivatives such as an elevation model and orthophotos were produced for easier interpretation. An initial rule-set was developed to remove all non-ISS features from the base image, utilizing spectral values and thematic layers as the main classifiers. This classification technique yielded 94% accuracy for the non-ISS class and 92% for the possible-ISS class. Another rule-set was then developed to delineate individual ISS based on the texture and elevation model of the area, which paved the way for the estimation of the ISS count. To test the robustness of the methodology, the estimation results were compared to the manual count obtained through an online survey form, and the classification and delineation results were assessed through overall and individual quality checks. The estimation yielded a relative accuracy of 60%, which came from the delineation rate of 63%. Delineation accuracy was calculated through area-based and number-based measures, yielding 58% and 95%, respectively. Issues such as noisy elevation models and physical limitations of the area and of the survey affected the accuracy of the results.
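One plausible way to express a count-based relative accuracy like the one reported above is one minus the absolute counting error as a fraction of the manual reference count. A sketch under that assumption (the paper's exact formula is not given here):

```python
def relative_accuracy(estimated, reference):
    """Relative accuracy of a count estimate: 1 minus the absolute
    error as a fraction of the reference count, floored at zero.
    This is one common definition, assumed here for illustration."""
    if reference == 0:
        return 0.0
    return max(0.0, 1 - abs(estimated - reference) / reference)
```

For example, automatically delineating 63 structures against a manual count of 100 would give 0.63 under this definition, mirroring how the 60% figure follows from the 63% delineation rate.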


2021 ◽  
Author(s):  
Ferran Espuny Pujol ◽  
Christina Pagel ◽  
Katherine L Brown ◽  
James C Doidge ◽  
Richard G Feltbower ◽  
...  

Objectives To link five national datasets (three registries, two administrative) and create longitudinal health care trajectories for patients with congenital heart disease (CHD), describing the quality and the summary statistics of the linked dataset. Design Bespoke linkage of record-level patient identifiers across five national datasets. Generation of spells of care defined as periods of time-overlapping events across the datasets. Setting National congenital heart disease audit (NCHDA) procedures in public (NHS) hospitals in England and Wales, paediatric and adult intensive care datasets (PICANet and ICNARC-CMP), administrative hospital episodes (HES inpatient, outpatient, A&E), and mortality registry data. Participants Patients with any CHD procedure recorded in NCHDA between April 2000 and March 2017 from public hospitals. Primary and secondary outcome measures Primary outcomes: Number of linked records, number of unique patients and number of generated spells of care (e.g. inpatient stays, outpatient appointments). Secondary outcomes: Quality and completeness of linkage. Results There were 143,862 records in NCHDA relating to 96,041 unique patients. We identified 65,797 linked PICANet patient admissions, 4,664 linked ICNARC-CMP admissions, and over 6 million linked HES episodes of health care (1.1M inpatient, 4.7M outpatient). The 96,041 unique patients had 4,908,153 spells of care comprising 6,481,600 records after quality checks. Considering only years where datasets overlapped, 95.6% of surgical procedure records were linked to a corresponding HES record, 93.9% of paediatric (cardiac) surgery procedure records were linked to a corresponding PICANet admission, and 76.8% of adult surgery procedure records were linked to a corresponding ICNARC-CMP record. Conclusions We successfully linked four national datasets to the core dataset of all CHD procedures performed between 2000 and 2017.
This will enable a much richer analysis of longitudinal patient journeys and outcomes. We hope that our detailed description of the linkage process will be useful to others looking to link national datasets to address important research priorities.
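Spells of care defined as periods of time-overlapping events can be generated with a standard interval-merge pass over sorted events. A minimal sketch, assuming each event across the datasets has been reduced to a (start, end) pair of day numbers:

```python
def build_spells(events):
    """Merge time-overlapping (or touching) care events into spells.
    Each event is a (start, end) pair; returns merged spells sorted
    by start time."""
    spells = []
    for start, end in sorted(events):
        if spells and start <= spells[-1][1]:
            # Event overlaps the current spell: extend its end if needed.
            spells[-1] = (spells[-1][0], max(spells[-1][1], end))
        else:
            spells.append((start, end))
    return spells
```

So a surgical procedure (days 1–5), an overlapping PICU admission (days 4–8), and a later outpatient visit (days 10–12) collapse into two spells. Real record linkage adds per-patient grouping and dataset-specific date cleaning on top of this core step.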


Blood ◽  
2021 ◽  
Vol 138 (Supplement 1) ◽  
pp. 4632-4632
Author(s):  
Merryl Lobo ◽  
Alice Motovylyak ◽  
Madhuri Madasu ◽  
Rohit Sood

Abstract Myelofibrosis (MF) is a type of chronic blood cancer characterized by bone marrow fibrosis, extramedullary hematopoiesis and splenomegaly. Approximately 89% of patients present with palpable splenomegaly, with a compromised quality of life and reduced survival. The International Working Group-Myeloproliferative Neoplasms Research and Treatment (IWG-MRT) criteria utilize spleen volume (SV) as part of the clinical improvement (CI) response in MF trials. These include evaluation of spleen response as a primary/secondary endpoint, defined as ≥35% SV reduction from baseline. Progressive disease is defined as a ≥25% increase in SV from the baseline level. Magnetic resonance imaging (MRI) and computed tomography (CT) provide a non-invasive way to assess change in spleen size both spatially and temporally in a clinical study. While image acquisition with optimized and harmonized protocols is key, a central independent review of images also plays a critical role in correct patient outcome determination. The aim of this study is to determine whether a double read model for central independent review is necessary to maintain a high accuracy of SV estimation. To this end, alignment among independent readers over review criteria was assessed (inter-reader variability, IRV), in addition to consistency in review approach (intra-reader variability, ARV). Retrospective analysis was implemented on imaging data across 12 multi-center MF trials: MRI/CT images from two time-points (baseline and one follow-up) from 142 trial participants for ARV analysis and 85 trial participants for IRV analysis. All images passed image quality checks and were processed for manual segmentation of the spleen by image analysts, followed by an over-read by trained radiologists. The spleen volume was calculated as the sum of the spleen cross-sectional area across all slices multiplied by the slice interval.
For ARV analysis, the images were presented to the same readers in a blinded fashion at least three weeks after the initial review. For IRV analysis, images read by a primary reader were then presented to the secondary reader. The percent discrepancies for ARV and IRV were calculated as the difference between the primary and secondary spleen volumes, divided by the average of the two. The average ARV discrepancy was 0.37±0.55% (mean±standard deviation), as shown in Fig 1a. No subjects had an ARV discrepancy of more than 5%. As shown in Fig 2a, the majority of cases were under an ARV discrepancy of 1%. These results show excellent consistency in the approach of readers over time, compared to the 2.8±3.5% reported by Harris et al (European Journal of Radiology, 2010). For IRV, the average discrepancy was 0.62±0.85%, as shown in Fig 1b; 1.1% of cases had an IRV discrepancy of more than 5%. As shown in Fig 2b, most of the cases were within an IRV discrepancy of 1%. These results show a high level of alignment between readers in their imaging review approach, compared to the 6.4±9.8% reported by Harris et al (European Journal of Radiology, 2010). The high level of reliability and repeatability seen across radiological reads suggests that a single read model is sufficient to assess imaging volumetrics-based endpoints. It is important to note that a multi-step approach was used to thoroughly train, test and monitor independent readers throughout the study duration. Readers were chosen based on a high level of experience with the indication and the analysis application. Reader onboarding involved an accurate overview and clear instruction on the review assessments. Multiple MRI/CT imaging cases were utilized for reader testing and training.
Since image quality can be a significant factor influencing the confidence level of a reader, these cases reflected examples of imaging artifacts expected on such trials, such as motion artifacts, low image resolution, ghosting and low contrast-to-noise ratio. Routine quality checks and variability assessments were done throughout the trial duration, with prompt corrective action taken to prevent inaccuracy of study results. These actions included issuing training points or re-reads of cases that contained established errors. Further work is necessary to assess how variables such as spleen size, imaging artifacts and change in imaging modality affect reader variability. Figure 1. Disclosures No relevant conflicts of interest to declare.
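The volume and discrepancy calculations described above reduce to two simple formulas: volume as the summation of per-slice cross-sectional areas times the slice interval, and discrepancy as the difference between two reads over their average. A minimal sketch (units and argument names are illustrative):

```python
def spleen_volume(slice_areas_mm2, slice_interval_mm):
    """Spleen volume as the sum of per-slice cross-sectional areas
    multiplied by the slice interval (summation-of-areas method)."""
    return sum(slice_areas_mm2) * slice_interval_mm

def percent_discrepancy(vol_a, vol_b):
    """Reader discrepancy: absolute difference between two volume
    reads, divided by their average, expressed as a percentage."""
    return abs(vol_a - vol_b) / ((vol_a + vol_b) / 2) * 100
```

With this definition, two reads of 1000 and 1010 mm³ differ by just under 1%, which is the scale of the average ARV (0.37%) and IRV (0.62%) discrepancies reported in the study.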


Safety ◽  
2021 ◽  
Vol 7 (4) ◽  
pp. 76
Author(s):  
Tristan W. Casey ◽  
Hannah M. Mason ◽  
Jasmine Huang ◽  
Richard C. Franklin

Injuries sustained while performing electrical work are a significant threat to the health and safety of workers and occur frequently. In some jurisdictions, non-fatal serious incidents have increased in recent years. Although significant work has been carried out on electrical safety from a human factors perspective, reviews of this literature are sparse. Thus, the purpose of this review is to collate and summarize human factors implicated in electrical safety events. Articles were collected from three databases (Scopus, Web of Science, and Google Scholar), using the search terms: safety, electri*, human factors, and arc flash. Titles and abstracts were screened, full-text reviews were conducted, and 18 articles were included in the final review. Quality checks were undertaken using the Mixed Methods Appraisal Tool and the Critical Appraisal Skills Program. Environmental, individual, team, organizational, and macro factors were identified in the literature as factors which shape frontline electrical worker behavior, highlighting the complexity of injury prevention. The key contributions of this paper include: (1) a holistic and integrated summary of human factors implicated in electrical safety events, (2) the application of an established theoretical model to explain dynamic forces implicated in electrical safety incidents, and (3) several practical implications and recommendations to improve electrical safety. It is recommended that this framework be used to develop and test future interventions at the individual, team, organizational, and regulator level to mitigate risk and create meaningful and sustainable change in the electrical safety space.


Author(s):  
Anoop Velayudhan ◽  
Suresh Seshadri ◽  
Sujatha Jagadeesan ◽  
Jayanti Saravanan ◽  
Rajesh Yadav ◽  
...  

The Birth Defects Registry of India-Chennai (BDRI-C) was created in 2001 to monitor birth defects and provide timely referrals. Using established guidelines to evaluate surveillance systems, we examined the following attributes of BDRI-C to help strengthen the registry: simplicity, flexibility, data quality, representativeness, acceptability, timeliness, and stability. We reviewed BDRI-C documents, including reporting forms; interviewed key informants; and calculated data completeness, coverage, and reporting time. BDRI-C captured 14% of the births in Chennai from April 2013 to March 2014. About 7% of institutions in Chennai registered in BDRI-C, and of those registered, 37% provided data in 2013. The median reporting time was 44 days after birth in 2013. BDRI-C is a useful, simple, flexible, and timely passive birth defects surveillance system; however, improvements can be made to ensure BDRI-C is representative of Chennai, data processing and quality checks are ongoing, and the system is acceptable to member institutions and stable.


2021 ◽  
Author(s):  
Amin Shoari Nejad ◽  
Andrew C. Parnell ◽  
Alice Greene ◽  
Peter Thorne ◽  
Brian P. Kelleher ◽  
...  

Abstract. We provide an updated sea level dataset for Dublin for the period 1938 to 2016 at yearly resolution. Using a newly collated sea level record for Dublin Port, as well as two nearby tide gauges at Arklow and Howth Harbour, we perform data quality checks and calibration of the Dublin Port record by adjusting the biased high water level measurements that affect the overall calculation of mean sea level (MSL). To correct these MSL values, we use a novel Bayesian linear regression that includes the Mean Low Water values as a predictor in the model. We validate the re-created MSL dataset and show its consistency with other nearby tide gauge datasets. Using our new corrected dataset, we estimate a rate of 1.08 mm/yr sea level rise at Dublin Port between 1953 and 2016 (95% CI from 0.62 to 1.55 mm/yr), and a rate of 6.48 mm/yr between 1997 and 2016 (95% CI 4.22 to 8.80 mm/yr). Overall sea level rise is in line with expected trends, but large multidecadal variability has led to higher rates of rise in recent years.
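The study's rates come from a Bayesian linear regression with Mean Low Water as a predictor; as a much simpler illustration of what a mm/yr rate over a yearly MSL series means, here is the closed-form ordinary least-squares slope (not the authors' model):

```python
def trend_mm_per_year(years, msl_mm):
    """Ordinary least-squares slope of mean sea level (mm) against
    year, i.e. a linear trend in mm/yr, via the closed-form formula."""
    n = len(years)
    mean_t = sum(years) / n
    mean_y = sum(msl_mm) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in zip(years, msl_mm))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den
```

Fitting this over different windows (e.g. 1953–2016 versus 1997–2016) is what makes the estimated rate sensitive to multidecadal variability, as the abstract notes.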


2021 ◽  
Vol 99 (Supplement_3) ◽  
pp. 75-75
Author(s):  
Manoj M Lalu

Abstract Publication in scientific journals remains the primary method to disseminate research findings; however, the landscape of scientific publication is rapidly changing. For instance, although open access publication has led to unprecedented opportunities to share information with the global scientific community, it has also contributed to the rise of “predatory journals.” These journals accept fees to publish articles without promised quality checks (e.g. peer review). In order to better understand current publication practices and the threat predatory journals pose, this session will: 1) Briefly summarize the history of scientific publication and how the current model of peer-reviewed publication developed. 2) Define predatory journals and review components of the international consensus definition (false or misleading information, deviation from best editorial and publication practices, lack of transparency, aggressive/indiscriminate solicitation; Nature doi.org/10.1038/d41586-019-03759-y). 3) Summarize empirical studies that have assessed the current burden of predatory journals. A broad group of stakeholders are affected by these journals, including researchers and the public. 4) Provide a practical approach for audience members to distinguish between predatory and legitimate journals. 5) Highlight some key developments that will lead to new publication models in the future.

