manual review
Recently Published Documents

TOTAL DOCUMENTS: 164 (FIVE YEARS: 80)
H-INDEX: 13 (FIVE YEARS: 3)

2021
Author(s): Ben C. Shirley, Eliseos J Mucaki, Joan H.M. Knoll, Peter K Rogan

Background: In a major radiation incident, the speed of sample processing and interpretation of estimated exposures will be critical for triaging individuals. The Automated Dicentric Chromosome (DC) Identifier and Dose Estimator System (ADCI) selects and processes images to identify DCs and determines radiation dose without manual review. The goal of this study was to broaden accessibility and speed of this system with data parallelization while protecting data and software integrity. Methods: ADCI_Online is a secure web-streaming platform that can be accessed worldwide from distributed local nodes. Data and software are separated until they are linked for estimation of radiation exposures. Performance is assessed with data from multiple biodosimetry laboratories. Results: Dose estimates from ADCI_Online are identical to ADCI running on dedicated GPU-accelerated hardware. Metaphase image processing, automated image selection, calibration curve generation, and radiation dose estimation of a typical set of samples of unknown exposures were completed in <2 days. Parallelized processing and analysis of samples at the scale of an intermediate-sized radiation accident (54,595 metaphase images), using cloned software instances on different hardware configurations, accelerated estimation of radiation doses to within clinically relevant time frames. Conclusions: The ADCI_Online streaming platform is intended for on-demand, standardized radiation research assessment, biodosimetry proficiency testing, inter-laboratory comparisons, and training. The platform has the capacity to handle analytic bottlenecks in intermediate to large radiation accidents or events.
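The data parallelization the abstract describes — splitting a large sample set across cloned software instances — can be sketched as follows. This is a minimal illustration, not the ADCI implementation; `score_image` is a hypothetical stand-in for the real dicentric-detection step, and the batches are processed sequentially here rather than on separate nodes.

```python
# Minimal sketch of data parallelization over a metaphase image set:
# partition the images into near-equal batches, one per worker/instance.

def chunk(items, n_workers):
    """Split items into n_workers near-equal batches, preserving order."""
    k, r = divmod(len(items), n_workers)
    batches, start = [], 0
    for i in range(n_workers):
        size = k + (1 if i < r else 0)
        batches.append(items[start:start + size])
        start += size
    return batches

def score_image(image_id):
    # Placeholder: the real system counts dicentric chromosomes per image.
    return {"image": image_id, "dc_count": 0}

def process_in_parallel(image_ids, n_workers=4):
    # In production each batch would run on a separate cloned instance;
    # here the batches are processed in turn to keep the sketch simple.
    results = []
    for batch in chunk(image_ids, n_workers):
        results.extend(score_image(i) for i in batch)
    return results
```

Because each batch is independent, the same partitioning works whether the workers are processes on one GPU-accelerated machine or distributed cloud instances.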


Author(s): Morgan Macleod, Elena Anagnostopolou, Dionysios Mertyris, Christina Sevdali
Keyword(s): Web Site

Abstract The DiGreC (DIachrony of GREek Case) treebank is a corpus of selected sentences from Greek texts, ranging from Homer to Modern Greek, which have been annotated morphosyntactically and semantically. The corpus comprises excerpts from 655 texts, for a total of 3385 sentences and 56,440 word tokens; automated tagging and lemmatisation have been supplemented with manual review to ensure accuracy. The data exist in XML and CSV formats, which can be manipulated and converted automatically to other schemata. A web site has also been created to allow users to interact with the data more easily, and to provide specialised functionality for searching and visualisation. This corpus was created to inform theoretical debates regarding the role of case in grammar, and may be of use to researchers searching for specific attestations of a range of different constructions in Greek.
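The abstract notes that the data exist in XML and CSV and convert automatically between schemata. A minimal sketch of one such conversion is below; the element and attribute names (`sentence`, `token`, `form`, `lemma`, `case`) are hypothetical, since the actual DiGreC schema is not specified here.

```python
# Sketch: flatten per-token XML annotations into CSV rows.
import csv
import io
import xml.etree.ElementTree as ET

def tokens_to_csv(xml_text):
    """Convert a (hypothetical) token-annotated XML corpus to CSV text."""
    root = ET.fromstring(xml_text)
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    writer.writerow(["sentence", "form", "lemma", "case"])
    for sent in root.iter("sentence"):
        for tok in sent.iter("token"):
            writer.writerow([sent.get("id"), tok.get("form"),
                             tok.get("lemma"), tok.get("case")])
    return out.getvalue()
```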


2021
Vol 22 (1)
Author(s): Meifang Qi, Utthara Nayar, Leif S. Ludwig, Nikhil Wagle, Esther Rheinbay

Abstract Background Exogenous cDNA introduced into an experimental system, either intentionally or accidentally, can appear as added read coverage over that gene in next-generation sequencing libraries derived from this system. If not properly recognized and managed, this cross-contamination with exogenous signal can lead to incorrect interpretation of research results. Yet, this problem is not routinely addressed in current sequence processing pipelines. Results We present cDNA-detector, a computational tool to identify and remove exogenous cDNA contamination in DNA sequencing experiments. We demonstrate that cDNA-detector can identify cDNAs quickly and accurately from alignment files. A source inference step attempts to separate endogenous cDNAs (retrocopied genes) from potential cloned, exogenous cDNAs. cDNA-detector provides a mechanism to decontaminate the alignment from detected cDNAs. Simulation studies show that cDNA-detector is highly sensitive and specific, outperforming existing tools. We apply cDNA-detector to several highly cited public databases (TCGA, ENCODE, NCBI SRA) and show that contaminant genes appear in sequencing experiments where they lead to incorrect coverage peak calls. Conclusions cDNA-detector is a user-friendly and accurate tool to detect and remove cDNA contamination in NGS libraries. This two-step design reduces the risk of true variant removal since it allows for manual review of candidates. We find that contamination with intentionally and accidentally introduced cDNAs is an underappreciated problem even in widely-used consortium datasets, where it can lead to spurious results. Our findings highlight the importance of sensitive detection and removal of contaminant cDNA from NGS libraries before downstream analysis.
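A key signal behind this kind of detection is that cDNA (cloned or retrocopied) lacks introns, so contaminating reads pile up on exons while intronic coverage stays near background. The sketch below illustrates that idea only; it is not the cDNA-detector algorithm, and the threshold values are illustrative assumptions.

```python
# Flag a gene as a cDNA candidate when mean exon coverage greatly
# exceeds mean intron coverage (cDNA contributes no intronic reads).

def flag_cdna_candidate(exon_cov, intron_cov, ratio=10.0, min_exon=20.0):
    """exon_cov/intron_cov: mean read depths over exonic/intronic bases."""
    if exon_cov < min_exon:
        return False                      # too little signal to call
    return exon_cov >= ratio * max(intron_cov, 1e-9)

def scan_genes(coverage):
    """coverage: {gene: (exon_cov, intron_cov)} -> list of flagged genes."""
    return [g for g, (e, i) in coverage.items() if flag_cdna_candidate(e, i)]
```

A real tool would additionally inspect exon-junction-spanning reads and clipped alignments at exon boundaries before removing anything, which is why a manual-review step for candidates is valuable.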


2021
Author(s): Jung Ho Bae, Hyun Wook Han, Sun Young Yang, Gyuseon Song, Soonok Sa, ...

BACKGROUND Manual data extraction for colonoscopy quality indicators is time- and labor-intensive. Natural language processing (NLP), a computational linguistics technique, can automate reporting by extracting important clinical information from unstructured free-text reports. Applications of NLP-based information extraction include identifying clinical information such as adverse events and optimizing clinical work, such as quality control and patient management. OBJECTIVE We developed a natural language processing pipeline to manage Korean–English colonoscopy reports and evaluated its performance on automatically assessing adenoma detection rate (ADR), sessile serrated lesion detection rate (SDR), and surveillance interval (SI). METHODS The NLP tool was developed using 2,000 screening colonoscopy records (1,425 pathology reports) at Seoul National University Hospital Gangnam Center. Tests were performed on another 1,000 colonoscopy records to compare a manual review (MR) by five human annotators and the NLP pipeline. Additionally, data from 54,562 colonoscopies of 12,264 patients (aged ≥50 years) from 2010 to 2019 were analyzed using the NLP pipeline for colonoscopy quality indicators. RESULTS The overall accuracy on the test dataset was 95.8% (958/1000) for NLP vs. 93.1% (931/1000) for MR (P=.008). The mean total ADR in the test set was 46.8% (468/1000) with NLP vs. 47.2% (472/1000) with MR. The mean total SDR was 6.4% (64/1000) with NLP vs. 6.5% (65/1000) with MR. Calculating the SI revealed a similar performance between both methods. The mean ADR and SDR of the 25 endoscopists in the 10-year dataset were 42.0% (881/2098) and 3.3% (69/2098), respectively, indicating wide individual variability (16.3% (263/1615)–56.2% (1014/1936) in ADR and 0.4% (6/1615)–6.6% (124/1876) in SDR). The SI recommendation suggested a large difference in ADR and SDR based on the endoscopist's performance.
CONCLUSIONS The NLP pipeline can accurately and automatically calculate ADR, SDR, and SI from a multi-language colonoscopy report. It could be an important tool for improving colonoscopy quality and clinical decision support. CLINICALTRIAL This study was approved by the Institutional Review Board of SNUH (IRB 1909-093-670).
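Once the NLP step has turned free-text reports into structured flags, the quality-indicator arithmetic itself is simple: ADR is the share of screening colonoscopies with at least one adenoma, and SDR likewise for sessile serrated lesions. A minimal sketch, with hypothetical record fields:

```python
# Compute ADR and SDR from structured per-colonoscopy records.
# The 'adenoma'/'ssl' boolean fields are assumed outputs of the NLP step.

def detection_rates(records):
    """records: list of dicts with boolean 'adenoma' and 'ssl' flags."""
    n = len(records)
    if n == 0:
        return {"adr": 0.0, "sdr": 0.0}
    adr = sum(r["adenoma"] for r in records) / n
    sdr = sum(r["ssl"] for r in records) / n
    return {"adr": adr, "sdr": sdr}
```

Grouping the same computation by endoscopist is what exposes the wide per-provider variability the study reports.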


Author(s): Stuart Grabham, Emmanuel Manu

The construction industry has received long-standing criticism over its fragmented approach to supply chain management, adversarial relationships, and ongoing defects. Platform thinking has been observed in other industries as a phenomenon that offers reinvention from the traditional perspectives on the supply chain. In this study, a scoping review of platform thinking is presented. A database search identified 656 papers across 15 journals; these, along with 3 sources from a Google search and 12 sources from a manual review of reference lists, were reviewed in relation to platform thinking in construction. While many variants of platforms exist, the scoping review demonstrates a focus on product platforms that has historical precedents. This paper highlights the benefits of platform thinking whilst linking to the lessons of the past. This provides valuable insight for future implications of platform thinking. This paper contributes to the limited literature on platform thinking in the construction industry by linking historical examples with present and potential future investigation.


Author(s): David M Hill, Allison N Boyd, Sarah Zavala, Beatrice Adams, Melissa Reger, ...

Abstract Keeping abreast of current literature can be challenging, especially for practitioners caring for patients sustaining thermal or inhalation injury. Practitioners caring for patients with thermal injuries publish in a wide variety of journals, which further increases the complexity for those with resource limitations. Pharmacotherapy research continues to be a minority focus in primary literature. This review is a renewal of previous years' work to facilitate extraction and review of the most recent pharmacotherapy-centric studies in patients with thermal and inhalation injury. Sixteen geographically dispersed, board-certified pharmacists participated in the review. A MeSH-based, filtered search returned 1,536 manuscripts over the previous 2-year period. After manual review and exclusions, only 98 (6.4%) manuscripts were determined to have a potential impact on current pharmacotherapy practices and were included in the review. A summary of the 10 articles that scored highest is included in the review. Nearly half of the reviewed manuscripts were assessed to lack a significant impact on current practice. Despite an increase in published literature over the previous 2-year review, the focus and quality remain unchanged. There remains a need for investment in well-designed, high-impact, pharmacotherapy-pertinent research for patients sustaining thermal or inhalation injuries.


2021
Author(s): Vivek Ashok Rudrapatna, Yao-Wen Cheng, Colin Feuille, Arman Mosenia, Jonathan Shih, ...

Objectives: The use of external control arms to support claims of efficacy and safety is growing in interest among drug sponsors and regulators. However, experience with performing these kinds of studies for complex, immune-mediated diseases is limited. We sought to establish a method for creating an external control arm for Crohn's disease. Methods: We queried electronic health records databases and screened records at the University of California, San Francisco to identify patients meeting the major eligibility criteria of TRIDENT, a concurrent trial involving ustekinumab as a reference arm. Timepoints were defined to balance the tradeoff between missing disease activity and bias. We compared two imputation models by their impacts on cohort membership and outcomes. We compared the results of ascertaining disease activity using structured data algorithms against manual review. We used these data to estimate ustekinumab's real-world effectiveness. Results: Screening identified 183 patients. 30% of the cohort had missing baseline data. Two imputation models were tested and had similar effects on cohort definition and outcomes. Algorithms for ascertaining non-symptom-based elements of disease activity were similar in accuracy to manual review. The final cohort consisted of 56 patients. 34% of the cohort was in steroid-free clinical remission by week 24. Conclusions: Differences in the timing and goals of real-world encounters as compared to controlled studies directly translate into significant missing data and lost sample size. Efforts to improve real-world data capture and better align trial design with clinical practice may enable robust external control arm studies and improve trial efficiency.
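The study compared two imputation models for missing baseline disease activity. The actual models are not specified in the abstract, so the sketch below uses two generic strategies purely to illustrate what such a comparison operates on: carrying the last observed value forward versus filling gaps with the cohort mean.

```python
# Two illustrative imputation strategies for a per-patient series of
# disease-activity measurements, with None marking missing visits.

def impute_locf(values):
    """Last observation carried forward; None stays None before first obs."""
    out, last = [], None
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

def impute_mean(values):
    """Replace missing values with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed) if observed else None
    return [v if v is not None else mean for v in values]
```

Running a cohort definition under each strategy and comparing who enters the cohort, and what their outcomes look like, is the kind of sensitivity check the abstract describes.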


2021
pp. 1062-1075
Author(s): David H. Noyd, Amy Berkman, Claire Howell, Steve Power, Susan G. Kreissman, ...

PURPOSE Cardiovascular disease is a significant cause of late morbidity and mortality in survivors of childhood cancer. Clinical informatics tools could enhance provider adherence to echocardiogram guidelines for early detection of late-onset cardiomyopathy. METHODS Cancer registry data were linked to electronic health record data. Structured query language facilitated the construction of anthracycline-exposed cohorts at a single institution. Primary outcomes included the data quality from automatic anthracycline extraction, sensitivity of International Classification of Disease coding for heart failure, and adherence to echocardiogram guideline recommendations. RESULTS The final analytic cohort included 385 pediatric oncology patients diagnosed between July 1, 2013, and December 31, 2018, among whom 194 were classified as no anthracycline exposure, 143 had low anthracycline exposure (< 250 mg/m2), and 48 had high anthracycline exposure (≥ 250 mg/m2). Manual review of anthracycline exposure was highly concordant (95%) with the automatic extraction. Among the unexposed group, 15% had an anthracycline administered at an outside institution not captured by standard query language coding. Manual review of echocardiogram parameters and clinic notes yielded a sensitivity of 75%, specificity of 98%, and positive predictive value of 68% for International Classification of Disease coding of heart failure. For patients with anthracycline exposure, 78.5% (n = 62) were adherent to guideline recommendations for echocardiogram surveillance. There was a significant association between provider adherence and race and ethnicity (P = .047), and 50% of patients with Spanish as their primary language were adherent compared with 90% of patients with English as their primary language (P = .003).
CONCLUSION Extraction of treatment exposures from the electronic health record through clinical informatics and integration with cancer registry data represents a feasible approach to assess cardiovascular disease outcomes and adherence to guideline recommendations for survivors.
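The coding-validation numbers above (sensitivity, specificity, positive predictive value) all derive from a standard 2x2 confusion matrix against the manual-review gold standard. A minimal sketch of that computation:

```python
# Standard diagnostic metrics from a 2x2 confusion matrix, where the
# "test" is the ICD code and the reference standard is manual review.

def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # coded cases among true cases
        "specificity": tn / (tn + fp),   # correctly uncoded non-cases
        "ppv": tp / (tp + fp),           # true cases among coded cases
    }
```

Note that PPV, unlike sensitivity and specificity, depends on how common the condition is in the cohort, which is why a code can be highly specific yet still have a modest PPV when heart failure is rare.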


2021
Vol 11 (1)
Author(s): Neil S. Zheng, V. Eric Kerchberger, Victor A. Borza, H. Nur Eken, Joshua C. Smith, ...

AbstractThe MEDication-Indication (MEDI) knowledgebase has been utilized in research with electronic health records (EHRs) since its publication in 2013. To account for new drugs and terminology updates, we rebuilt MEDI to overhaul the knowledgebase for modern EHRs. Indications for prescribable medications were extracted using natural language processing and ontology relationships from six publicly available resources: RxNorm, Side Effect Resource 4.1, Mayo Clinic, WebMD, MedlinePlus, and Wikipedia. We compared the estimated precision and recall between the previous MEDI (MEDI-1) and the updated version (MEDI-2) with manual review. MEDI-2 contains 3031 medications and 186,064 indications. The MEDI-2 high precision subset (HPS) includes indications found within RxNorm or at least three other resources. MEDI-2 and MEDI-2 HPS contain 13% more medications and over triple the indications compared to MEDI-1 and MEDI-1 HPS, respectively. Manual review showed MEDI-2 achieves the same precision (0.60) with better recall (0.89 vs. 0.79) compared to MEDI-1. Likewise, MEDI-2 HPS had the same precision (0.92) and improved recall (0.65 vs. 0.55) than MEDI-1 HPS. The combination of MEDI-1 and MEDI-2 achieved a recall of 0.95. In updating MEDI, we present a more comprehensive medication-indication knowledgebase that can continue to facilitate applications and research with EHRs.


2021
Vol 11 (18), pp. 8578
Author(s): Yi-Cheng Huang, Ting-Hsueh Chuang, Yeong-Lin Lai

Trap-neuter-return (TNR) has become an effective solution to reduce the prevalence of stray animals. Due to the non-culling policy for stray cats and dogs since 2017, there is a great demand for the sterilization of cats and dogs in Taiwan. In 2020, Heart of Taiwan Animal Care (HOTAC) had more than 32,000 cases of neutered cats and dogs. HOTAC needs to take pictures to record the ears and excised organs of each neutered cat or dog from different veterinary hospitals. The correctness of the archived medical photos, taken at different shooting and imaging angles by different veterinary hospitals, must be carefully reviewed by human professionals. To reduce the cost of manual review, an ensemble of deep-learning YOLO detectors combined with a majority voting system can effectively identify TNR surgical images, reducing the required labor by 80%, with a mean average precision (mAP) exceeding 90%. Among the YOLO variants tested, YOLOv4 performed best, reaching an mAP of 91.99%; its results are integrated into the voting classification. Experimental results show that, compared with the previous manual work, the system can decrease the workload by more than 80%.
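The voting step that combines the detector ensemble can be sketched in a few lines. This shows only the majority-vote aggregation over per-model labels, not the YOLO detection itself; the label values are hypothetical.

```python
# Majority voting over the per-image labels produced by several models.
from collections import Counter

def majority_vote(labels):
    """labels: one predicted label per model -> the most common label."""
    counts = Counter(labels)
    return counts.most_common(1)[0][0]
```

Letting the ensemble outvote any single model is what makes the combined system more robust than its best individual detector on inconsistently photographed records.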

