Clinical Implications of Polymicrobial Synergism Effects on Antimicrobial Susceptibility

Pathogens ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 144
Author(s):  
William Little ◽  
Caroline Black ◽  
Allie Clinton Smith

With the development of next-generation sequencing technologies in recent years, it has been demonstrated that many human infectious processes, including chronic wounds, cystic fibrosis, and otitis media, are associated with a polymicrobial burden. Research has also demonstrated that polymicrobial infections tend to be associated with treatment failure and worse patient prognoses. Despite the importance of the polymicrobial nature of many infection states, the current clinical standard for determining antimicrobial susceptibility in the clinical laboratory is performed exclusively on unimicrobial suspensions. A growing body of research demonstrates that microorganisms in a polymicrobial environment can synergize their activities, with a variety of outcomes, including changes to their antimicrobial susceptibility through both resistance and tolerance mechanisms. This review highlights the current body of work describing polymicrobial synergism, both inter- and intra-kingdom, that impacts antimicrobial susceptibility. Given the importance of polymicrobial synergism in the clinical environment, a new system for determining antimicrobial susceptibility from polymicrobial infections may significantly impact patient treatment and outcomes.

2016 ◽  
Vol 140 (9) ◽  
pp. 958-975 ◽  
Author(s):  
Somak Roy ◽  
William A. LaFramboise ◽  
Yuri E. Nikiforov ◽  
Marina N. Nikiforova ◽  
Mark J. Routbort ◽  
...  

Context.—Next-generation sequencing (NGS) is revolutionizing the discipline of laboratory medicine, with a deep and direct impact on patient care. Although it empowers clinical laboratories with unprecedented genomic sequencing capability, NGS has brought along obvious and obtrusive informatics challenges. Bioinformatics and clinical informatics are separate disciplines with typically a small degree of overlap, but they have been brought together by the enthusiastic adoption of NGS in clinical laboratories. The result has been a collaborative environment for the development of novel informatics solutions. Sustaining NGS-based testing in a regulated clinical environment requires institutional support to build and maintain a practical, robust, scalable, secure, and cost-effective informatics infrastructure. Objective.—To discuss the novel NGS informatics challenges facing pathology laboratories today and offer solutions and future developments to address these obstacles. Data Sources.—The published literature pertaining to NGS informatics was reviewed. The coauthors, experts in the fields of molecular pathology, precision medicine, and pathology informatics, also contributed their experiences. Conclusions.—The boundary between bioinformatics and clinical informatics has significantly blurred with the introduction of NGS into clinical molecular laboratories. Next-generation sequencing technology and the data derived from these tests, if managed well in the clinical laboratory, will redefine the practice of medicine. In order to sustain this progress, adoption of smart computing technology will be essential. Computational pathologists will be expected to play a major role in rendering diagnostic and theranostic services by leveraging “Big Data” and modern computing tools.


Hematology ◽  
2018 ◽  
Vol 2018 (1) ◽  
pp. 286-300
Author(s):  
Rachel E. Rau ◽  
Mignon L. Loh

Abstract Over the past decade, there has been exponential growth in the number of genome sequencing studies performed across a spectrum of human diseases as sequencing technologies and analytic pipelines improve and costs decline. Pediatric hematologic malignancies have been no exception, with a multitude of next generation sequencing studies conducted on large cohorts of patients in recent years. These efforts have defined the mutational landscape of a number of leukemia subtypes and also identified germ-line genetic variants biologically and clinically relevant to pediatric leukemias. The findings have deepened our understanding of the biology of many childhood leukemias. Additionally, a number of recent discoveries may positively impact the care of pediatric leukemia patients through refinement of risk stratification, identification of targetable genetic lesions, and determination of risk for therapy-related toxicity. Although incredibly promising, many questions remain, including the biologic significance of identified genetic lesions and their clinical implications in the context of contemporary therapy. Importantly, the identification of germ-line mutations and variants with possible implications for members of the patient’s family raises challenging ethical questions. Here, we review emerging genomic data germane to pediatric hematologic malignancies.


2019 ◽  
Vol 14 (2) ◽  
pp. 157-163
Author(s):  
Majid Hajibaba ◽  
Mohsen Sharifi ◽  
Saeid Gorgin

Background: One of the pivotal challenges in today's genomic research is the fast processing of voluminous data, such as that generated by high-throughput Next-Generation Sequencing technologies. On the other hand, BLAST (Basic Local Alignment Search Tool), a long-established and renowned tool in Bioinformatics, has proven to be slow in this regard. Objective: To improve the performance of BLAST on voluminous data, we have applied a novel memory-aware technique to BLAST for faster parallel processing. Method: We have used a master-worker model alongside a memory-aware technique in which the master partitions the whole database into equal chunks, one chunk per worker, and each worker then further splits and formats its allocated chunk according to the size of its memory. Each worker searches its splits one by one against a list of queries. Results: We have chosen a list of queries of different lengths to run intensive searches in a huge database called UniProtKB/TrEMBL. Our experiments show a 20 percent improvement in performance when workers used our proposed memory-aware technique compared to when they were not memory aware. Experiments show an even higher performance improvement, approximately 50 percent, when we applied our memory-aware technique to mpiBLAST. Conclusion: We have shown that memory-awareness in formatting a bulky database when running BLAST can improve performance significantly, while preventing unexpected crashes in low-memory environments. Even though distributed computing already mitigates search time by partitioning and distributing database portions, our memory-aware technique additionally alleviates the negative effects of page faults on performance.
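The master-worker partitioning described above can be illustrated with a short sketch. This is a minimal illustration only, not the authors' implementation: the paper applies the technique inside BLAST/mpiBLAST, whereas the toy worker below merely counts substring hits, and names such as `partition_equally`, `split_by_memory`, and `available_memory_bytes` are hypothetical placeholders.

```python
# Illustrative sketch, assuming a toy in-memory "database" of sequences.
from multiprocessing import Pool

def partition_equally(records, n_workers):
    """Master step: split the whole database into equal chunks, one per worker."""
    chunk_size = (len(records) + n_workers - 1) // n_workers
    return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

def split_by_memory(chunk, available_memory_bytes):
    """Worker step: further split the chunk so each split fits in local memory."""
    splits, current, current_bytes = [], [], 0
    for record in chunk:
        record_bytes = len(record.encode("utf-8"))
        if current and current_bytes + record_bytes > available_memory_bytes:
            splits.append(current)
            current, current_bytes = [], 0
        current.append(record)
        current_bytes += record_bytes
    if current:
        splits.append(current)
    return splits

def run_search(args):
    """Each worker searches its memory-sized splits one by one against all queries."""
    chunk, queries, available_memory_bytes = args
    hits = 0
    for split in split_by_memory(chunk, available_memory_bytes):
        for query in queries:  # search every split against the whole query list
            hits += sum(query in sequence for sequence in split)
    return hits

if __name__ == "__main__":
    database = ["ACGTACGTGA", "TTGACCGTAA", "GGCATCGATC", "ACGTTTGACC"]  # toy sequences
    queries = ["ACGT", "TTGA"]
    n_workers = 2
    chunks = partition_equally(database, n_workers)
    with Pool(n_workers) as pool:
        results = pool.map(run_search, [(chunk, queries, 1_000_000) for chunk in chunks])
    print("total hits:", sum(results))
```

The design point the sketch preserves is that the equal-chunk division is decided once by the master, while the memory-sized sub-splits are decided locally by each worker, so a worker with little memory searches more, smaller splits rather than page-faulting on one oversized formatted database.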


2019 ◽  
Vol 9 (1) ◽  
pp. 3 ◽  
Author(s):  
Jai Patel ◽  
Mei Fong ◽  
Megan Jagosky

The 5-year survival probability for patients with metastatic colorectal cancer has not drastically changed over the last several years, nor has the backbone chemotherapy in first-line disease. Nevertheless, newer targeted therapies and immunotherapies have been approved, primarily in the refractory setting, and these appear to benefit only a small proportion of patients. Until recently, rat sarcoma (RAS) mutations remained the only genomic biomarker to assist with therapy selection in metastatic colorectal cancer. Next-generation sequencing has unveiled many more potentially powerful predictive genomic markers of therapy response. Importantly, there are also clinical and physiologic predictive or prognostic biomarkers, such as tumor sidedness. Variations in germline pharmacogenomic biomarkers have demonstrated usefulness in determining response or risk of toxicity, which can be critical in defining dose intensity. This review outlines such biomarkers and summarizes their clinical implications for the treatment of colorectal cancer. It is critical that clinicians understand which biomarkers are clinically validated for use in practice and how to act on such test results.


2020 ◽  
Vol 36 (12) ◽  
pp. 3669-3679 ◽  
Author(s):  
Can Firtina ◽  
Jeremie S Kim ◽  
Mohammed Alser ◽  
Damla Senol Cali ◽  
A Ercument Cicek ◽  
...  

Abstract Motivation Third-generation sequencing technologies can sequence long reads that contain as many as 2 million base pairs. These long reads are used to construct an assembly (i.e. the subject’s genome), which is further used in downstream genome analysis. Unfortunately, third-generation sequencing technologies have high sequencing error rates and a large proportion of base pairs in these long reads is incorrectly identified. These errors propagate to the assembly and affect the accuracy of genome analysis. Assembly polishing algorithms minimize such error propagation by polishing or fixing errors in the assembly by using information from alignments between reads and the assembly (i.e. read-to-assembly alignment information). However, current assembly polishing algorithms can only polish an assembly using reads from either a certain sequencing technology or a small assembly. Such technology-dependency and assembly-size dependency require researchers to (i) run multiple polishing algorithms and (ii) use small chunks of a large genome to use all available readsets and polish large genomes, respectively. Results We introduce Apollo, a universal assembly polishing algorithm that scales well to polish an assembly of any size (i.e. both large and small genomes) using reads from all sequencing technologies (i.e. second- and third-generation). Our goal is to provide a single algorithm that uses read sets from all available sequencing technologies to improve the accuracy of assembly polishing and that can polish large genomes. Apollo (i) models an assembly as a profile hidden Markov model (pHMM), (ii) uses read-to-assembly alignment to train the pHMM with the Forward–Backward algorithm and (iii) decodes the trained model with the Viterbi algorithm to produce a polished assembly. Our experiments with real readsets demonstrate that Apollo is the only algorithm that (i) uses reads from any sequencing technology within a single run and (ii) scales well to polish large assemblies without splitting the assembly into multiple parts. Availability and implementation Source code is available at https://github.com/CMU-SAFARI/Apollo. Supplementary information Supplementary data are available at Bioinformatics online.
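The train-then-decode pattern Apollo uses can be sketched with a deliberately tiny hidden Markov model. This is not Apollo's profile HMM of the assembly, whose states, transitions, and emissions are built from the assembly and its read alignments; the two-state model, its probabilities, and the toy read below are assumptions chosen only to show one Forward-Backward pass producing per-position state posteriors and a Viterbi decode producing the most likely state path.

```python
# Minimal sketch, assuming a toy two-state HMM ('match' vs. 'error') over bases.
import numpy as np

BASES = "ACGT"
obs = np.array([BASES.index(b) for b in "ACGGTAC"])  # toy read aligned to the assembly

pi = np.array([0.9, 0.1])                  # initial state probabilities
A = np.array([[0.95, 0.05],                # transition probabilities
              [0.50, 0.50]])
B = np.array([[0.85, 0.05, 0.05, 0.05],    # 'match' heavily favors one base (toy stand-in for a profile)
              [0.25, 0.25, 0.25, 0.25]])   # 'error' emits uniformly

def forward_backward(obs, pi, A, B):
    """One Forward-Backward pass: posterior state probabilities per position."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

def viterbi(obs, pi, A, B):
    """Viterbi decoding: most likely state path, in log space to avoid underflow."""
    T, N = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA       # scores[i, j]: best path ending in i, then moving to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

posteriors = forward_backward(obs, pi, A, B)  # would drive a Baum-Welch update during training
print("state posteriors:\n", np.round(posteriors, 3))
print("most likely state path:", viterbi(obs, pi, A, B))
```

In a full Baum-Welch training loop, these posteriors (and the analogous pairwise expectations) would re-estimate the transition and emission probabilities before decoding; the sketch stops after a single pass to stay short.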


2014 ◽  
Vol 563 ◽  
pp. 379-383 ◽  
Author(s):  
Yue Yang ◽  
Xin Jun Du ◽  
Ping Li ◽  
Bin Liang ◽  
Shuo Wang

Increasing attention has been paid to filamentous fungal evolution, metabolic pathways, and gene functional analysis via genome sequencing. However, published methods for the extraction of fungal genomic DNA are usually costly or inefficient. In the present study, we compared five different DNA extraction protocols: a CTAB protocol with some modifications, a benzyl chloride protocol with some modifications, a snailase protocol, an SDS protocol, and extraction with the E.Z.N.A. Fungal DNA Maxi Kit (Omega Bio-Tek, USA). The CTAB method, which we established with modifications to several steps, is not only economical and convenient but also reliably yields large amounts of highly pure genomic DNA from Monascus purpureus suitable for next-generation sequencing (Illumina and 454).


2008 ◽  
Vol 18 (10) ◽  
pp. 1638-1642 ◽  
Author(s):  
D. R. Smith ◽  
A. R. Quinlan ◽  
H. E. Peckham ◽  
K. Makowsky ◽  
W. Tao ◽  
...  

2011 ◽  
Vol 16 (11-12) ◽  
pp. 512-519 ◽  
Author(s):  
Peter M. Woollard ◽  
Nalini A.L. Mehta ◽  
Jessica J. Vamathevan ◽  
Stephanie Van Horn ◽  
Bhushan K. Bonde ◽  
...  
