Dissection of Molecular Processes and Genetic Architecture Underlying Iron and Zinc Homeostasis for Biofortification: From Model Plants to Common Wheat

2020 ◽  
Vol 21 (23) ◽  
pp. 9280
Author(s):  
Jingyang Tong ◽  
Mengjing Sun ◽  
Yue Wang ◽  
Yong Zhang ◽  
Awais Rasheed ◽  
...  

The micronutrients iron (Fe) and zinc (Zn) are not only essential for plant survival and proliferation but are also crucial for human health. Increasing Fe and Zn levels in the edible parts of plants, known as biofortification, is seen as a sustainable approach to alleviating micronutrient deficiency in humans. Wheat, as one of the leading staple foods worldwide, is a prioritized choice for Fe and Zn biofortification. To date, however, only limited molecular and physiological mechanisms of Fe and Zn homeostasis in wheat have been elucidated. The expanding molecular understanding of Fe and Zn homeostasis in model plants is providing invaluable resources for biofortifying wheat. Recent advances in next-generation sequencing (NGS) technologies, coupled with improved wheat genome assemblies and high-throughput genotyping platforms, have initiated a revolution in the resources and approaches available for wheat genetic investigation and breeding. Here, we summarize the molecular processes and genes involved in Fe and Zn homeostasis in the model plants Arabidopsis and rice, identify their orthologs in the wheat genome, and relate them to known wheat Fe/Zn quantitative trait loci (QTL) based on physical positions. This study provides the first inventory of the genes regulating grain Fe and Zn homeostasis in wheat, which will benefit gene discovery and breeding and thereby accelerate the release of Fe- and Zn-enriched wheat varieties.
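Relating candidate genes to QTL by physical position, as the abstract describes, reduces to an interval-overlap check. A minimal sketch follows; all gene names, QTL names and coordinates are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of relating candidate genes to QTL by physical position.
# All names and coordinates below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Interval:
    chrom: str   # wheat chromosome, e.g. "7A"
    start: int   # physical start position (bp)
    end: int     # physical end position (bp)

# Hypothetical wheat orthologs of model-plant Fe/Zn homeostasis genes.
genes = {
    "TaVIT2-7A": Interval("7A", 120_000_000, 120_004_000),
    "TaNAS1-6B": Interval("6B", 85_000_000, 85_003_500),
}

# Hypothetical confidence intervals of published grain Fe/Zn QTL.
qtl = {
    "QGFe.example-7A": Interval("7A", 118_500_000, 123_000_000),
    "QGZn.example-2D": Interval("2D", 40_000_000, 47_000_000),
}

def overlaps(a: Interval, b: Interval) -> bool:
    """True if two intervals lie on the same chromosome and overlap."""
    return a.chrom == b.chrom and a.start <= b.end and b.start <= a.end

for gene_id, g in genes.items():
    hits = [name for name, iv in qtl.items() if overlaps(g, iv)]
    print(gene_id, "->", hits or "no QTL overlap")
```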

Author(s):  
Xavier Farré ◽  
Nino Spataro ◽  
Frederic Haziza ◽  
Jordi Rambla ◽  
Arcadi Navarro

Motivation: Association studies based on SNP arrays and next-generation sequencing technologies have enabled the discovery of thousands of genetic loci related to human diseases. Nevertheless, their biological interpretation is still elusive and their medical applications limited. Recently, various tools have been developed to help bridge the gap between genomes and phenomes. To our knowledge, however, none of these tools allows users to retrieve the phenome-wide list of genetic variants that may be linked to a given disease or to visually explore the joint genetic architecture of different pathologies.

Results: We present the Genome-Phenome Explorer (GePhEx), a web tool easing the visual exploration of phenotypic relationships supported by genetic evidence. GePhEx is primarily based on a thorough analysis of linkage disequilibrium between disease-associated variants, and it also considers relationships based on genes, pathways or drug targets, leveraging publicly available variant-disease associations to detect potential relationships between diseases. We demonstrate that GePhEx retrieves well-known relationships as well as novel ones, and that it might thus help shed light on the pathophysiological mechanisms underlying complex diseases. As a case study, we investigate the potential relationship between schizophrenia and lung cancer, first detected using GePhEx, and provide further evidence supporting a functional link between them.

Availability and implementation: GePhEx is available at https://gephex.ega-archive.org/.

Supplementary information: Supplementary data are available at Bioinformatics online.
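GePhEx's core criterion, linkage disequilibrium between disease-associated variants, is conventionally measured with the r² statistic, r² = D² / (pA(1−pA) pB(1−pB)) with D = pAB − pA·pB. A minimal sketch computing it from phased haplotypes follows; the abstract does not describe GePhEx's pipeline at this level of detail, so this is only the textbook calculation.

```python
# Textbook linkage-disequilibrium (r^2) between two biallelic variants,
# computed from phased haplotypes. Illustrative only; GePhEx's actual
# pipeline is not described at this level in the abstract.

def ld_r2(haplotypes: list[tuple[int, int]]) -> float:
    """r^2 between two variants; each haplotype is (allele_at_A, allele_at_B), coded 0/1."""
    n = len(haplotypes)
    p_a = sum(a for a, _ in haplotypes) / n          # freq of allele 1 at variant A
    p_b = sum(b for _, b in haplotypes) / n          # freq of allele 1 at variant B
    p_ab = sum(1 for a, b in haplotypes if a == 1 and b == 1) / n  # joint freq
    d = p_ab - p_a * p_b                             # disequilibrium coefficient D
    denom = p_a * (1 - p_a) * p_b * (1 - p_b)
    return d * d / denom if denom > 0 else 0.0

# Two variants in strong LD (r^2 near 1) would then support a link
# between the diseases each variant is associated with.
haps = [(1, 1), (1, 1), (0, 0), (0, 0), (1, 1), (0, 1)]
print(round(ld_r2(haps), 3))  # prints 0.5 for this toy sample
```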


2019 ◽  
Vol 14 (2) ◽  
pp. 157-163
Author(s):  
Majid Hajibaba ◽  
Mohsen Sharifi ◽  
Saeid Gorgin

Background: One of the pivotal challenges in today's genomic research is the fast processing of voluminous data such as that generated by high-throughput next-generation sequencing technologies. BLAST (Basic Local Alignment Search Tool), a long-established and renowned tool in bioinformatics, has proven to be remarkably slow in this regard.

Objective: To improve the performance of BLAST on voluminous data, we applied a novel memory-aware technique to BLAST for faster parallel processing.

Method: We use a master-worker model together with a memory-aware technique: the master partitions the whole database into equal chunks, one per worker, and each worker then further splits and formats its allocated chunk according to the size of its own memory. Each worker searches its split data one by one against a list of queries.

Results: We chose a list of queries of different lengths to run intensive searches against the huge UniProtKB/TrEMBL database. Our experiments show a 20 percent performance improvement when workers used our proposed memory-aware technique compared to when they were not memory-aware. Experiments show an even higher improvement, approximately 50 percent, when we applied the memory-aware technique to mpiBLAST.

Conclusion: We have shown that memory-awareness in formatting a bulky database when running BLAST can improve performance significantly while preventing unexpected crashes in low-memory environments. Whereas distributed computing mitigates search time by partitioning and distributing portions of the database, our memory-aware technique alleviates the negative effects of page faults on performance.
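The memory-aware splitting described in the Method section lends itself to a short illustration. The sketch below is a single-process simplification with hypothetical function names and a fixed memory budget; the paper's actual MPI-based master-worker implementation is not shown in the abstract.

```python
# Minimal sketch of the memory-aware idea: the master deals out equal
# chunks, and each worker re-splits its chunk to fit its own RAM so that
# formatting and searching never trigger heavy paging. Names are
# illustrative; the paper's MPI master-worker machinery is omitted.

def master_partition(records: list[str], n_workers: int) -> list[list[str]]:
    """Split the whole database into equal chunks, one per worker."""
    size = -(-len(records) // n_workers)  # ceiling division
    return [records[i:i + size] for i in range(0, len(records), size)]

def worker_split(chunk: list[str], mem_budget_bytes: int) -> list[list[str]]:
    """Re-split a chunk so each piece fits the worker's memory budget."""
    pieces, current, used = [], [], 0
    for rec in chunk:
        rec_size = len(rec.encode())
        if current and used + rec_size > mem_budget_bytes:
            pieces.append(current)   # current piece is full; start a new one
            current, used = [], 0
        current.append(rec)
        used += rec_size
    if current:
        pieces.append(current)
    return pieces

# Each worker would then format and search its pieces one by one:
db = [f">seq{i}\n" + "ACGT" * 25 for i in range(100)]
for chunk in master_partition(db, n_workers=4):
    for piece in worker_split(chunk, mem_budget_bytes=2_000):
        pass  # format piece, then run the BLAST queries against it
```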


Pathogens ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 144
Author(s):  
William Little ◽  
Caroline Black ◽  
Allie Clinton Smith

With the development of next-generation sequencing technologies in recent years, it has been demonstrated that many human infectious processes, including chronic wounds, cystic fibrosis, and otitis media, are associated with a polymicrobial burden. Research has also demonstrated that polymicrobial infections tend to be associated with treatment failure and worse patient prognoses. Despite the importance of the polymicrobial nature of many infection states, the current clinical standard for determining antimicrobial susceptibility in the clinical laboratory is performed exclusively on unimicrobial suspensions. A growing body of research demonstrates that microorganisms in a polymicrobial environment can synergize their activities, with a variety of outcomes including changes to their antimicrobial susceptibility through both resistance and tolerance mechanisms. This review highlights the current body of work describing polymicrobial synergism, both inter- and intra-kingdom, impacting antimicrobial susceptibility. Given the importance of polymicrobial synergism in the clinical environment, a new system for determining antimicrobial susceptibility from polymicrobial infections may significantly impact patient treatment and outcomes.


2020 ◽  
Vol 36 (12) ◽  
pp. 3669-3679 ◽  
Author(s):  
Can Firtina ◽  
Jeremie S Kim ◽  
Mohammed Alser ◽  
Damla Senol Cali ◽  
A Ercument Cicek ◽  
...  

Motivation: Third-generation sequencing technologies can produce long reads that contain as many as 2 million base pairs. These long reads are used to construct an assembly (i.e. the subject's genome), which is further used in downstream genome analysis. Unfortunately, third-generation sequencing technologies have high sequencing error rates, and a large proportion of the base pairs in these long reads are incorrectly identified. These errors propagate to the assembly and affect the accuracy of genome analysis. Assembly polishing algorithms minimize such error propagation by polishing or fixing errors in the assembly using information from alignments between the reads and the assembly (i.e. read-to-assembly alignment information). However, current assembly polishing algorithms can only polish an assembly using reads from a single sequencing technology, or can only polish a small assembly. These technology and assembly-size dependencies force researchers to (i) run multiple polishing algorithms to use all available read sets and (ii) split a large genome into small chunks in order to polish it.

Results: We introduce Apollo, a universal assembly polishing algorithm that scales well to polish an assembly of any size (i.e. both large and small genomes) using reads from any sequencing technology (i.e. second- and third-generation). Our goal is to provide a single algorithm that uses read sets from all available sequencing technologies to improve the accuracy of assembly polishing and that can polish large genomes. Apollo (i) models an assembly as a profile hidden Markov model (pHMM), (ii) uses read-to-assembly alignments to train the pHMM with the Forward-Backward algorithm, and (iii) decodes the trained model with the Viterbi algorithm to produce a polished assembly. Our experiments with real read sets demonstrate that Apollo is the only algorithm that (i) uses reads from any sequencing technology within a single run and (ii) scales well to polish large assemblies without splitting the assembly into multiple parts.

Availability and implementation: Source code is available at https://github.com/CMU-SAFARI/Apollo.

Supplementary information: Supplementary data are available at Bioinformatics online.
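The decoding step Apollo performs is standard Viterbi dynamic programming. The toy sketch below illustrates the recurrence on a generic two-state HMM; it is not Apollo's actual pHMM, which has match, insertion and deletion states for every assembly position.

```python
# Toy Viterbi decoder, illustrating the final step of decoding a trained
# HMM into the most likely state (i.e. base) sequence. Apollo's real pHMM
# is far larger; this generic version only shows the DP recurrence.

import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Return the most likely state path for the observation sequence."""
    # dp[s] = best log-probability of any path ending in state s
    dp = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    back = []  # back[t][s] = best predecessor of s at step t+1
    for o in obs[1:]:
        ptr, nxt = {}, {}
        for s in states:
            prev, score = max(
                ((p, dp[p] + log_trans[p][s]) for p in states),
                key=lambda x: x[1],
            )
            nxt[s] = score + log_emit[s][o]
            ptr[s] = prev
        dp, back = nxt, back + [ptr]
    last = max(dp, key=dp.get)     # best final state
    path = [last]
    for ptr in reversed(back):     # trace back through the pointers
        path.append(ptr[path[-1]])
    return path[::-1]

# Tiny example: two hidden "true base" states, noisy observed bases.
# The sticky transitions smooth out the lone erroneous 'C' observation.
lg = math.log
states = ["A", "C"]
start = {"A": lg(0.5), "C": lg(0.5)}
trans = {"A": {"A": lg(0.9), "C": lg(0.1)}, "C": {"A": lg(0.1), "C": lg(0.9)}}
emit = {"A": {"A": lg(0.8), "C": lg(0.2)}, "C": {"A": lg(0.2), "C": lg(0.8)}}
print(viterbi("AACA", states, start, trans, emit))  # -> ['A', 'A', 'A', 'A']
```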


2014 ◽  
Vol 563 ◽  
pp. 379-383 ◽  
Author(s):  
Yue Yang ◽  
Xin Jun Du ◽  
Ping Li ◽  
Bin Liang ◽  
Shuo Wang

Increasing attention has been paid to filamentous fungal evolution, metabolic pathways and gene function analysis via genome sequencing. However, published methods for the extraction of fungal genomic DNA are usually costly or inefficient. In the present study, we compared five DNA extraction protocols: a CTAB protocol with some modifications, a benzyl chloride protocol with some modifications, a snailase protocol, an SDS protocol, and extraction with the E.Z.N.A. Fungal DNA Maxi Kit (Omega Bio-Tek, USA). The CTAB method we established, with modifications to several steps, is not only economical and convenient but can also be used reliably to obtain large amounts of highly pure genomic DNA from Monascus purpureus for sequencing with next-generation sequencing technologies (Illumina and 454).


2008 ◽  
Vol 18 (10) ◽  
pp. 1638-1642 ◽  
Author(s):  
D. R. Smith ◽  
A. R. Quinlan ◽  
H. E. Peckham ◽  
K. Makowsky ◽  
W. Tao ◽  
...  

2011 ◽  
Vol 16 (11-12) ◽  
pp. 512-519 ◽  
Author(s):  
Peter M. Woollard ◽  
Nalini A.L. Mehta ◽  
Jessica J. Vamathevan ◽  
Stephanie Van Horn ◽  
Bhushan K. Bonde ◽  
...  

Author(s):  
Giulio Caravagna

Cancers progress through the accumulation of somatic mutations which accrue during tumour evolution, allowing some cells to proliferate in an uncontrolled fashion. This growth process is intimately related to latent evolutionary forces moulding the genetic and epigenetic composition of tumour subpopulations. Understanding cancer therefore requires understanding these selective pressures. The widespread adoption of next-generation sequencing technologies opens up the possibility of measuring the molecular profiles of cancers at multiple resolutions, across one or multiple patients. In this review we discuss how cancer genome sequencing data from a single tumour can be used to understand these evolutionary forces, giving an overview of the mathematical models and inferential methods adopted in the field of cancer evolution.
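As one concrete example of the inferential methods such reviews survey, neutral tumour evolution predicts that the cumulative number of mutations M(f) with variant allele frequency above f grows linearly in 1/f (Williams et al., 2016). The sketch below fits that relationship on synthetic data; it ignores tumour purity, copy number and sequencing noise, all of which real analyses must model.

```python
# Sketch of a VAF-spectrum neutrality test: under neutral growth, the
# cumulative mutation count M(f) is linear in 1/f (Williams et al. 2016).
# Synthetic data only; real analyses must correct for purity, copy
# number and sequencing noise.

import random

random.seed(0)

# Simulate a neutral-like VAF spectrum: density ~ 1/f^2 on [fmin, fmax].
fmin, fmax = 0.1, 0.25
vafs = []
while len(vafs) < 2000:
    f = random.uniform(fmin, fmax)
    if random.random() < (fmin / f) ** 2:  # rejection sampling against 1/f^2
        vafs.append(f)

# Cumulative count M(f) = number of mutations with VAF >= f, on a grid.
grid = [fmin + i * (fmax - fmin) / 20 for i in range(21)]
x = [1.0 / f for f in grid]                     # predictor: 1/f
y = [sum(v >= f for v in vafs) for f in grid]   # response: M(f)

# Least-squares fit of M(f) against 1/f; high R^2 supports neutrality.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
ss_res = sum((yi - (my + slope * (xi - mx))) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - my) ** 2 for yi in y)
print("R^2 =", round(1 - ss_res / ss_tot, 3))  # near 1 => consistent with neutrality
```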

