A More Complete Story of the Genetic Bases of Resistance to the Rice Hoja Blanca Virus

Author(s):  
Alexander Silva ◽  
María Elker Montoya ◽  
Constanza Quintero ◽  
Juan Cuasquer ◽  
Joe Tohme ◽  
...  

Abstract Rice hoja blanca is one of the most serious diseases in rice-growing areas of the tropical Americas. Its causal agent is the Rice hoja blanca virus (RHBV), transmitted by the planthopper Tagosodes orizicolus Müir. Genetic resistance is the most effective and environmentally friendly way of controlling the disease. So far, only one major quantitative trait locus (QTL) of Oryza sativa ssp. japonica origin, qHBV4.1, which alters the incidence of virus symptoms in two Colombian cultivars, has been reported. This resistance has already begun to break down, stressing the urgent need to diversify the sources of resistance. In the present study we searched for new QTLs of O. sativa ssp. indica origin associated with RHBV resistance. We used four F2:3 segregating populations derived from resistant indica varieties crossed with a highly susceptible japonica pivot parent. Besides the standard method for measuring disease incidence, we developed a new method based on computer-assisted image processing to determine the affected leaf area (ALA) as a measure of symptom severity. Based on the disease severity and incidence scores in the F3 families under greenhouse conditions, and on SNP genotyping of the F2 individuals, we identified four new indica QTLs for RHBV resistance on rice chromosomes 4, 6 and 11, namely qHBV4.2WAS208, qHBV6.1PTB25, qHBV11.1 and qHBV11.2. We also confirmed the wide-range action of qHBV4.1. Among the five QTLs, qHBV4.1 and qHBV11.1 had the largest effects on incidence and severity, respectively. These results provide a more complete understanding of the genetic bases of RHBV resistance in the cultivated rice gene pool and can be used to develop marker-aided breeding strategies to improve RHBV resistance. The power of joint and meta-analyses allowed precise mapping and candidate gene identification, providing the basis for positional cloning of the two major QTLs qHBV4.1 and qHBV11.1.
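The ALA measurement above can be illustrated with a minimal pixel-classification sketch. The paper does not describe its image-processing pipeline in detail, so the color thresholds, the pixel classes, and the tiny synthetic image below are purely illustrative assumptions.

```python
# Hedged sketch: estimate affected leaf area (ALA) as the percentage of
# leaf (non-background) pixels classified as symptomatic. Thresholds are
# illustrative assumptions, not the authors' calibrated values.

def classify_pixel(r, g, b):
    """Very rough pixel classes: background, healthy (green), affected (pale)."""
    if r < 40 and g < 40 and b < 40:
        return "background"
    if g > r and g > b:            # green-dominant -> healthy tissue
        return "healthy"
    return "affected"              # pale/yellowish hoja blanca lesions

def affected_leaf_area(image):
    """Return ALA as percent of leaf pixels that are affected."""
    healthy = affected = 0
    for row in image:
        for (r, g, b) in row:
            c = classify_pixel(r, g, b)
            if c == "healthy":
                healthy += 1
            elif c == "affected":
                affected += 1
    leaf = healthy + affected
    return 100.0 * affected / leaf if leaf else 0.0

# Tiny synthetic 2x3 image: 1 background, 3 green, and 2 pale pixels
img = [
    [(0, 0, 0), (30, 120, 40), (200, 190, 150)],
    [(25, 110, 35), (35, 130, 45), (210, 200, 160)],
]
print(round(affected_leaf_area(img), 1))  # 2 affected of 5 leaf pixels -> 40.0
```

In a real pipeline the same ratio would be computed over segmented leaf images rather than a hand-made grid.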

Author(s):  
Munazza Fatima ◽  
Kara J. O’Keefe ◽  
Wenjia Wei ◽  
Sana Arshad ◽  
Oliver Gruebner

The outbreak of SARS-CoV-2 in Wuhan, China in late December 2019 became the harbinger of the COVID-19 pandemic. During the pandemic, geospatial techniques, such as modeling and mapping, have helped in disease pattern detection. Here we provide a synthesis of these techniques and the associated findings in relation to COVID-19 and its geographic, environmental, and socio-demographic characteristics, following the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) methodology. We searched PubMed for relevant articles and discussed the results separately for three categories: disease mapping, exposure mapping, and spatial epidemiological modeling. The majority of studies were ecological in nature and primarily carried out in China, Brazil, and the USA. The most common spatial methods used were clustering, hotspot analysis, the space-time scan statistic, and regression modeling. Researchers used a wide range of spatial and statistical software to apply spatial analysis for disease mapping, exposure mapping, and epidemiological modeling. Factors limiting the use of these spatial techniques were the unavailability and bias of COVID-19 data, along with the scarcity of fine-scaled demographic, environmental, and socio-economic data, which prevented most researchers from exploring causal relationships between COVID-19 and its potential influencing factors. Our review identified geospatial analysis in COVID-19 research and highlighted current trends and research gaps. Since most of the studies we found centered on Asia and the Americas, there is a need for more comparable spatial studies using geographically fine-scaled data in other areas of the world.
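As an illustration of the spatial-autocorrelation side of the disease mapping mentioned above, here is a minimal pure-Python sketch of global Moran's I with binary adjacency weights. The district layout and incidence rates below are made up; real studies would use case rates and shapefile-derived adjacency.

```python
# Hedged sketch of global Moran's I, a common statistic in disease mapping
# for testing whether high-incidence areas cluster spatially.

def morans_i(values, neighbors):
    """values: dict area -> rate; neighbors: dict area -> list of adjacent areas."""
    n = len(values)
    mean = sum(values.values()) / n
    dev = {a: v - mean for a, v in values.items()}
    num = sum(dev[i] * dev[j] for i in neighbors for j in neighbors[i])
    w_sum = sum(len(ns) for ns in neighbors.values())   # binary weights
    den = sum(d * d for d in dev.values())
    return (n / w_sum) * (num / den)

# Four hypothetical districts along a line; high rates cluster at one end,
# so Moran's I comes out clearly positive.
values = {0: 10.0, 1: 8.0, 2: 2.0, 3: 0.0}
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(round(morans_i(values, neighbors), 3))  # -> 0.412
```

Values near +1 indicate clustering of similar rates, values near 0 spatial randomness, and negative values a checkerboard-like pattern.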


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Spyridoula Vazou ◽  
Collin A. Webster ◽  
Gregory Stewart ◽  
Priscila Candal ◽  
Cate A. Egan ◽  
...  

Abstract Background/Objective Movement integration (MI) involves infusing physical activity into normal classroom time. A wide range of MI interventions have succeeded in increasing children’s participation in physical activity. However, no previous research has attempted to unpack the various MI intervention approaches. Therefore, this study aimed to systematically review, qualitatively analyze, and develop a typology of MI interventions conducted in primary/elementary school settings. Subjects/Methods Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed to identify published MI interventions. Irrelevant records were removed first by title, then by abstract, and finally by full texts of articles, resulting in 72 studies being retained for qualitative analysis. A deductive approach, using previous MI research as an a priori analytic framework, alongside inductive techniques, was used to analyze the data. Results Four types of MI interventions were identified and labeled based on their design: student-driven, teacher-driven, researcher-teacher collaboration, and researcher-driven. Each type was further refined based on the MI strategies (movement breaks, active lessons, other: opening activity, transitions, reward, awareness), the level of intrapersonal and institutional support (training, resources), and the delivery (dose, intensity, type, fidelity). Nearly half of the interventions were researcher-driven, which may undermine the sustainability of MI as a routine practice by teachers in schools. An imbalance is evident in the MI strategies, with transitions, opening and awareness activities, and rewards having been studied only to a limited extent. Delivery should be further examined with a strong focus on reporting fidelity. Conclusions There are distinct approaches that are most often employed to promote the use of MI, and these approaches may often lack a minimum standard for reporting MI intervention details. This typology may be useful for effectively translating the evidence into practice in real-life settings, to better understand and study MI interventions.


Genetics ◽  
2000 ◽  
Vol 156 (1) ◽  
pp. 457-467 ◽  
Author(s):  
Z W Luo ◽  
S H Tao ◽  
Z-B Zeng

Abstract Three approaches are proposed in this study for detecting or estimating linkage disequilibrium between a polymorphic marker locus and a locus affecting quantitative genetic variation, using samples from random-mating populations. It is shown that, over a wide range of circumstances, the disequilibrium may be detected with a power of 80% by using phenotypic records and marker genotypes of a few hundred individuals. Comparison of the ANOVA and regression methods in this article to the transmission disequilibrium test (TDT) shows that, given the genetic variance explained by the trait locus, the power of the TDT depends on the trait allele frequency, whereas the power of the ANOVA and regression analyses is relatively independent of the allele frequency. The TDT method is more powerful when the trait allele frequency is low, but much less powerful when it is high. The likelihood analysis provides reliable estimation of the model parameters when the QTL variance is at least 10% of the phenotypic variance and a sample size of a few hundred is used. Potential use of these estimates in mapping the trait locus is also discussed.
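A crude Monte Carlo analogue of the power claim above can be sketched as follows. The parameter values (n = 300, QTL heritability 10%, allele frequency 0.5, complete marker-trait LD) and the simple correlation test are illustrative assumptions, not the paper's analytical derivations.

```python
# Hedged sketch: simulate a biallelic marker associated with a QTL that
# explains ~10% of phenotypic variance, then estimate the power of a
# marker-trait correlation test whose 5% threshold is itself simulated
# under the null (marker unlinked to the trait).
import random

random.seed(1)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def simulate_sample(n, p, h2):
    """Additive biallelic QTL in HWE; allele effect scaled to explain h2."""
    a = (h2 / (2 * p * (1 - p) * (1 - h2))) ** 0.5 if h2 > 0 else 0.0
    g = [(random.random() < p) + (random.random() < p) for _ in range(n)]
    y = [a * gi + random.gauss(0, 1) for gi in g]
    return g, y

n, p, h2, reps = 300, 0.5, 0.10, 200

# Empirical 5% threshold on |r| under the null (h2 = 0)
null = sorted(abs(pearson(*simulate_sample(n, p, 0.0))) for _ in range(reps))
threshold = null[int(0.95 * reps)]

# Power: fraction of simulated samples whose |r| clears the null threshold
power = sum(abs(pearson(*simulate_sample(n, p, h2))) >= threshold
            for _ in range(reps)) / reps
print(f"5% |r| threshold: {threshold:.3f}, estimated power: {power:.2f}")
```

With a QTL at 10% of the phenotypic variance and a few hundred individuals, the estimated power comes out very high, consistent with the abstract's 80%-power claim for detectable disequilibrium.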


Genetics ◽  
1999 ◽  
Vol 153 (2) ◽  
pp. 993-1007 ◽  
Author(s):  
Cristian Vlăduţu ◽  
John McLaughlin ◽  
Ronald L Phillips

Abstract Quantitative trait locus (QTL) mapping has detected two linked QTL in the 8L chromosome arm segment introgressed from Gaspé Flint (a Northern Flint open-pollinated population) into the background of N28 (a Corn Belt Dent inbred line). Homozygous recombinant lines, with a variable length of the introgressed segment, confirmed the presence of the two previously identified, linked QTL. In the N28 background, Gaspé Flint QTL alleles at both loci induce a reduction in node number, height, and days to anthesis (pollen shed). Given the determinate growth pattern of maize, the phenotypic effects indicate that the two QTL are involved in the transition of the apical meristem from vegetative to generative structures. Relative to the effects of the two QTL in the background of N28, we distinguish two general developmental factors affecting the timing of pollen shed. The primary factor is the timing of the transition of the apical meristem. The second, derivative factor is the global extent of internode elongation. Having separated the two linked QTL, we have laid the foundation for the positional cloning of the QTL with the larger effect.


2021 ◽  
Vol 14 (3) ◽  
pp. 1-26
Author(s):  
Andrea Asperti ◽  
Stefano Dal Bianco

We provide a syllabification algorithm for the Divine Comedy using techniques from probabilistic and constraint programming. We particularly focus on the synalephe, addressed in terms of the "propensity" of a word to take part in a synalephe with adjacent words. We jointly provide an online vocabulary containing, for each word, information about its syllabification, the location of the tonic accent, and the aforementioned synalephe propensity, on the left and right sides. The algorithm is intrinsically nondeterministic, producing different possible syllabifications for each verse, with different likelihoods; metric constraints relative to accents on the 10th, 4th, and 6th syllables are used to further reduce the solution space. The most likely syllabification is then returned as output. We believe that this work could be a major milestone for many different investigations. From the point of view of digital humanities it opens new perspectives on computer-assisted analysis of digital sources, comprising automated detection of anomalous and problematic cases, metric clustering of verses and their categorization, or more foundational investigations addressing, e.g., the phonetic roles of consonants and vowels. From the point of view of text processing and deep learning, information about syllabification and the location of accents opens a wide range of exciting perspectives, from automatically learning the syllabification of words and verses to the improvement of generative models that are aware of metric issues and more respectful of the expected musicality.
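The final filtering step described above can be sketched as follows, assuming a simple representation in which each candidate syllabification carries its stressed positions and a likelihood. The candidates below are hand-made for illustration, not drawn from the paper's vocabulary, and only the mandatory accent on the 10th syllable is enforced.

```python
# Hedged sketch: prune nondeterministic candidate syllabifications by the
# hendecasyllable's metric constraints (11 syllables, accent on the 10th),
# then return the most likely survivor.

def best_syllabification(candidates):
    """candidates: list of (syllables, stressed_positions, likelihood)."""
    def admissible(syllables, stresses):
        return len(syllables) == 11 and 10 in stresses
    valid = [c for c in candidates if admissible(c[0], c[1])]
    if not valid:
        return None
    return max(valid, key=lambda c: c[2])

# Two hypothetical readings of "Nel mezzo del cammin di nostra vita",
# only one of which is metrically admissible.
candidates = [
    (["nel", "mez", "zo", "del", "cam", "min", "di", "no", "stra", "vi", "ta"],
     {2, 6, 10}, 0.7),     # 11 syllables, accent on the 10th: kept
    (["nel", "mez", "zo", "del", "cam", "min", "di", "no", "stra", "vi", "ta", "x"],
     {2, 6, 11}, 0.3),     # 12 syllables: rejected
]
print(best_syllabification(candidates)[2])  # likelihood of the chosen reading: 0.7
```

The paper's algorithm additionally weighs synalephe propensities when generating the candidates and their likelihoods; this sketch only covers the constraint-based pruning.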


Cerâmica ◽  
2014 ◽  
Vol 60 (356) ◽  
pp. 465-470 ◽  
Author(s):  
D. P. C. Velazco ◽  
E. F. Sancet ◽  
F. Urbaneja ◽  
M. Piccico ◽  
M. F. Serra ◽  
...  

Computer-assisted design (CAD) has been well known for several decades and has been employed in ceramic manufacturing almost from the beginning, but usually only in the early stages of project ideation, not in prototyping or manufacturing. Rapid prototyping machines, also known as 3D printers, can produce real pieces in a few hours from high-resistance plastic materials, with great precision and similarity to the original, based on digital models produced either by modeling with specific design software or by digitizing existing parts with so-called 3D scanners. The main objective of this work is to develop the methodology used in the entire process of building a ceramic piece from the interrelationship between traditional techniques and new prototype-manufacturing technologies, and to take advantage of the benefits this new reproduction technology offers. The experience was based on the generation of a complex piece, in digital format, which served as the model. A regular 15 cm icosahedron presented features complex enough to discourage producing the model by the traditional (manual or mechanical) techniques of ceramics. From this digital model, a plaster mold was made in the traditional way in order to slip-cast clay-based slurries, which were freely air-dried and then fired and glazed in the traditional way. This experience confirmed the working hypothesis and opens up new lines of work at academic and technological levels that will be explored in the near future. This technology provides a wide range of options for addressing the formal aspect of a piece to be produced in the fields of design, architecture, industrial design, traditional pottery, ceramic art, etc., amplifying the formal possibilities and saving time, and therefore costs, when producing the molds necessary and appropriate to each requirement.


2018 ◽  
Vol 69 (12) ◽  
pp. 1882 ◽  
Author(s):  
Elena-Maria Klopries ◽  
Zhiqun Daniel Deng ◽  
Theresa U. Lachmann ◽  
Holger Schüttrumpf ◽  
Bradly A. Trumbo

Surface bypasses are downstream migration structures that can help reduce hydropower-induced damage to migrating fish. However, no comprehensive design concept that facilitates good surface bypass performance for a wide range of sites and species is available. This is why fish-passage efficiencies at recently built bypass structures vary widely, from 0% to 97%. We reviewed 50 surface bypass performance studies and existing guidelines for salmonids, eels and potamodromous species to identify crucial design criteria for surface bypasses employed in North America, Europe and Australia. Two-tailed Pearson correlation of bypass efficiency and bypass design criteria shows that bypass entrance area (r=0.3300, P=0.0036) and proportion of inflow to the bypass (r=0.3741, P=0.0032) are the parameters with the greatest influence on bypass efficiency. However, other parameters such as guiding structures (P=0.2181, ordinary Student’s t-test) and trash-rack spacing (r=–0.1483, P=0.3951, Spearman correlation), although not statistically significant, have been shown to have an effect on efficiency in some studies. The use of different performance criteria and efficiency definitions for bypass evaluation hampers direct comparison of studies and, therefore, the deduction of design criteria. To enable meta-analyses and improve bypass design considerations, we suggest a list of standardised performance parameters for bypasses that should be considered in future bypass-performance studies.
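The authors' dataset is not reproduced here, but the kind of two-tailed correlation test reported above can be sketched as a distribution-free permutation test. The entrance areas and efficiencies below are made-up values chosen to show a clear positive association.

```python
# Hedged sketch: two-tailed permutation test of a Pearson correlation,
# applied to hypothetical bypass entrance areas (m^2) and fish-passage
# efficiencies (%). This stands in for the parametric test in the paper.
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def two_tailed_perm_p(x, y, reps=10000, seed=0):
    """p-value: share of label shufflings with |r| at least as extreme."""
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    ys = y[:]
    hits = 0
    for _ in range(reps):
        rng.shuffle(ys)
        if abs(pearson(x, ys)) >= observed:
            hits += 1
    return (hits + 1) / (reps + 1)   # add-one correction avoids p = 0

area = [1.0, 2.5, 3.0, 4.5, 5.0, 6.5, 8.0, 9.0]   # hypothetical data
eff = [10, 35, 30, 55, 50, 70, 80, 95]
print(f"r = {pearson(area, eff):.3f}, p = {two_tailed_perm_p(area, eff):.4f}")
```

A permutation test makes no normality assumption, which is convenient when, as the abstract notes, studies report efficiencies on inconsistent scales.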


2015 ◽  
Author(s):  
Peter Weiland ◽  
Ina Dehnhard

The benefits of making research data permanently accessible through data archives are widely recognized: costs can be reduced by reusing existing data, research results can be compared and validated against results from archived studies, fraud can be more easily detected, and meta-analyses can be conducted. Apart from that, authors may gain recognition and reputation for producing the datasets. Since 2003, the accredited research data center PsychData (part of the Leibniz Institute for Psychology Information in Trier, Germany) has documented and archived research data from all areas of psychology and related fields. In the beginning, the main focus was on datasets that provide a high potential for reuse, e.g. longitudinal studies, large-scale cross-sectional studies, or studies that were conducted under historically unique conditions. Presently, more and more journal publishers and project funding agencies require researchers to archive their data and make them accessible to the scientific community. Therefore, PsychData also has to serve this need.

In this presentation we report on our experiences in operating a discipline-specific research data archive in a domain where data sharing is met with considerable resistance. We will focus on the challenges for data sharing and data reuse in psychology, e.g.:
- the large amount of domain-specific knowledge necessary for data curation
- high costs for documenting the data because of a wide range of non-standardized measures
- small teams and little established infrastructure compared with the "big data" disciplines
- studies in psychology not designed for reuse (in contrast to the social sciences)
- data protection
- resistance to sharing data

At the end of the presentation, we will provide a brief outlook on DataWiz, a new project funded by the German Research Foundation (DFG). In this project, tools will be developed to support researchers in documenting their data during the research phase.

