Genetic evaluation for large data sets by random regression models in Nellore cattle

2009, Vol. 61 (4), pp. 959-967
Author(s): P.R.C. Nobre, A.N. Rosa, L.O.C. Silva

Expected progeny differences (EPD) of Nellore cattle estimated by a random regression model (RRM) and by a multiple-trait model (MTM) were compared. The genetic evaluation data included 3,819,895 records of up to nine sequential weights of 963,227 animals, measured at ages ranging from one day (birth weight) to 733 days. Traits considered were weights at birth, 10 to 110 days, 102 to 202 days, 193 to 293 days, 283 to 383 days, 376 to 476 days, 551 to 651 days, and 633 to 733 days of age. Seven data samples were created; two were chosen because their parameter estimates were biologically the most consistent: one with 84,426 records and another with 72,040. Records preadjusted to a fixed age were analyzed by an MTM, which included the effects of contemporary group, age of dam class, additive direct, additive maternal, and maternal permanent environment. Analyses were carried out by REML, with five traits at a time. The RRM included the effects of age of animal, contemporary group, age of dam class, additive direct, permanent environment, additive maternal, and maternal permanent environment. Legendre polynomials of different degrees were used to describe the random effects. The MTM estimated covariance components and genetic parameters for birth weight and the sequential weights, and the RRM for all ages. Because the correlations between EPD estimated by the MTM and by all tested RRM were not equal to 1.0, RRM cannot be recommended for the genetic evaluation of large data sets.
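In an RRM, each random effect is modeled as a regression on Legendre polynomials of standardized age. The sketch below is not code from the paper; it only illustrates, with assumed helper names, how such covariates could be built for ages standardized to [-1, 1] over the 1 to 733 day range:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(ages, age_min=1, age_max=733, degree=3):
    """Normalized Legendre polynomial covariates for a random regression
    on age (hypothetical helper; one row per record)."""
    ages = np.asarray(ages, dtype=float)
    # Standardize ages from [age_min, age_max] to [-1, 1].
    x = -1.0 + 2.0 * (ages - age_min) / (age_max - age_min)
    # Columns are P_0(x) ... P_degree(x).
    phi = legendre.legvander(x, degree)
    # Apply the usual normalization sqrt((2k + 1) / 2) for order k.
    norms = np.sqrt((2.0 * np.arange(degree + 1) + 1.0) / 2.0)
    return phi * norms

# Example: covariates for records taken at 1, 205, and 550 days of age.
print(legendre_covariates([1, 205, 550]))
```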

2003, Vol. 55 (4), pp. 480-490
Author(s): P.R.C. Nobre, P.S. Lopes, R.A. Torres, L.O.C. Silva, A.J. Regazzi, ...

Growth curves of Nellore cattle were analyzed using body weights measured at ages ranging from 1 day (birth weight) to 733 days. Traits considered were birth weight, 10 to 110 days weight, 102 to 202 days weight, 193 to 293 days weight, 283 to 383 days weight, 376 to 476 days weight, 551 to 651 days weight, and 633 to 733 days weight. Two data samples were created: one with 79,849 records from herds that had missing traits and another with 74,601 records from herds with no missing traits. Records preadjusted to a fixed age were analyzed by a multiple-trait model (MTM), which included the effects of contemporary group, age of dam class, additive direct, additive maternal, and maternal permanent environment. Analyses were carried out by a Bayesian method for all nine traits. The random regression model (RRM) included the effects of age of animal, contemporary group, age of dam class, additive direct, permanent environment, additive maternal, and maternal permanent environment. Cubic Legendre polynomials were used to describe the random effects. The MTM estimated covariance components and genetic parameters for birth weight and the sequential weights, and the RRM for all ages. Because covariance components based on the RRM were inflated for herds with missing traits, the MTM should be used and its estimates converted to covariance functions.
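Converting MTM estimates to a covariance function amounts to fitting a coefficient matrix for the Legendre basis to the covariance matrix estimated at the fixed ages. A minimal sketch of that conversion, with assumed names and a cubic basis (not the authors' software):

```python
import numpy as np
from numpy.polynomial import legendre

def covariance_function(G, ages, age_min=1, age_max=733, degree=3):
    """Fit the coefficient matrix K of a covariance function to a
    multiple-trait covariance matrix G estimated at the given ages,
    so that Cov(w_i, w_j) is approximated by phi_i @ K @ phi_j."""
    x = -1.0 + 2.0 * (np.asarray(ages, dtype=float) - age_min) / (age_max - age_min)
    phi = legendre.legvander(x, degree)      # Legendre basis at the measured ages
    phi_pinv = np.linalg.pinv(phi)           # least-squares inverse when degree + 1 < len(ages)
    K = phi_pinv @ G @ phi_pinv.T            # coefficient matrix of the covariance function
    return K

# The fitted K then interpolates (co)variances to any age in the range:
# cov(a, b) = legvander([a_std], degree) @ K @ legvander([b_std], degree).T
```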


2003, Vol. 81 (4), pp. 927-932
Author(s): P. R. C. Nobre, I. Misztal, S. Tsuruta, J. K. Bertrand, L. O. C. Silva, ...

2017, Vol. 95 (1), pp. 9-15
Author(s): A. Wolc, J. Arango, P. Settar, N. P. O'Sullivan, J. C. M. Dekkers

Abstract Shell quality is one of the most important traits for improvement in layer chickens. Proper consideration of repeated records can increase the accuracy of estimated breeding values and thus the genetic improvement of shell quality. The objective of this study was to compare different models for genetic evaluation of the collected data. For this study, 81,646 dynamic stiffness records on 21,321 brown egg layers and 93,748 records on 24,678 white egg layers from 4 generations were analyzed. Across generations, data were collected at 2 to 4 ages (at approximately 26, 42, 65, and 86 wk), with repeated records at each age. Seven models were compared, including 5 repeatability models of increasing complexity, a random regression model, and a multiple-trait model. The models were compared using the Akaike Information Criterion, with significance testing of nested models by a log-likelihood ratio test. Estimates of heritability were 0.31 to 0.36 for the brown line and 0.23 to 0.26 for the white line, but repeatability was higher for the model with age-specific permanent environment effects (0.59 for both lines) than for the model with an overall permanent environmental effect (0.47 for the brown line and 0.41 for the white line). The model that allowed for a permanent environmental effect within age and heterogeneous residual variance between ages fit better than the traditional model with single permanent environment and residual effects, but was inferior in fit and predictive ability to the full multiple-trait model. The random regression model fit the data better than the repeatability models but slightly worse than the multiple-trait model. For traits with repeated records at different ages, repeatability within and across ages as well as genetic correlations should be considered when choosing the number of records collected per individual as well as the model for genetic evaluation.
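For orientation, heritability and repeatability under a simple repeatability model are ratios of variance components. The sketch below uses hypothetical component values chosen only to land in the range reported above; it is not the authors' analysis:

```python
def h2_and_repeatability(var_a, var_pe, var_e):
    """Heritability and repeatability under a repeatability model
    y = mean + animal + permanent environment + residual."""
    var_p = var_a + var_pe + var_e
    return var_a / var_p, (var_a + var_pe) / var_p

# Hypothetical components giving heritability ~0.33 and repeatability ~0.47,
# roughly matching the brown line under the overall-PE model.
h2, rep = h2_and_repeatability(var_a=0.33, var_pe=0.14, var_e=0.53)
print(round(h2, 2), round(rep, 2))
```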


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
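A minimal sketch of the two per-pixel operations just described, assuming the spectrum-image is held as a NumPy array of shape (80, 80, 2, 1024) with the two energy-offset acquisitions stored per pixel (the array layout and names are assumptions, not the software of [1]):

```python
import numpy as np

CHANNELS_PER_EV = 20
OFFSET = 1 * CHANNELS_PER_EV          # the 1 eV offset expressed in channels

def difference_spectrum(si):
    """Artifact-corrected difference spectrum: subtract the two
    energy-offset acquisitions at each pixel."""
    return si[..., 0, :] - si[..., 1, :]

def summed_spectrum(si):
    """Numerically remove the 1 eV offset and add the two acquisitions
    to form a normal spectrum."""
    shifted = np.roll(si[..., 1, :], OFFSET, axis=-1)
    shifted[..., :OFFSET] = 0.0       # channels with no overlapping counterpart
    return si[..., 0, :] + shifted
```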


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive X-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.

Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
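As a rough illustration of this kind of workflow (not the authors' code), a common remedy for skewed, zero-inflated variables with disparate ranges is a log transform and standardization before PCA and clustering. The sketch below uses synthetic stand-in data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in for real data: one row per particle, one column per EDS elemental intensity.
rng = np.random.default_rng(0)
X = rng.gamma(shape=1.0, scale=2.0, size=(1000, 12))

# log1p damps the skew and the disparate variable ranges while keeping the
# many zero values (below detection limit) finite; then standardize.
X_scaled = StandardScaler().fit_transform(np.log1p(X))

scores = PCA(n_components=3).fit_transform(X_scaled)               # principal components
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))                                         # particles per cluster
```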


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk — Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the features of processing large arrays of information in distributed systems. Singular value decomposition (SVD) is used to reduce the amount of data processed by eliminating redundancy. Dependencies of computational efficiency in distributed systems were obtained using the MPI message-passing protocol and the MapReduce model of node interaction. The efficiency of each technology was analyzed for data sets of different sizes: non-distributed systems are inefficient for large volumes of information because of their low computing performance, so distributed systems combined with singular value decomposition are proposed to reduce the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which confirms the expediency of using distributed computing when processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, whereas MPI performs calculations more efficiently for small amounts of information. As data sets grow, it is advisable to use the MapReduce model.
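A minimal sketch of the data-reduction step, assuming a truncated singular value decomposition of a dense data matrix; the matrix size and rank are illustrative, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10_000, 500))      # original data block (illustrative size)

# Truncated SVD: keep only the k largest singular values and their vectors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 50
A_reduced = U[:, :k] * s[:k]                # compressed representation, 10000 x 50
A_approx = A_reduced @ Vt[:k, :]            # reconstruction when the full matrix is needed

# Relative error of the rank-k approximation.
print(np.linalg.norm(A - A_approx) / np.linalg.norm(A))
```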


2018, Vol. 2018 (6), pp. 38-39
Author(s): Austa Parker, Yan Qu, David Hokanson, Jeff Soller, Eric Dickenson, ...

Computers, 2021, Vol. 10 (4), pp. 47
Author(s): Fariha Iffath, A. S. M. Kayes, Md. Tahsin Rahman, Jannatul Ferdows, Mohammad Shamsul Arefin, ...

A programming contest generally involves the host presenting a set of logical and mathematical problems to the contestants. The contestants are required to write computer programs capable of solving these problems. An online judge system is used to automate the judging of the programs submitted by the users. Online judges are systems designed for the reliable evaluation of the source codes submitted by the users. Traditional online judging platforms are not ideally suited for programming labs, as they do not support partial scoring or efficient detection of plagiarized code. Considering this, in this paper we present an online judging framework that is capable of automatically scoring codes by efficiently detecting plagiarized content and the level of accuracy of the code. Our system detects plagiarism by computing fingerprints of programs and comparing the fingerprints instead of the whole files. We used winnowing to select fingerprints among the k-gram hash values of a source code, generated by the Rabin–Karp algorithm. The proposed system is compared with existing online judging platforms to show its superiority in terms of time efficiency, correctness, and feature availability. In addition, we evaluated our system on large data sets and compared its run time with MOSS, a widely used plagiarism detection technique.
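A minimal sketch of the fingerprinting step, assuming character-level k-grams hashed with a Rabin-Karp rolling hash and winnowed with a fixed window; the parameter values are illustrative, not those of the paper:

```python
def kgram_hashes(text, k=5, base=257, mod=(1 << 31) - 1):
    """Rolling Rabin-Karp hashes of all k-grams in `text`."""
    if len(text) < k:
        return []
    h = 0
    for ch in text[:k]:
        h = (h * base + ord(ch)) % mod
    hashes = [h]
    high = pow(base, k - 1, mod)
    for i in range(1, len(text) - k + 1):
        h = ((h - ord(text[i - 1]) * high) * base + ord(text[i + k - 1])) % mod
        hashes.append(h)
    return hashes

def winnow(hashes, window=4):
    """Select fingerprints: the (rightmost) minimum hash of each window."""
    fingerprints = set()
    for i in range(len(hashes) - window + 1):
        win = hashes[i:i + window]
        j = max(range(window), key=lambda t: (win[t] == min(win), t))
        fingerprints.add((i + j, win[j]))
    return fingerprints

# Two submissions are then compared by the overlap of their fingerprint hash values:
# {h for _, h in winnow(kgram_hashes(code_a))} & {h for _, h in winnow(kgram_hashes(code_b))}
```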


2021
Author(s): Věra Kůrková, Marcello Sanguineti
