Secondary Sex Ratio in Multiple Births

2010 ◽  
Vol 13 (1) ◽  
pp. 101-108 ◽  
Author(s):  
Johan Fellman ◽  
Aldur W. Eriksson

Attempts have been made to identify factors influencing the sex ratio at birth (number of males per 100 females). Statistical analyses have shown that comparisons between sex ratios demand large data sets. The secondary sex ratio has been believed to vary inversely with the frequency of prenatal losses. This hypothesis suggests that the ratio is highest among singletons, intermediate among twins and lowest among triplets. Birth data in Sweden for the period 1869–2004 showed that among live births the secondary sex ratio was on average 105.9 among singletons, 103.2 among twins and 99.1 among triplets. The secondary sex ratio among stillbirths for both singletons and twins started at a high level, around 130, in the 1860s, but approached live-birth values in the 1990s. This trend is associated with the decrease and convergence of stillbirth rates among males and females. For detailed studies, we considered data for Sweden in 1869–1878 and in 1901–1967. Marital status or place of residence (urban or rural) had no marked influence on the secondary sex ratio among twins. For triplets, the sex ratio showed large random fluctuations and was on average low. During the period 1901–1967, 20 quadruplet, two quintuplet and one sextuplet sets were registered. The sex ratio was low, around 92.0.
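
As a minimal illustration of the measure used throughout, the secondary sex ratio can be computed directly from male and female birth counts; the counts below are hypothetical, not the Swedish registry figures:

    def secondary_sex_ratio(males, females):
        """Number of male births per 100 female births."""
        return 100.0 * males / females

    # Hypothetical example counts, not actual registry data.
    print(round(secondary_sex_ratio(52_950, 50_000), 1))  # -> 105.9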

2020 ◽  
Vol 1 (1) ◽  
pp. 31-40
Author(s):  
Hina Afzal ◽  
Arisha Kamran ◽  
Asifa Noreen

Because of the rapid changes in technology, today's market requires a high level of interaction between educators and the freshers entering it. The demand for IT-related jobs is higher than in any other field. In this paper, we discuss a survival analysis of two programming languages, Python and R, in the market. Data sets are growing large and traditional methods are no longer capable of handling them, so we applied recent data mining techniques through the Python and R programming languages. It took several months of effort to gather this amount of data and process it with data mining techniques using Python and R, but the results showed that both languages have had the same rate of growth over the past years.


2021 ◽  
Author(s):  
Stephen Taylor

Molecular biology experiments are generating an unprecedented amount of information from a variety of different experimental modalities. DNA sequencing machines, proteomics, mass cytometry, and microscopes generate huge amounts of data every day. Not only is the data large, but it is also multidimensional. Understanding trends and getting actionable insights from these data requires techniques that allow comprehension at a high level but also insight into what underlies these trends. Lots of small errors or poor summarization can lead to false results and reproducibility issues in large data sets. Hence it is essential that we do not cherry-pick results to suit a hypothesis but instead examine all data and publish accurate insights in a data-driven way. This article will give an overview of some of the problems faced by the researcher in understanding epigenetic changes (which are related to changes in the physical structure of DNA) when presented with raw analysis results, using visualization methods. We will also discuss the new challenges that arise from using machine learning, which can likewise be helped by visualization.


2020 ◽  
pp. 0887302X2093119 ◽  
Author(s):  
Rachel Rose Getman ◽  
Denise Nicole Green ◽  
Kavita Bala ◽  
Utkarsh Mall ◽  
Nehal Rawat ◽  
...  

With the proliferation of digital photographs and the increasing digitization of historical imagery, fashion studies scholars must consider new methods for interpreting large data sets. Computational methods to analyze visual forms of big data have been underway in the field of computer science through computer vision, where computers are trained to “read” images through a process called machine learning. In this study, fashion historians and computer scientists collaborated to explore the practical potential of this emergent method by examining a trend related to one particular fashion item—the baseball cap—across two big data sets—the Vogue Runway database (2000–2018) and the Matzen et al. Streetstyle-27K data set (2013–2016). We illustrate one implementation of high-level concept recognition to map a fashion trend. Tracking trend frequency helps visualize larger patterns and cultural shifts while creating sociohistorical records of aesthetics, which benefits fashion scholars and industry alike.
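
A minimal sketch of how a trend's frequency might be tabulated once an image classifier has flagged the concept of interest; the labels, counts, and counting logic here are hypothetical placeholders, not the actual Vogue Runway or Streetstyle-27K pipeline:

    from collections import Counter

    # Hypothetical detections: (year, detected_concept) pairs produced by a
    # trained classifier run over an image collection.
    detections = [(2013, "baseball_cap"), (2013, "beret"), (2014, "baseball_cap"),
                  (2014, "baseball_cap"), (2015, "baseball_cap")]
    images_per_year = {2013: 2, 2014: 2, 2015: 1}

    cap_counts = Counter(year for year, label in detections if label == "baseball_cap")
    frequency = {year: cap_counts[year] / n for year, n in images_per_year.items()}
    print(frequency)  # share of images per year containing the concept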


2014 ◽  
Vol 18 (1) ◽  
pp. 92-99
Author(s):  
Johan Fellman ◽  
Aldur W. Eriksson

We analyzed the effect of total fertility rate (TFR) and crude birth rate (CBR) on the number of males per 100 females at birth, also called the secondary sex ratio (SR), and on the twinning rate (TWR). Earlier studies have noted regional variations in TWR and racial differences in the SR. Statistical analyses have shown that comparisons between SRs demand large data sets because random fluctuations in moderate-sized data sets are marked. Consequently, reliable results presuppose national birth data. Here, we analyzed historical demographic data and their regional variations between counties in Sweden. We built spatial models for the TFR in 1860 and the CBR in 1751–1870, and as regressors we used geographical coordinates for the provincial capitals of the counties. For both variables, we obtained significant spatial variations, albeit of different patterns and power. The SR among the live-born in 1749–1869 and the TWR in 1751–1860 showed slight spatial variations. The influence of CBR and TFR on the SR and TWR was examined and statistically significant effects were found.
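
A rough sketch of the kind of spatial model described, regressing a county-level rate on the coordinates of the provincial capitals; the coordinates and rates below are invented placeholders, not the Swedish county data:

    import numpy as np

    # Hypothetical (longitude, latitude) of provincial capitals and county-level rates.
    coords = np.array([[18.1, 59.3], [13.0, 55.6], [17.6, 59.9], [20.3, 63.8]])
    rate = np.array([31.2, 33.5, 30.8, 35.1])   # e.g. crude birth rate per 1,000

    # Linear spatial trend: rate ~ b0 + b1*longitude + b2*latitude
    X = np.column_stack([np.ones(len(coords)), coords])
    beta, residuals, rank, _ = np.linalg.lstsq(X, rate, rcond=None)
    print(beta)  # intercept and spatial gradients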


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10at%Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39–89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
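
A minimal numpy sketch of the two processing routes described (difference versus summed spectra); the array shapes follow the text (80x80 pixels, two 1024-channel spectra, 20 channels/eV, 1 eV offset), but the data here are synthetic:

    import numpy as np

    channels_per_ev, offset_ev = 20, 1
    shift = channels_per_ev * offset_ev          # 1 eV offset = 20 channels

    # Synthetic spectrum-image: 80x80 pixels, two 1024-channel spectra per pixel.
    rng = np.random.default_rng(0)
    spec_a = rng.random((80, 80, 1024))
    spec_b = rng.random((80, 80, 1024))

    # Route 1: subtract the offset spectra to form an artifact-corrected difference spectrum.
    difference = spec_a - spec_b

    # Route 2: numerically remove the energy offset, then add to form a normal spectrum.
    # (np.roll wraps channels at the edges; in practice those channels would be discarded.)
    normal = spec_a + np.roll(spec_b, -shift, axis=-1)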


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive x-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea salt ageing, and halogen chemistry.

Aerosol particle data sets suffer from a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
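
A condensed sketch of the multivariate workflow named above (standardization, PCA, then clustering), using scikit-learn on a placeholder matrix of per-particle element intensities rather than the FeLine cruise data:

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Placeholder matrix: rows are particles, columns are EDS element intensities.
    rng = np.random.default_rng(1)
    X = rng.random((500, 8))

    X_std = StandardScaler().fit_transform(X)      # unequal variable ranges motivate scaling
    scores = PCA(n_components=3).fit_transform(X_std)
    labels = KMeans(n_clusters=4, n_init=10).fit_predict(scores)
    print(np.bincount(labels))                     # particles per cluster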


Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the features of processing large arrays of information in distributed systems. A method of singular data decomposition is used to reduce the amount of data processed by eliminating redundancy. Dependencies of computational efficiency in distributed systems were obtained using the MPI messaging protocol and the MapReduce node interaction model. The efficiency of each technology was analyzed for different data sizes: non-distributed systems are inefficient for large volumes of information because of their low computing performance. It is proposed to use distributed systems that apply singular data decomposition, which reduces the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which confirms the expediency of using distributed computing when processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, while MPI performs calculations more efficiently for small amounts of information. As data sets grow, it is advisable to use the MapReduce model.
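
The "singular data decomposition" mentioned above presumably refers to singular value decomposition (SVD); what follows is a minimal single-node numpy sketch of reducing a data matrix with a truncated SVD. The distributed MPI/MapReduce orchestration is omitted, and the matrix is a random placeholder:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.random((1000, 200))          # placeholder data matrix

    # Truncated SVD: keep only the k largest singular values/vectors.
    k = 20
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_reduced = U[:, :k] * s[:k]         # compact representation (1000 x 20)
    A_approx = A_reduced @ Vt[:k]        # low-rank reconstruction when needed

    print(A.size, A_reduced.size + Vt[:k].size)   # stored values before vs. after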


2018 ◽  
Vol 2018 (6) ◽  
pp. 38-39
Author(s):  
Austa Parker ◽  
Yan Qu ◽  
David Hokanson ◽  
Jeff Soller ◽  
Eric Dickenson ◽  
...  

Computers ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 47
Author(s):  
Fariha Iffath ◽  
A. S. M. Kayes ◽  
Md. Tahsin Rahman ◽  
Jannatul Ferdows ◽  
Mohammad Shamsul Arefin ◽  
...  

A programming contest generally involves the host presenting a set of logical and mathematical problems to the contestants. The contestants are required to write computer programs that are capable of solving these problems. An online judge system is used to automate the judging procedure of the programs that are submitted by the users. Online judges are systems designed for the reliable evaluation of the source codes submitted by the users. Traditional online judging platforms are not ideally suitable for programming labs, as they do not support partial scoring and efficient detection of plagiarized code. Considering this, in this paper we present an online judging framework capable of automatically scoring code by efficiently detecting plagiarized content and the level of accuracy of the code. Our system detects plagiarism by computing fingerprints of programs and comparing the fingerprints instead of the whole files. We used winnowing to select fingerprints among the k-gram hash values of a source code, which were generated by the Rabin–Karp algorithm. The proposed system is compared with existing online judging platforms to show its superiority in terms of time efficiency, correctness, and feature availability. In addition, we evaluated our system by using large data sets and comparing the run time with MOSS, which is the widely used plagiarism detection technique.
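
A compact sketch of the fingerprinting scheme described: rolling Rabin–Karp hashes over k-grams, then winnowing (keeping the minimum hash in each sliding window) to select fingerprints. The parameters k and w, the hash base, and the sample string are illustrative choices, not the values used in the paper's system:

    def kgram_hashes(text, k, base=257, mod=(1 << 61) - 1):
        """Rolling Rabin-Karp hashes of all k-grams in text."""
        h, power = 0, pow(base, k - 1, mod)
        hashes = []
        for i, ch in enumerate(text):
            h = (h * base + ord(ch)) % mod
            if i >= k - 1:
                hashes.append(h)
                h = (h - ord(text[i - k + 1]) * power) % mod  # drop leading character
        return hashes

    def winnow(hashes, w):
        """Select one fingerprint per window of w consecutive hashes (the minimum)."""
        return sorted({min(hashes[i:i + w]) for i in range(len(hashes) - w + 1)})

    code = "for i in range(n): total += i"
    print(winnow(kgram_hashes(code, k=5), w=4))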

