massive sample
Recently Published Documents

TOTAL DOCUMENTS: 16 (FIVE YEARS: 5)
H-INDEX: 5 (FIVE YEARS: 2)
2021 ◽  
Author(s):  
E. Grant Baldwin

In recent decades, reform movements have lobbied to remove at-large elections from local governing bodies and replace them with elections by district, in which a city's electorate is divided into geographic regions that each elect their own council member. Prior social science research has generally concluded that, in most cases, district elections elect non-white city councilors more reliably than at-large elections. However, these studies are limited by their use of small samples of municipalities, usually only the largest ones (pop. > 25,000) or those from a single state. I overcome this limitation by employing a massive sample of more than 15,000 municipal governments across 49 states. My findings are consistent with and build upon previous research: as the proportion of non-white residents within a city's population increases, district elections are predicted to elect higher proportions of non-white council members than wholly at-large elections.


2021 ◽  
Vol 5 (2) ◽  
pp. 24-31
Author(s):  
Sami A. Obed ◽  
Parzhin A. Mohammed ◽  
Dler H. Kadir

We describe how the Nelson–Aalen estimator provides a nonparametric estimate of the cumulative hazard rate function based on right-censored and left-truncated survival data, and how it can be used to estimate related quantities. The technique applies chiefly to survival and product-quality data, such as the integrated relative mortality in a multiplicative model with external rates and the cumulative infection rate in a simple epidemic model. Counting processes provide a framework that permits a unified treatment of all these situations, and the main small- and large-sample properties of the estimator are summarized. The proposed estimator, a simple and nearly unbiased weighted average of Nelson–Aalen reliability estimates over two time periods, fills this gap; its suitability and utility in model selection are reviewed, and a real-world dataset is analyzed to demonstrate them. The data were gathered from the Ministry of Health's website between October 1, 2020, and February 28, 2021. The Nelson–Aalen results showed that the odds of surviving were higher during a short period after exposure to the virus; as time passes, the chances become slimmer. The closer the estimate comes to 1 from 0.5 upward, the greater the chance of surviving the infection.
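As a concrete illustration of the estimator discussed above, the following sketch computes the Nelson–Aalen cumulative hazard, H(t) = Σ dᵢ/nᵢ over event times, from right-censored data. The function name, interface, and data are illustrative assumptions, not taken from the paper.

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard from right-censored data.

    times:  observed times (event or censoring).
    events: 1 if the event occurred at that time, 0 if right-censored.
    Returns a list of (time, cumulative hazard) pairs at each event time.
    """
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    hazard = 0.0
    estimate = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = 0  # events at time t
        c = 0  # total observations (events + censorings) leaving at time t
        while i < len(pairs) and pairs[i][0] == t:
            d += pairs[i][1]
            c += 1
            i += 1
        if d > 0:
            hazard += d / n_at_risk  # increment d_i / n_i
            estimate.append((t, hazard))
        n_at_risk -= c  # everyone observed at t leaves the risk set
    return estimate
```

The survival probability then follows as S(t) ≈ exp(−H(t)), which is how a cumulative-hazard estimate translates into the "chances of surviving" discussed in the abstract.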


2018 ◽  
Vol 45 (5) ◽  
pp. 705-709 ◽  
Author(s):  
Sergio Da Silva ◽  
Marcelo Perlin ◽  
Raul Matsushita ◽  
André AP Santos ◽  
Takeyoshi Imasato ◽  
...  

Lotka’s law is a power law for the frequency of scholarly publications. We show that Lotka’s law cannot be dismissed after considering a massive sample of the number of publications of Brazilian researchers in journals listed on the SCImago Journal Rank and the Journal Citation Reports. For the SCImago Journal Rank, we found a power law with a Pareto exponent of 0.4 beyond the threshold of 50 papers. Because an exponent below 1 implies a divergent theoretical mean, computing the ‘average number of publications’ of either a researcher or a discipline is of no practical significance.
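The tail fit described above can be sketched with the standard maximum-likelihood (Hill-type) estimator for a Pareto exponent beyond a fixed threshold; this is a generic textbook estimator, not the authors' exact procedure, and the threshold and data below are illustrative.

```python
import math

def pareto_mle(data, xmin):
    """Maximum-likelihood (Hill-type) estimate of the Pareto tail exponent.

    Assumes a tail density proportional to x**-(alpha + 1) for x >= xmin;
    the MLE is alpha_hat = n / sum(ln(x_i / xmin)) over the tail observations.
    """
    tail = [x for x in data if x >= xmin]
    return len(tail) / sum(math.log(x / xmin) for x in tail)
```

With an estimated exponent of 0.4, any such fit sits well below 1, the point at which the distribution's mean diverges, which is exactly why averaging publication counts is uninformative here.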


2017 ◽  
Vol 44 (5) ◽  
pp. 1157-1173 ◽  
Author(s):  
Tom Meyvis ◽  
Stijn M J Van Osselaer

As in other social sciences, published findings in consumer research tend to overestimate the size of the effect being investigated, due to both file drawer effects and abuse of researcher degrees of freedom, including opportunistic analysis decisions. Given that most effect sizes are substantially smaller than would be apparent from published research, there has been a widespread call to increase power by increasing sample size. We propose that, aside from increasing sample size, researchers can also increase power by boosting the effect size. If done correctly, removing participants, using covariates, and optimizing experimental designs, stimuli, and measures can boost effect size without inflating researcher degrees of freedom. In fact, careful planning of studies and analyses to maximize effect size is essential to be able to study many psychologically interesting phenomena when massive sample sizes are not feasible.
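The trade-off between sample size and effect size can be made concrete with a standard normal-approximation power calculation for a two-sample comparison. This is a textbook sketch, not a computation from the article itself.

```python
import math
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for
    standardized effect size d (Cohen's d), ignoring the far tail."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = d * math.sqrt(n_per_group / 2)
    return NormalDist().cdf(noncentrality - z_crit)
```

Because the noncentrality term is d·√(n/2), doubling the effect size buys the same power as quadrupling the per-group sample: `power_two_sample(0.2, 100)` and `power_two_sample(0.4, 25)` are equal (about 0.29), which is the article's point that boosting effect size can substitute for a much larger sample.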


2016 ◽  
Author(s):  
Debarka Sengupta ◽  
Nirmala Arul Rayan ◽  
Michelle Lim ◽  
Bing Lim ◽  
Shyam Prabhakar

Analysis of single-cell RNA-seq data is challenging due to technical variability, high noise levels and massive sample sizes. Here, we describe a normalization technique that substantially reduces technical variability and improves the quality of downstream analyses. We also introduce a nonparametric method for detecting differentially expressed genes that scales to > 1,000 cells and is both more accurate and ~10 times faster than existing parametric approaches.
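The authors' specific method is not reproduced here, but the general family it belongs to can be illustrated: a classical nonparametric two-group comparison such as the Wilcoxon rank-sum (Mann–Whitney) test, sketched below with a normal approximation. Names and data are illustrative, and this is not the paper's algorithm.

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney U) test using the
    normal approximation; ties get average ranks (no tie correction
    in the variance). Returns (U statistic for group a, p-value)."""
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    rank_of = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            rank_of[k] = avg_rank
        i = j
    n1, n2 = len(a), len(b)
    rank_sum_a = sum(rank_of[k] for k, (_, grp) in enumerate(pooled) if grp == 0)
    u = rank_sum_a - n1 * (n1 + 1) / 2
    mean_u = n1 * n2 / 2
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mean_u) / sd_u
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p
```

Rank-based tests like this make no distributional assumption about expression values, which is what lets nonparametric approaches cope with the heavy noise of single-cell data; the scalability gains claimed in the abstract come from the authors' own method, not from this sketch.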


2013 ◽  
Vol 9 (S304) ◽  
pp. 70-70
Author(s):  
Bob Becker

The final 800 sq. deg of sky covered by FIRST was observed with the new, improved JVLA. The data were split between two bandpasses at 1335 and 1730 MHz and included all four Stokes parameters, thus allowing both spectral and polarimetric results. The lower-frequency bandpass data were considered part of FIRST and are available through the FIRST website (http://sundog.stsci.edu/). Here we present the higher-frequency bandpass data as they pertain to AGN. Foremost, we present spectral index results for the 5000 quasars with spectroscopic redshifts and the 50,000 quasars with photometric redshifts that fall in the survey area. The spectral indices are analyzed as a function of redshift and optical properties, both for quasars detected above the 1 mJy limit and, via image stacking, for quasars at flux densities down to 10 µJy.
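A spectral index between the two bandpasses follows from the standard two-point definition S ∝ ν^α. The helper below is an illustrative sketch; the frequencies are the 1335 and 1730 MHz quoted above, while the flux-density values in the usage example are invented.

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, defined by S proportional to nu**alpha.

    s1, s2:   flux densities (same units, e.g. mJy) at frequencies nu1, nu2.
    nu1, nu2: frequencies (same units, e.g. MHz).
    """
    return math.log(s2 / s1) / math.log(nu2 / nu1)
```

For example, `spectral_index(10.0, 1335.0, 8.3, 1730.0)` returns a negative value, the steep spectrum typical of lobe-dominated sources, whereas a flat spectrum (equal fluxes) gives zero.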

