A note on the implications of factorial invariance for common factor variable equivalence

2016 ◽  
Author(s):  
Michael Maraun ◽  
Moritz Heene

There has come to exist within the psychometric literature a generalized belief to the effect that a determination of the level of factorial invariance that holds over a set of s populations Δj, j = 1..s, is central to ascertaining whether or not the common factor random variables ξj, j = 1..s, are equivalent. In the current manuscript, a technical examination of this belief is undertaken. The chief conclusion of the work is that, as long as technical, statistical senses of random variable equivalence are adhered to, the belief is unfounded.

1979 ◽  
Vol 11 (03) ◽  
pp. 591-602
Author(s):  
David Mannion

We showed in [2] that if an object of initial size x (x large) is subjected to a succession of random partitions, then the object is decomposed into a large number of terminal cells, each of relatively small size, where if Z(x, B) denotes the number of such cells whose sizes are points in the set B, then there exists c (0 < c ≦ 1) such that Z(x, B)x^(−c) converges in probability, as x → ∞, to a random variable W. We show here that if a parent object of size x produces k offspring of sizes y1, y2, ···, yk and if for each k the quantity x − y1 − y2 − ··· − yk (the ‘waste’ or the ‘cover’, depending on the point of view) is relatively small, then for each n the nth cumulant Ψn(x, B) of Z(x, B) satisfies Ψn(x, B)x^(−c) → κn(B), as x → ∞, for some κn(B). Thus, writing N = x^c, Z(x, B) has approximately the same distribution as the sum of N independent and identically distributed random variables. (The determination of the distribution of the individual appears to be a difficult problem.) The theory also applies when an object of moderate size is broken down into very fine particles or granules.
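The fragmentation process above can be illustrated with a toy simulation. This is a minimal sketch under simplifying assumptions of my own: a one-dimensional object is split into two pieces at a uniform random point, with no waste, so the number of terminal cells grows roughly linearly in x (the conservative case c = 1); the function name and threshold are illustrative, not from the paper.

```python
import random

def fragment(x, threshold=1.0):
    """Recursively split an object of size x at a uniform random point
    until every piece is below the threshold; return the list of
    terminal cell sizes.  (Conservative toy model: no 'waste'.)"""
    if x < threshold:
        return [x]
    u = random.random()
    return fragment(u * x, threshold) + fragment((1 - u) * x, threshold)

random.seed(0)
sizes = [100.0, 1000.0, 10000.0]
counts = [len(fragment(x)) for x in sizes]
# In this conservative model total size is preserved, so the number of
# terminal cells grows roughly linearly in x; Mannion's results cover
# models with waste, where 0 < c <= 1.
```

Because no mass is lost, the terminal cell sizes always sum back to x; models with waste break that identity and give the sub-linear growth x^c.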


Firstly, this paper establishes a K-factor linear model and an arbitrage pricing model according to the Arbitrage Pricing Theory (APT). Then, from the Statistical Yearbook of the National Bureau of Statistics for 2001 to 2017, 10 factors are collected as the original factors, such as gross national product, gross industrial product and gross tertiary industry product. After synthesis and simplification, three common factors are extracted to replace the ten original factors. The first common factor reflects the overall economic level of the country; the second reflects the country's inflation rate; the third reflects the country's total annual net export trade. After the common factors are determined, their values are calculated from the original data. Collecting the annual returns of 10 stocks over the 17 years and performing random forest regression twice, we obtain the arbitrage pricing model. Then, based on the same common factor data, another arbitrage pricing model is obtained by imitating the linear regression method of previous similar papers. By comparing the pricing errors, we find that the pricing effect of the model obtained by random forest regression is better than that of the model obtained by linear regression.
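The linear-regression baseline described above can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's dataset: the factor scores, loadings, and noise level are all invented, and the random-forest comparison is omitted (it would be fitted on the same factors and judged by the same pricing-error metric).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins for the paper's data: 17 annual observations of
# 3 extracted common factors (economy, inflation, net exports) and the
# returns of 10 stocks generated from a K-factor linear model.
T, K, N = 17, 3, 10
F = rng.normal(size=(T, K))                 # common factor scores
B = rng.normal(size=(K, N))                 # illustrative factor loadings
R = F @ B + 0.1 * rng.normal(size=(T, N))   # stock returns + pricing noise

# Linear APT pricing model: regress each stock's returns on the factors.
X = np.column_stack([np.ones(T), F])        # add intercept column
coef, *_ = np.linalg.lstsq(X, R, rcond=None)
R_hat = X @ coef                            # fitted (model-priced) returns

# Pricing error: root-mean-square gap between actual and fitted returns.
rmse = float(np.sqrt(np.mean((R - R_hat) ** 2)))
```

The same `rmse` computed for the random-forest fit would give the comparison the abstract reports.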


2011 ◽  
Vol 28 (1) ◽  
pp. 59
Author(s):  
Charmaine Scrimnger-Christian ◽  
Saratiel Wedzerai Musvoto

The concept of value in accounting has been generalized by various authors to a large variety of relations in both accounting and finance. For example, the basis for the preparation of the financial statements in accounting and the foundations for the determination of the return on a security in finance are based on the concept of value measurement. However, there are cases in which applications of the concept of value measurement break down, such as in predicting the long-run behavior of accounting and finance phenomena classified as random variables and in applying deterministic models to accounting and finance models. In this study, the principles of probability biclassification and random utility theory are used to rectify the shortcomings of generalizing the concept of value measurement to include activities to understand the long-run behavior of random variables. This study closes with a discussion on the compatibility of the intentionality structure of acts of knowledge in accounting and finance with statistical concepts on random variables.


1966 ◽  
Vol 3 (01) ◽  
pp. 272-273 ◽  
Author(s):  
H. Robbins ◽  
E. Samuel

We define a natural extension of the concept of expectation of a random variable y as follows: M(y) = a if there exists a constant −∞ ≦ a ≦ ∞ such that if y1, y2, … is a sequence of independent identically distributed (i.i.d.) random variables with the common distribution of y, then (y1 + y2 + ⋯ + yn)/n → a with probability 1 as n → ∞.
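The definition can be checked numerically: whenever the ordinary expectation exists, the strong law of large numbers makes the sample mean converge to it, so M(y) agrees with E(y). A minimal sketch, using an Exponential(1) variable (my own choice of example, with mean 1):

```python
import random

def running_mean_estimate(sampler, n):
    """Estimate M(y) by the average of n i.i.d. draws: by the strong law,
    (y1 + ... + yn)/n converges to M(y) whenever M(y) is defined."""
    return sum(sampler() for _ in range(n)) / n

random.seed(0)
# Exponential(1) has ordinary expectation 1, so M(y) = 1 here.
est = running_mean_estimate(lambda: random.expovariate(1.0), 100_000)
```

The interest of the extension is that M(y) can also be ±∞, or exist in cases where the usual expectation integral is not absolutely convergent.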


1975 ◽  
Vol 7 (4) ◽  
pp. 830-844 ◽  
Author(s):  
Lajos Takács

A sequence of random variables η0, η1, …, ηn, … is defined by the recurrence formula ηn = max (ηn–1 + ξn, 0) where η0 is a discrete random variable taking on non-negative integers only and ξ1, ξ2, … ξn, … is a semi-Markov sequence of discrete random variables taking on integers only. Define Δ as the smallest n = 1, 2, … for which ηn = 0. The random variable ηn can be interpreted as the content of a dam at time t = n(n = 0, 1, 2, …) and Δ as the time of first emptiness. This paper deals with the determination of the distributions of ηn and Δ by using the method of matrix factorisation.
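The recurrence and the time of first emptiness are easy to simulate. A minimal sketch: here the inputs ξn are drawn i.i.d. with negative drift purely to illustrate the dynamics, whereas the paper treats the more general semi-Markov case; the function name is illustrative.

```python
import random

def dam_path(eta0, xi):
    """Content of the dam eta_n = max(eta_{n-1} + xi_n, 0), together with
    the time of first emptiness Delta (smallest n >= 1 with eta_n = 0)."""
    eta, path, delta = eta0, [eta0], None
    for n, x in enumerate(xi, start=1):
        eta = max(eta + x, 0)
        path.append(eta)
        if delta is None and eta == 0:
            delta = n
    return path, delta

random.seed(3)
# i.i.d. inputs with negative drift, so the dam empties almost surely.
xi = [random.choice([-1, -1, 1]) for _ in range(1000)]
path, delta = dam_path(2, xi)
```

Takács's contribution is the exact distribution of ηn and Δ via matrix factorisation, which this simulation only approximates empirically.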


YMER Digital ◽  
2021 ◽  
Vol 20 (11) ◽  
pp. 222-229
Author(s):  
A Devi ◽  
B Sathish Kumar
In this paper, the problem of time to recruitment is analyzed for a single grade manpower system using a univariate CUM policy of recruitment. It is assumed that policy decisions and exits occur at different epochs, that the wastage of manpower due to exits forms a sequence of independent and identically distributed exponential random variables, that the inter-decision times form a geometric process, and that the inter-exit times are independent and identically distributed. The breakdown threshold for the cumulative wastage of manpower in the system has three components which are independent exponential random variables. Employing a different probabilistic analysis, analytical results in closed form for the system characteristics are derived.
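The threshold-crossing mechanism can be sketched by simulation. This is a deliberately simplified toy: exit-by-exit exponential wastage accumulates until it first exceeds a threshold drawn as the sum of three independent exponential components; the rates are invented, and the paper's separate decision/exit epoch structure and geometric inter-decision times are omitted.

```python
import random

def time_to_recruitment(lam=1.0, thetas=(0.5, 0.3, 0.2), max_epochs=10_000):
    """Toy simulation: each exit causes i.i.d. Exponential(lam) wastage;
    the breakdown threshold is the sum of three independent exponential
    components (means given by thetas); recruitment occurs at the first
    exit where cumulative wastage exceeds the threshold."""
    threshold = sum(random.expovariate(1.0 / t) for t in thetas)
    total = 0.0
    for n in range(1, max_epochs + 1):
        total += random.expovariate(lam)
        if total > threshold:
            return n
    return None

random.seed(7)
times = [time_to_recruitment() for _ in range(1000)]
mean_time = sum(times) / len(times)
```

The paper derives such system characteristics (mean and variance of the time to recruitment) in closed form rather than by simulation.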


Author(s):  
Sándor Csörgő ◽  
David M. Mason

Given a sequence of non-negative independent and identically distributed random variables, we determine conditions on the common distribution such that the sum of appropriately normalized and centred upper kn extreme values based on the first n random variables converges in distribution to a normal random variable, where kn → ∞ and kn/n → 0 as n → ∞. The probabilistic problem is motivated by recent statistical work on the estimation of the exponent of a regularly varying distribution function. Our main tool is a new Brownian bridge approximation to the uniform empirical and quantile processes in weighted supremum norms.
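The statistical motivation mentioned above can be made concrete with the Hill estimator, which is built from exactly this kind of sum of upper order statistics. A minimal sketch on an exact Pareto sample (the distribution, sample size, and choice of k are all illustrative):

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimator of the tail index alpha of a regularly varying
    distribution, computed from the upper k order statistics: the
    reciprocal of the mean log-spacing above the (k+1)-th largest value."""
    xs = sorted(sample, reverse=True)
    logs = [math.log(xs[i]) for i in range(k + 1)]
    return k / sum(logs[i] - logs[k] for i in range(k))

random.seed(1)
alpha = 2.0
# Pareto(alpha): X = U**(-1/alpha) has tail P(X > x) = x**(-alpha), x >= 1.
sample = [random.random() ** (-1 / alpha) for _ in range(20_000)]
alpha_hat = hill_estimator(sample, k=500)
```

The asymptotic normality of such estimators, with k → ∞ and k/n → 0, is what the limit theorem of the paper underpins.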

