Does Network Theory Contradict Trait Theory?

2012 ◽  
Vol 26 (4) ◽  
pp. 447-448 ◽  
Author(s):  
Rolf Steyer

I argue that the trait and network theories of personality are not necessarily contradictory. If appropriately formalized, it may turn out that network theory incorporates traits as part of the theory. I object to the opinion that if a trait is a cause of behaviour, then it is necessarily an entity operating in the minds of individuals. Finally, I argue that liking parties can be a label for a random variable (item), a stochastic process (a family of items at different time points) and a latent variable (trait). In our colloquial language, we do not make these distinctions, which often leads to confusion. Copyright © 2012 John Wiley & Sons, Ltd.

2016 ◽  
Vol 24 (1) ◽  
pp. 1-16 ◽  
Author(s):  
Peter Jaeger

Summary. First we give an implementation in Mizar [2] of basic important definitions of stochastic finance, i.e. filtration ([9], pp. 183 and 185), adapted stochastic process ([9], p. 185) and predictable stochastic process ([6], p. 224). Second we give some concrete formalization and verification of real-world examples. In article [8] we started to define random variables for a presentation similar to the book [6]. Here we continue this study. Next we define the stochastic process. For further definitions based on the stochastic process we implement the definition of filtration. To get a better understanding we give a real-world example and connect the statements to the theorems. Other similar examples are given in [10], pp. 143-159 and in [12], pp. 110-124. First we introduce sets which give information referring to today (Ωnow, Def.6), tomorrow (Ωfut1, Def.7) and the day after tomorrow (Ωfut2, Def.8). We give an overview of some events in the σ-algebras Ωnow, Ωfut1 and Ωfut2; see theorems (22) and (23). The given events are necessary for creating our next functions. The implementations take the form Ωnow ⊂ Ωfut1 ⊂ Ωfut2; see theorem (24). This expresses the growth of information from now into the future: 1 = now, 2 = tomorrow, 3 = the day after tomorrow. We install functions f : {1, 2, 3, 4} → ℝ as follows: f1 : x → 100 for all x ∈ dom f, see theorem (36); f2 : x → 80 for x = 1 or x = 2 and f2 : x → 120 for x = 3 or x = 4, see theorem (37); f3 : x → 60 for x = 1, f3 : x → 80 for x = 2, f3 : x → 100 for x = 3 and f3 : x → 120 for x = 4, see theorem (38). These functions are real random variables: f1 over Ωnow, f2 over Ωfut1, f3 over Ωfut2, see theorems (46), (43) and (40). We can prove that these functions can be used to give an example of an adapted stochastic process; see theorem (49). We want to give an interpretation to these functions: suppose you have an equity A which now (= w1) has the value 100. Tomorrow A changes depending on which scenario occurs, e.g. another marketing strategy. In scenario 1 (= w11) it has the value 80, in scenario 2 (= w12) it has the value 120. The day after tomorrow A changes again. In scenario 1 (= w111) it has the value 60, in scenario 2 (= w112) the value 80, in scenario 3 (= w121) the value 100 and in scenario 4 (= w122) it has the value 120. For a visualization refer to the scenario tree. The sets w1, w11, w12, w111, w112, w121, w122, which are subsets of {1, 2, 3, 4} (see (22)), tell us which market scenario occurs, and the functions give the values in the relevant market scenario. For a better understanding of the definition of the random variable and its relation to the functions refer to [7], p. 20. For the proof that certain sets are σ-fields refer to [7], pp. 10-11 and [9], pp. 1-2. This article is the next step towards the arbitrage opportunity. If you use, for example, a simple probability measure (see [3], pp. 28-34, and [6], p. 6 and p. 232), you can calculate whether an arbitrage opportunity exists or not. Note that the example given in [3] needs 8 pieces of information instead of the 4 in our model. If we want to code the first 3 given time points into our model we would have the graph given by theorems (47), (44) and (41). The function for the "Call-Option" is given in [3], p. 28; the function is realized in Def.5. As background, more examples of using the definition of filtration are given in [9], pp. 185-188.
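As a rough illustration of the worked example, here is a minimal Python sketch (not the Mizar formalization; the partition encoding is my own reading of the abstract). The three functions are constant on the blocks of successively finer partitions of {1, 2, 3, 4}, which is exactly the adaptedness being verified.

```python
# A minimal sketch of the worked example: an adapted process on the sample
# space {1, 2, 3, 4}, where the sigma-algebras "now", "tomorrow" and "the day
# after tomorrow" are generated by successively finer partitions.

omega = {1, 2, 3, 4}

# Partitions generating the three sigma-algebras (growing information).
partitions = [
    [{1, 2, 3, 4}],              # Omega_now
    [{1, 2}, {3, 4}],            # Omega_fut1
    [{1}, {2}, {3}, {4}],        # Omega_fut2
]

# Equity values at the three time points (cf. theorems (36)-(38)).
f = [
    {1: 100, 2: 100, 3: 100, 4: 100},   # f1
    {1: 80, 2: 80, 3: 120, 4: 120},     # f2
    {1: 60, 2: 80, 3: 100, 4: 120},     # f3
]
assert all(set(func) == omega for func in f)

def measurable(func, partition):
    """A function on omega is measurable with respect to the sigma-algebra
    generated by a partition iff it is constant on every block."""
    return all(len({func[w] for w in block}) == 1 for block in partition)

# Adaptedness: each f_t is measurable with respect to the t-th sigma-algebra.
for t, (func, part) in enumerate(zip(f, partitions), start=1):
    print(f"f{t} measurable at level {t}:", measurable(func, part))
```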


2021 ◽  
Author(s):  
El ghazi Imad

Abstract We prove in this short paper that the stochastic process defined by $$Y_{t} := \frac{X_{t+1}}{\mathbb{E}\left[ X_{t+1}\right]},\quad t \geq a > 1,$$ is an increasing process for the convex order, where $X_t$ is a random variable taking values in $\mathbb{N}$ with probability $P(X_t = n) = n^{-t}/\zeta(t)$ and $\zeta(t) = \sum_{k=1}^{+\infty} 1/k^{t}$ for all $t > 1$.
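A minimal numerical sketch of the objects involved, assuming the distribution described is the standard zeta (Zipf) law; the convex-order result itself is what the paper proves and is not checked here.

```python
# Sketch: X_t follows the zeta (Zipf) distribution P(X_t = n) = n^{-t}/zeta(t),
# and Y_t = X_{t+1} / E[X_{t+1}] is the normalized process from the abstract.

import numpy as np
from scipy.special import zeta
from scipy.stats import zipf

def mean_X(s):
    # E[X_s] = zeta(s - 1) / zeta(s), finite for s > 2.
    return zeta(s - 1) / zeta(s)

def sample_Y(t, size, seed=None):
    # Samples of X_{t+1} come from scipy's zipf, whose pmf is exactly
    # n^{-a} / zeta(a) with a = t + 1.
    rng = np.random.default_rng(seed)
    x = zipf.rvs(t + 1, size=size, random_state=rng)
    return x / mean_X(t + 1)

t = 2.5                       # any t >= a > 1
samples = sample_Y(t, size=100_000, seed=0)
print("E[X_{t+1}] =", mean_X(t + 1))
print("sample mean of Y_t (close to 1 by construction):", samples.mean())
```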


2017 ◽  
Vol 43 (3) ◽  
pp. 259-285 ◽  
Author(s):  
Yang Liu ◽  
Ji Seung Yang

The uncertainty arising from item parameter estimation is often not negligible and must be accounted for when calculating latent variable (LV) scores in item response theory (IRT). This is particularly so when the calibration sample size is limited and/or the calibration IRT model is complex. In the current work, we treat two-stage IRT scoring as a predictive inference problem: The target of prediction is a random variable that follows the true posterior of the LV conditional on the response pattern being scored. Various Bayesian, fiducial, and frequentist prediction intervals of LV scores, which can be obtained from a simple yet generic Monte Carlo recipe, are evaluated and contrasted via simulations based on several measures of prediction quality. An empirical data example is also presented to illustrate the use of candidate methods.
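A hedged sketch of what such a generic Monte Carlo recipe can look like for a 2PL model; the item parameter estimates, their covariance, and the response pattern below are hypothetical placeholders, not values from the article.

```python
# Sketch: propagate item parameter uncertainty into an interval for the latent
# score by (1) drawing item parameters from an approximate calibration
# distribution, (2) drawing theta from its posterior given those parameters,
# and (3) taking percentiles of the pooled theta draws.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration output: (discrimination a, difficulty b) per item,
# with a crude normal approximation to the calibration uncertainty.
est = np.array([[1.2, -0.5], [0.9, 0.3], [1.5, 0.8], [1.1, -1.0]])
cov = 0.02 * np.eye(est.size)              # illustrative only

resp = np.array([1, 1, 0, 1])              # response pattern being scored
grid = np.linspace(-4, 4, 201)             # quadrature grid for theta

def posterior_draw(a, b):
    """Draw one theta from its posterior given item parameters (a, b),
    using a standard normal prior and a grid approximation."""
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (grid[None, :] - b[:, None])))
    like = np.prod(np.where(resp[:, None] == 1, p, 1.0 - p), axis=0)
    post = like * np.exp(-0.5 * grid**2)
    post /= post.sum()
    return rng.choice(grid, p=post)

draws = []
for _ in range(2000):
    pars = rng.multivariate_normal(est.ravel(), cov).reshape(est.shape)
    draws.append(posterior_draw(pars[:, 0], pars[:, 1]))

lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% prediction interval for the latent score: [{lo:.2f}, {hi:.2f}]")
```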


1982 ◽  
Vol 14 (02) ◽  
pp. 257-271 ◽  
Author(s):  
D. J. Daley ◽  
J. Haslett

The stochastic process {X_n} satisfying X_{n+1} = max{Y_{n+1} + αβX_n, βX_n}, where {Y_n} is a stationary sequence of non-negative random variables and 0 < β < 1, can be regarded as a simple thermal energy storage model with controlled input. Attention is mostly confined to the study of μ = EX, where the random variable X has the stationary distribution of {X_n}. Even for special cases such as i.i.d. Y_n or α = 0, little explicit information appears to be available on the distribution of X or on μ. Accordingly, bounding techniques that have been exploited in queueing theory are used to study μ. The various bounds are illustrated numerically in a range of special cases.
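A small simulation sketch of the recursion, under assumed parameter values and i.i.d. exponential input; this only estimates μ by Monte Carlo and does not reproduce the paper's bounding techniques.

```python
# Sketch: iterate X_{n+1} = max(Y_{n+1} + alpha*beta*X_n, beta*X_n) with i.i.d.
# exponential inputs Y_n and estimate mu = E[X] from the stationary regime.

import numpy as np

rng = np.random.default_rng(42)

alpha, beta = 0.5, 0.8                   # assumed parameters, 0 < beta < 1
n_steps, burn_in = 200_000, 10_000

y = rng.exponential(scale=1.0, size=n_steps)   # non-negative stationary input
x = np.empty(n_steps)
x[0] = 0.0
for n in range(n_steps - 1):
    x[n + 1] = max(y[n + 1] + alpha * beta * x[n], beta * x[n])

mu_hat = x[burn_in:].mean()
print(f"Monte Carlo estimate of mu = E[X]: {mu_hat:.3f}")
```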


1969 ◽  
Vol 6 (02) ◽  
pp. 409-418 ◽  
Author(s):  
Eugene Lukacs

Let X(t) be a stochastic process whose parameter t runs over a finite or infinite interval T. Let t_1, t_2 ∈ T, t_1 < t_2; the random variable X(t_2) − X(t_1) is called the increment of the process X(t) over the interval [t_1, t_2]. A process X(t) is said to be homogeneous if the distribution function of the increment X(t + τ) − X(t) depends only on the length τ of the interval and is independent of the endpoint t. Two intervals are said to be non-overlapping if they have no interior point in common. A process X(t) is called a process with independent increments if the increments over non-overlapping intervals are stochastically independent. A process X(t) is said to be continuous at the point t if plim_{τ→0} [X(t + τ) − X(t)] = 0, that is, if for any ε > 0, lim_{τ→0} P(|X(t + τ) − X(t)| > ε) = 0. A process is continuous in an interval [A, B] if it is continuous at every point of [A, B].
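A concrete illustration (my own, not from the article): the rate-λ Poisson process is homogeneous with independent increments and is continuous in probability at every t, since P(|X(t + τ) − X(t)| > ε) = 1 − e^{−λτ} → 0 as τ → 0, even though its sample paths jump.

```python
# Sketch: the increment of a rate-lam Poisson process over an interval of
# length tau is Poisson(lam * tau), independent of the endpoint t, and
# P(|increment| > eps) = 1 - exp(-lam * tau) -> 0 as tau -> 0.

import numpy as np

lam, eps = 2.0, 0.5
rng = np.random.default_rng(0)

for tau in [1.0, 0.1, 0.01, 0.001]:
    increments = rng.poisson(lam * tau, size=100_000)
    empirical = np.mean(increments > eps)
    exact = 1.0 - np.exp(-lam * tau)
    print(f"tau={tau:6.3f}  P(|increment| > eps) ~ {empirical:.4f}  (exact {exact:.4f})")
```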


1981 ◽  
Vol 18 (01) ◽  
pp. 31-41
Author(s):  
Naftali A. Langberg

A group of n susceptible individuals exposed to a contagious disease is considered. It is assumed that at each instant in time one or more susceptible individuals can contract the disease. The progress of this epidemic is modeled by a stochastic process X_n(t), t ∈ [0, ∞), representing the number of infective individuals at time t. It is shown that X_n(t), with suitable standardization and under a mild condition, converges in distribution as n → ∞ to a normal random variable for all t ∈ (0, t_0), where t_0 is an identifiable number.
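A sketch conveying the flavour of such a result with a stand-in model (a simple Markovian SI epidemic started from a positive fraction of infectives); the article's exact model and conditions are not reproduced here.

```python
# Sketch: simulate the number of infectives at a fixed time t for many
# replicates of a simple SI epidemic and inspect the standardized values for
# approximate normality.

import numpy as np
from scipy import stats

def si_epidemic(n, i0, beta, t_end, rng):
    """Simple Markovian SI epidemic: with i infectives and n - i susceptibles,
    the next infection occurs at rate beta * i * (n - i) / n."""
    t, i = 0.0, i0
    while i < n:
        rate = beta * i * (n - i) / n
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            break
        i += 1
    return i

rng = np.random.default_rng(3)
n, beta, t_end, reps = 2000, 1.0, 1.0, 2000
x = np.array([si_epidemic(n, n // 10, beta, t_end, rng) for _ in range(reps)])
z = (x - x.mean()) / x.std()
print("skewness:", stats.skew(z), " excess kurtosis:", stats.kurtosis(z))
```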


2014 ◽  
Vol 2014 ◽  
pp. 1-25 ◽  
Author(s):  
M.-C. Casabán ◽  
J.-C. Cortés ◽  
J.-V. Romero ◽  
M.-D. Roselló

Deterministic differential equations are useful tools for mathematical modelling. Incorporating uncertainty into their formulation leads to random differential equations. Solving a random differential equation means computing not only its solution stochastic process but also its main statistical functions, such as the expectation and standard deviation. The determination of its first probability density function provides a more complete probabilistic description of the solution stochastic process at each time instant. In this paper, a comprehensive study is presented to determine the first probability density function of the solution of linear random initial value problems, taking advantage of the so-called random variable transformation method. For the sake of clarity, the study has been split into thirteen cases depending on the way randomness enters the linear model. In most cases, the analysis includes the specification of the domain of the first probability density function of the solution stochastic process, whose determination is a delicate issue. A strong point of the study is the presentation of a wide range of examples, at least one for each of the thirteen cases, where both standard and nonstandard probability distributions are considered.
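A minimal sketch of one simple instance of the random variable transformation method (deterministic coefficient, random initial condition); the distribution and parameter values below are illustrative assumptions, not one of the paper's thirteen cases verbatim.

```python
# Sketch: the linear IVP x'(t) = a x(t), x(0) = X0 with deterministic a and
# random X0 has solution process X(t) = X0 * exp(a t). The random variable
# transformation (RVT) method then gives the first probability density
# f(x, t) = f_{X0}(x * exp(-a t)) * exp(-a t), cross-checked here by Monte Carlo.

import numpy as np
from scipy import stats

a = 0.7                                     # assumed deterministic coefficient
x0_dist = stats.norm(loc=1.0, scale=0.2)    # assumed density of X0

def pdf_solution(x, t):
    """First PDF of X(t) obtained via the RVT method."""
    return x0_dist.pdf(x * np.exp(-a * t)) * np.exp(-a * t)

t = 1.0
rng = np.random.default_rng(7)
samples = x0_dist.rvs(size=200_000, random_state=rng) * np.exp(a * t)
hist, edges = np.histogram(samples, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max |MC histogram - RVT density| =",
      np.max(np.abs(hist - pdf_solution(centers, t))))
```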


1999 ◽  
Vol 36 (1) ◽  
pp. 132-138
Author(s):  
M. P. Quine ◽  
W. Szczotka

We define a stochastic process {X_n} based on partial sums of a sequence of integer-valued random variables (K_0, K_1, …). The process can be represented as an urn model, which is a natural generalization of a gambling model used in the first published exposition of the criticality theorem of the classical branching process. A special case of the process is also of interest in the context of a self-annihilating branching process. Our main result is that when (K_1, K_2, …) are independent and identically distributed, with mean a ∈ (1, ∞), there exist constants {c_n} with c_{n+1}/c_n → a as n → ∞ such that X_n/c_n converges almost surely to a finite random variable which is positive on the event {X_n ↛ 0}. The result is extended to the case of exchangeable summands.
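For the classical fact the abstract alludes to, here is an illustrative simulation of the standard supercritical Galton-Watson process (not the authors' urn process): with offspring mean m > 1, Z_n/m^n converges almost surely to a finite limit that is positive on the survival event.

```python
# Sketch: normalize a supercritical Galton-Watson process by m^n. The offspring
# law here is Poisson(m), an assumption chosen so the limit is positive on
# survival; extinct runs produce a limit of 0.

import numpy as np

rng = np.random.default_rng(11)
m, generations = 1.5, 25

for _ in range(5):
    z = 1
    for n in range(generations):
        z = rng.poisson(m, size=z).sum() if z > 0 else 0
    w = z / m**generations
    print(f"Z_n / m^n after {generations} generations: {w:.3f}"
          + ("  (extinct)" if z == 0 else ""))
```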

