Can Network Models Represent Personality Structure and Processes Better than Trait Models Do?

2012 ◽  
Vol 26 (4) ◽  
pp. 444-445 ◽  
Author(s):  
Tobias Rothmund ◽  
Anna Baumert ◽  
Manfred Schmitt

We argue that replacing the trait model with the network model proposed in the target article would be premature for three reasons. (i) If properly specified and grounded in substantive theories, the classic state–trait model provides a flexible framework for the description and explanation of person × situation transactions. (ii) Without additional substantive theories, the network model cannot guide the identification of personality components. (iii) Without assumptions about psychological processes that account for causal links among personality components, the concept of equilibrium has merely descriptive value and lacks explanatory power. Copyright © 2012 John Wiley & Sons, Ltd.

Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5060
Author(s):  
Liu ◽  
Cheng ◽  
Du ◽  
Luo ◽  
Zhang ◽  
...  

Smoke detection technology based on computer vision is a popular research direction in fire detection. This technology is widely used in outdoor fire detection fields (e.g., forest fire detection). Smoke detection is often based on features such as color, shape, texture, and motion to distinguish between smoke and non-smoke objects. However, the salience and robustness of these features are not sufficiently strong, resulting in low smoke detection performance in complex environments. Deep learning technology has improved smoke detection performance to a certain degree, but extracting smoke detail features is difficult when the number of network layers is small. Because smoke motion characteristics are not used effectively, indicators such as the false alarm rate remain high in video smoke detection. To enhance the detection of smoke objects in videos, this paper proposes the concept of a change-cumulative image, created by converting the YUV color space of multi-frame video images into a single image that represents the motion and color-change characteristics of smoke. Then, a fusion deep network is designed, which increases the depth of the VGG16 network by arranging two convolutional layers after each of its convolutional layers. The VGG16 and ResNet50 (deep residual network) models are also combined in the fusion deep network to improve feature expression ability while increasing the depth of the whole network. Doing so helps extract additional discriminating characteristics of smoke.
Experimental results show that by using the change-cumulative image as the input image of the deep network model, smoke detection performance is superior to the classic RGB input image; the smoke detection performance of the fusion deep network model is better than that of the single VGG16 and Resnet50 network models; the smoke detection accuracy, false positive rate, and false alarm rate of this method are better than those of the current popular methods of video smoke detection.
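The abstract describes the change-cumulative image only at a high level; below is a minimal numpy sketch of one plausible reading, in which absolute inter-frame differences in YUV space are accumulated over a clip. The conversion weights and function names are our assumptions, not the paper's code.

```python
import numpy as np

def rgb_to_yuv(frame):
    """Convert an (H, W, 3) RGB float frame to YUV (BT.601 weights)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return frame @ m.T

def change_cumulative_image(frames):
    """Accumulate absolute inter-frame YUV changes over a clip.

    frames: list of (H, W, 3) RGB arrays in [0, 1].
    Returns an (H, W, 3) image in which large values mark pixels whose
    colour changed persistently -- a plausible motion/colour-change cue
    that a deep network could take as input instead of a raw RGB frame.
    """
    yuv = [rgb_to_yuv(f) for f in frames]
    acc = np.zeros_like(yuv[0])
    for prev, cur in zip(yuv, yuv[1:]):
        acc += np.abs(cur - prev)
    return acc / (len(frames) - 1)  # normalise by number of transitions
```

Static background pixels accumulate nothing, so the resulting image suppresses non-moving clutter before the network ever sees it.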


2020 ◽  
Author(s):  
Alexander P. Christensen ◽  
Hudson Golino

The nature of associations between variables is important for constructing theory about psychological phenomena. In the last decade, this topic has received renewed interest with the introduction of psychometric network models. In psychology, these models are often contrasted with latent variable (e.g., factor) models. Recent research has shown that differences between the two tend to be more substantive than statistical. One recently developed algorithm, the Loadings Comparison Test (LCT), was developed to predict whether data were generated from a random, factor, or network model. A significant limitation of the current LCT implementation is that it is based on heuristics derived from descriptive statistics. In the present study, we used artificial neural networks to replace these heuristics and to develop a more robust and generalizable algorithm. We performed a simulation study that compared the neural networks to the original LCT algorithm, as well as to logistic regression models trained on the same data. We found that the neural networks performed as well as or better than both methods, demonstrating generalizability across data-generating models. We echo the call for more formal theories about the relations between variables and discuss the role of the LCT in this process.
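The LCT's final step, classifying the data-generating model from summary features, can be illustrated with a toy example. The two "loading-summary" features and class profiles below are invented for illustration and are not the actual LCT statistics; a softmax classifier (a neural network with no hidden layer) is trained by gradient descent to separate three simulated classes standing in for random, factor, and network data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for loading-summary features: each generating
# model (0 = random, 1 = factor, 2 = network) yields a different typical
# feature profile.  Real LCT features come from fitted factor loadings.
means = {0: [0.0, 0.0], 1: [2.0, 0.5], 2: [0.5, 2.0]}
X = np.vstack([rng.normal(means[c], 0.5, size=(100, 2)) for c in range(3)])
y = np.repeat([0, 1, 2], 100)

# Softmax (multinomial logistic) classifier trained by gradient descent.
W = np.zeros((2, 3))
b = np.zeros(3)
Y = np.eye(3)[y]                       # one-hot targets
for _ in range(500):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)  # class probabilities
    grad = p - Y                       # gradient of cross-entropy
    W -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean(axis=0)

acc = ((X @ W + b).argmax(axis=1) == y).mean()
```

Adding hidden layers to this skeleton gives the kind of network the study trains, and dropping them back to a single linear layer recovers the logistic-regression baseline it compares against.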


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Naomi A. Arnold ◽  
Raul J. Mondragón ◽  
Richard G. Clegg

Discriminating between competing explanatory models as to which is more likely responsible for the growth of a network is a problem of fundamental importance for network science. The rules governing this growth are attributed to mechanisms such as preferential attachment and triangle closure, with a wealth of explanatory models based on these. These models are deliberately simple, commonly with the network growing according to a constant mechanism for its lifetime, to allow for analytical results. We use a likelihood-based framework on artificial data where the network model changes at a known point in time and demonstrate that we can recover the change point from analysis of the network. We then use real datasets and demonstrate how our framework can show the changing importance of network growth mechanisms over time.
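The likelihood framework itself is not given in this abstract; below is a generic sketch of the change-point scan it implies, assuming the per-event log-likelihoods of each growth event (edge arrival) under the two candidate mechanisms have already been computed:

```python
import numpy as np

def best_change_point(ll_a, ll_b):
    """Pick the split maximising total log-likelihood when the growth
    mechanism switches from model A to model B at the split.

    ll_a, ll_b: per-event log-likelihoods of each growth event under
    the two candidate mechanisms (e.g., preferential attachment vs
    triangle closure).
    Returns (tau, best_ll): events [0, tau) scored under A, [tau, n)
    under B.
    """
    ll_a, ll_b = np.asarray(ll_a, float), np.asarray(ll_b, float)
    n = len(ll_a)
    cum_a = np.concatenate([[0.0], np.cumsum(ll_a)])  # A up to each tau
    cum_b = np.concatenate([[0.0], np.cumsum(ll_b)])  # B running totals
    totals = cum_a + (cum_b[n] - cum_b)               # one value per tau
    tau = int(np.argmax(totals))
    return tau, float(totals[tau])
```

Scanning every split in one vectorised pass keeps the search linear in the number of events, which matters when each event is an edge arrival in a large growing network.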


2021 ◽  
Vol 14 (3) ◽  
pp. 96
Author(s):  
Nina Ryan ◽  
Xinfeng Ruan ◽  
Jin E. Zhang ◽  
Jing A. Zhang

In this paper, we test the applicability of different Fama–French (FF) factor models in Vietnam, investigate the redundancy of the value factor, and examine the choice of the profitability factor. Our empirical evidence shows that the FF five-factor model has more explanatory power than the FF three-factor model. The value factor remains important after the inclusion of profitability and investment factors. Operating profitability performs better than cash and return-on-equity (ROE) profitability as a proxy for the profitability factor in FF factor modeling. The value factor and operating profitability have the biggest marginal contribution to the maximum squared Sharpe ratio of the five-factor model factors, highlighting the non-redundancy of the value factor (HML) in describing stock returns in Vietnam.
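Tests of FF factor models rest on ordinary time-series regressions of portfolio excess returns on the factor returns; a minimal sketch (the data and function name here are ours, not the paper's):

```python
import numpy as np

def ff5_regression(excess_ret, factors):
    """OLS time-series regression of a portfolio's excess returns on the
    five FF factors (MKT, SMB, HML, RMW, CMA).

    excess_ret: (T,) portfolio return minus the risk-free rate.
    factors: (T, 5) factor return matrix.
    Returns (alpha, betas); an alpha near zero means the factors span
    the portfolio's average return.
    """
    T = len(excess_ret)
    X = np.column_stack([np.ones(T), factors])  # intercept + factors
    coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coef[0], coef[1:]
```

A factor is judged redundant when dropping it leaves the alphas of test portfolios essentially unchanged, which is the logic behind the paper's HML and profitability comparisons.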


2000 ◽  
Vol 78 (2) ◽  
pp. 320-326 ◽  
Author(s):  
Frank AM Tuyttens

The algebraic relationships, underlying assumptions, and performance of the recently proposed closed-subpopulation method are compared with those of other commonly used methods for estimating the size of animal populations from mark-recapture records. In its basic format the closed-subpopulation method is similar to the Manly-Parr method and less restrictive than the Jolly-Seber method. Computer simulations indicate that the accuracy and precision of the population estimators generated by the basic closed-subpopulation method are almost comparable to those generated by the Jolly-Seber method, and generally better than those of the minimum-number-alive method. The performance of all these methods depends on the capture probability, the number of previous and subsequent trapping occasions, and whether the population is demographically closed or open. Violation of the assumption of equal catchability causes a negative bias that is more pronounced for the closed-subpopulation and Jolly-Seber estimators than for the minimum-number-alive. The closed-subpopulation method provides a simple and flexible framework for illustrating that the precision and accuracy of population-size estimates can be improved by incorporating evidence, other than mark-recapture data, of the presence of recognisable individuals in the population (from radiotelemetry, mortality records, or sightings, for example) and by exploiting specific characteristics of the population concerned.
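Of the estimators compared, the minimum-number-alive method is simple enough to sketch directly (the closed-subpopulation estimator is more involved and not attempted here):

```python
def minimum_number_alive(capture_histories):
    """Minimum-number-alive (MNA) estimate per trapping occasion.

    capture_histories: one 0/1 list per animal, one entry per occasion.
    An animal is known to be alive at occasion t if it was caught at t,
    or was caught both before and after t.  Summing known-alive animals
    gives a lower bound on population size at each occasion.
    """
    n_occ = len(capture_histories[0])
    mna = [0] * n_occ
    for hist in capture_histories:
        caught = [t for t, c in enumerate(hist) if c]
        if not caught:
            continue
        for t in range(caught[0], caught[-1] + 1):
            mna[t] += 1
    return mna
```

Because it only counts animals whose presence is certain, MNA is negatively biased whenever capture probability is below one, which is consistent with the simulation results described above.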


2011 ◽  
Vol 105 (2) ◽  
pp. 757-778 ◽  
Author(s):  
Malte J. Rasch ◽  
Klaus Schuch ◽  
Nikos K. Logothetis ◽  
Wolfgang Maass

A major goal of computational neuroscience is the creation of computer models for cortical areas whose response to sensory stimuli resembles that of cortical areas in vivo in important aspects. It is seldom considered whether the simulated spiking activity is realistic (in a statistical sense) in response to natural stimuli. Because certain statistical properties of spike responses were suggested to facilitate computations in the cortex, acquiring a realistic firing regimen in cortical network models might be a prerequisite for analyzing their computational functions. We present a characterization and comparison of the statistical response properties of the primary visual cortex (V1) in vivo and in silico in response to natural stimuli. We recorded from multiple electrodes in area V1 of 4 macaque monkeys and developed a large state-of-the-art network model for a 5 × 5-mm patch of V1 composed of 35,000 neurons and 3.9 million synapses that integrates previously published anatomical and physiological details. By quantitative comparison of the model response to the “statistical fingerprint” of responses in vivo, we find that our model for a patch of V1 responds to the same movie in a way that matches the statistical structure of the recorded data surprisingly well. The deviation between the firing regimen of the model and the in vivo data is on the same level as deviations among monkeys and sessions. This suggests that, despite strong simplifications and abstractions of cortical network models, they are nevertheless capable of generating realistic spiking activity. 
To reach a realistic firing state, it was not only necessary to include both N-methyl-d-aspartate and GABAB synaptic conductances in our model, but also to markedly increase the strength of excitatory synapses onto inhibitory neurons (>2-fold) in comparison to literature values, hinting at the importance of carefully adjusting the effect of inhibition for achieving realistic dynamics in current network models.
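A "statistical fingerprint" is built from standard spike-train statistics; the abstract does not list which ones were used, but two common ingredients are easy to compute from trial-by-trial spike counts:

```python
import numpy as np

def fano_factor(counts):
    """Variance-to-mean ratio of per-trial spike counts; equals ~1 for a
    Poisson process, so deviations from 1 characterise firing regularity."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()

def noise_correlation(counts_a, counts_b):
    """Pearson correlation of two neurons' trial-by-trial spike counts,
    a common measure of shared response variability."""
    return float(np.corrcoef(counts_a, counts_b)[0, 1])
```

Comparing such statistics between model and recordings, neuron by neuron, is one way the kind of quantitative model-versus-data match described above can be made concrete.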


2019 ◽  
Author(s):  
Tim Vantilborgh

This chapter introduces the individual Psychological Contract (iPC) network model as an alternative approach to study psychological contracts. This model departs from the basic idea that a psychological contract forms a mental schema containing obligated inducements and contributions, which are exchanged for each other. This mental schema is captured by a dynamic network, in which the nodes represent the inducements and contributions and the ties represent the exchanges. Building on dynamic systems theory, I propose that these networks evolve over time towards attractor states, both at the level of the network structure and at the level of the nodes (i.e., breach and fulfilment attractor states). I highlight how the iPC-network model integrates recent theoretical developments in the psychological contract literature and explain how it may advance scholars' understanding of exchange relationships. In particular, I illustrate how iPC-network models allow researchers to study the actual exchanges in the psychological contract over time, while acknowledging its idiosyncratic nature. This would allow for more precise predictions of the consequences of psychological contract breach and fulfilment and would explain how the content and process of the psychological contract continuously influence each other.


Author(s):  
Soha Abd Mohamed El-Moamen ◽  
Marghany Hassan Mohamed ◽  
Mohammed F. Farghally

The need for real-time tracking and evaluation of patients has increased interest in recognizing people's actions to enhance care facilities. Deep learning is well suited both to processing large healthcare datasets at a rapid pace and to making accurate predictions for early lung cancer detection. In this paper, we propose a constructive deep neural network with Apache Spark to classify images and levels of lung cancer. We developed a binary classification model using a threshold technique to classify nodules as benign or malignant. In the proposed framework, training of the neural network models, defined using the Keras API, is performed with BigDL on a distributed Spark cluster. The proposed algorithm achieves an AUC of 0.9810 and a misclassification rate from which it is shown that our suggested classifiers perform better than other classifiers.
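The threshold technique for the binary nodule classifier is simple to sketch in numpy; the 0.5 cut-off and function names below are our assumptions, and the paper's actual Keras/BigDL/Spark pipeline is not reproduced here:

```python
import numpy as np

def threshold_classify(probs, threshold=0.5):
    """Binary nodule classification by thresholding model probabilities:
    1 = malignant, 0 = benign.  In practice the threshold can be tuned
    on validation data to trade sensitivity against specificity."""
    return (np.asarray(probs) >= threshold).astype(int)

def misclassification_rate(y_true, y_pred):
    """Fraction of predictions that disagree with the true labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true != y_pred).mean())
```

Sweeping the threshold and recording the true- and false-positive rates at each value is also how an AUC figure like the one reported is obtained.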


2020 ◽  
Author(s):  
Oksana Sorokina ◽  
Colin Mclean ◽  
Mike DR Croning ◽  
Katharina F Heil ◽  
Emilia Wysocka ◽  
...  

Synapses contain highly complex proteomes which control synaptic transmission, cognition and behaviour. Genes encoding synaptic proteins are associated with neuronal disorders, many of which show clinical co-morbidity. Our hypothesis is that there is mechanistic overlap that is emergent from the network properties of the molecular complex. Testing this requires a detailed and comprehensive molecular network model.

We integrated 57 published synaptic proteomic datasets obtained between 2000 and 2019 that describe over 7000 proteins. The complexity of the postsynaptic proteome is reaching an asymptote with a core set of ~3000 proteins; there is less data on the presynaptic terminal, where each new study reveals new components in its landscape. To complete the network, we added direct protein-protein interaction data and functional metadata, including disease association.

The resulting amalgamated molecular interaction network model is embedded in a SQLite database. The database is highly flexible, allowing the widest range of queries to derive custom network models based on metadata including species, disease association, synaptic compartment, brain region, and method of extraction.

This network model enables us to perform in-depth analyses that dissect the molecular pathways of multiple diseases, revealing shared and unique protein components. We can clearly identify common and unique molecular profiles for co-morbid neurological disorders such as Schizophrenia and Bipolar Disorder, and even disease comorbidities which span biological systems, such as the intersection of Alzheimer's Disease with Hypertension.
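A miniature of the kind of metadata query such a database supports can be sketched with Python's sqlite3; the table and column names below are our invention, not the authors' actual schema, and the three proteins are merely illustrative:

```python
import sqlite3

# Hypothetical miniature of the described database.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE protein (id INTEGER PRIMARY KEY, name TEXT, compartment TEXT);
CREATE TABLE disease_assoc (protein_id INTEGER, disease TEXT);
CREATE TABLE ppi (a INTEGER, b INTEGER);  -- protein-protein interactions
INSERT INTO protein VALUES (1,'DLG4','postsynaptic'),(2,'SYN1','presynaptic'),
                           (3,'GRIN2A','postsynaptic');
INSERT INTO disease_assoc VALUES (1,'Schizophrenia'),(3,'Schizophrenia'),
                                 (3,'Epilepsy');
INSERT INTO ppi VALUES (1,3);
""")

# Derive a custom sub-network: postsynaptic proteins linked to a disease.
rows = con.execute("""
    SELECT DISTINCT p.name FROM protein p
    JOIN disease_assoc d ON d.protein_id = p.id
    WHERE p.compartment = 'postsynaptic' AND d.disease = 'Schizophrenia'
    ORDER BY p.name
""").fetchall()
```

Restricting the protein set by metadata first and then pulling the interactions among the survivors is what makes per-disease or per-compartment network models cheap to derive.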


2020 ◽  
Author(s):  
Adela-Maria Isvoranu ◽  
Sacha Epskamp ◽  
Mike W.-L. Cheung

Post-traumatic stress disorder (PTSD) researchers have increasingly used psychological network models to investigate PTSD symptom interactions, as well as to identify central driver symptoms. It is unclear, however, how generalizable such results are. We have developed a meta-analytic framework for aggregating network studies while taking between-study heterogeneity into account, and applied this framework in the first-ever meta-analytic study of network models. We analyzed the correlational structures of 52 different samples with a total sample size of n = 29,561, estimated a single pooled network model underlying the datasets, investigated the scope of between-study heterogeneity, and assessed the performance of network models estimated from single studies. Our main findings are that: (1) While several clear symptom links and interpretable clusters can be identified in the network, most symptoms feature very similar levels of centrality. As such, aiming to identify central symptoms in PTSD symptom networks may not be fruitful. (2) We identified large between-study heterogeneity, indicating that networks from single studies should not be expected to align perfectly with one another and that meta-analytic approaches are vital for the study of PTSD networks. (3) Nonetheless, we found evidence that networks estimated from single studies may give rise to generalizable results, as our results aligned with previous descriptive analyses of reported network studies, and network models estimated from single samples led to network structures similar to the pooled network model. We discuss the implications of these findings for both the PTSD literature and the methodological literature on network psychometrics.
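Pooling the correlational structures of many samples can be illustrated with a sample-size-weighted average on Fisher's z scale; this is a fixed-effect simplification, whereas the framework described above additionally models between-study heterogeneity (random effects):

```python
import numpy as np

def pooled_correlation(corrs, ns):
    """Sample-size-weighted pooling of study correlation matrices via
    Fisher's z transform.

    corrs: list of (p, p) correlation matrices, one per study.
    ns: corresponding sample sizes (weights ~ n - 3, the inverse
    variance of a Fisher z value).
    Returns the pooled (p, p) correlation matrix, from which a pooled
    network model could then be estimated.
    """
    corrs = [np.asarray(R, dtype=float) for R in corrs]
    z = [np.arctanh(np.clip(R, -0.999, 0.999)) for R in corrs]
    w = np.asarray(ns, dtype=float) - 3.0
    z_bar = sum(wi * zi for wi, zi in zip(w, z)) / w.sum()
    R_bar = np.tanh(z_bar)
    np.fill_diagonal(R_bar, 1.0)
    return R_bar
```

Averaging on the z scale rather than the raw correlations keeps the pooled estimate unbiased for correlations near ±1, which matters for strongly linked symptom pairs.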

