Computational Approach for Personality Detection on Attributes

Author(s):  
Rohit Rastogi ◽  
Devendra Kumar Chaturvedi ◽  
Mayank Gupta

Psychologists seek to measure personality in order to analyze human behavior through a number of constructs, among them the four humor styles: self-enhancing (humor used to enhance the self), affiliative (humor used to strengthen relationships with others), aggressive (humor used to enhance the self at the expense of others), and self-defeating (humor used to enhance relationships at the expense of the self). The purpose of this chapter is to highlight the use of personality detection tests in academics, job placement, group interaction, and self-reflection. The chapter describes the use of multimedia and IoT to detect personality and analyze different human behaviors. It also covers the use of big data for storing and processing the data generated while analyzing personality through IoT. Linear regression and multiple linear regression proved to be the best-performing models, so they can be used to predict the personality of individuals; the decision tree regression model achieved the lowest accuracy in comparison to the others.
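The prediction step described above can be sketched as follows. This is a minimal, hypothetical example of multiple linear regression via least squares; the feature names and numbers are illustrative, since the chapter's actual questionnaire data are not shown here:

```python
import numpy as np

# Toy data: rows are respondents, columns are three questionnaire item
# scores (illustrative, not from the chapter).
X = np.array([[3., 4, 2], [5, 1, 4], [2, 5, 1], [4, 3, 3], [1, 2, 5]])
y = np.array([3.2, 4.1, 2.5, 3.6, 2.9])  # observed trait scores

A = np.column_stack([np.ones(len(X)), X])     # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares coefficients

new = np.array([1., 3, 3, 3])                 # intercept term + new answers
prediction = float(new @ coef)                # predicted trait score
```

A multiple linear regression fit like this yields one coefficient per item plus an intercept, and new respondents are scored with a single dot product.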

2017 ◽  
Vol 113 (1/2) ◽  
Author(s):  
Richard G. Cowden ◽  

Abstract This study examined the relationship between mental toughness (MT) and self-awareness in a sample of 175 male and 158 female South African tennis athletes (mean age = 29.09 years, s.d. = 14.00). The participants completed the Sport Mental Toughness Questionnaire and the Self-Reflection and Insight Scale to assess MT (confidence, constancy, control) and self-awareness (self-reflection and self-insight) dimensions, respectively. Linear regression indicated that self-insight (β=0.49), but not self-reflection (β=0.02), predicted global MT. Multivariate regression analyses were significant for self-reflection (ηp²=0.11) and self-insight (ηp²=0.24). Self-reflection predicted confidence and constancy (ηp²=0.05 and 0.06, respectively), whereas self-insight predicted all three MT subcomponents (ηp²=0.12 to 0.14). The findings extend prior qualitative research evidence supporting the relevance of self-awareness to the MT of competitive tennis athletes, with self-reflection and insight forming prospective routes through which athletes’ MT may be developed.
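As an illustration of the standardized coefficients (β) reported above, a β for a single predictor equals the slope of a regression on z-scored variables. The data below are synthetic stand-ins, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
insight = rng.normal(size=200)                         # synthetic predictor
mt = 0.5 * insight + rng.normal(scale=0.8, size=200)   # synthetic outcome

z = lambda v: (v - v.mean()) / v.std()                 # z-score a variable
beta = np.polyfit(z(insight), z(mt), 1)[0]             # slope on z-scores = β
```

With one predictor, β coincides with the Pearson correlation, which is why it is directly comparable across predictors measured on different scales.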


Author(s):  
Luz María Hernández-Cruz ◽  
Diana Concepción Mex-Alvarez ◽  
Guadalupe Manuel Estrada-Segovia ◽  
Margarita Castillo-Tellez

Currently, email is the most widely used network service for sending and receiving messages and files. The objective of this study is to analyze institutional emails by applying a strategy that verifies the existence of bilateral communication between employees. The research is applied in nature, allowing the prediction of assertive working groups with prosperous and productive labor relations. The study combines a big data tool called Immersion with a simple linear regression analysis carried out in Microsoft Office Excel. The adopted methodology is composed of three phases: first, "Data Collection," where a large volume of (personal) data is gathered from an institutional email account for the case study; then "Analysis," where a simple linear regression model is constructed to analyze the relationship between the collected data; and finally, "Interpretation," where the obtained results are explained. The approach has important applications such as the formation of academic groups, thematic networks, disciplinary committees, and collaborative project teams.
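The simple linear regression phase described above can be sketched outside Excel as well; the variables below (messages sent vs. replies received per employee) are assumptions for illustration, not the study's data:

```python
import numpy as np

# Hypothetical per-employee counts over a fixed period.
sent = np.array([10, 25, 40, 55, 70], dtype=float)
replies = np.array([4, 11, 18, 22, 30], dtype=float)

slope, intercept = np.polyfit(sent, replies, 1)  # least-squares line,
                                                 # as Excel's trendline does
r = np.corrcoef(sent, replies)[0, 1]             # correlation: strength of
                                                 # the bilateral relationship
```

A positive slope with a strong correlation would indicate reciprocated communication; near-zero or negative values would flag one-sided exchanges.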


Author(s):  
Andrew Smith

In ‘Reading the Gothic and Gothic Readers’ Andrew Smith outlines how recent developments in Gothic studies have provided new ways of critically reflecting upon the nineteenth century. Smith then explores how readers and reading, as images of self-reflection, are represented in the fin de siècle Gothic. The self-reflexive nature of the late nineteenth-century Gothic demonstrates a level of political and cultural scepticism at work in the period which, Smith argues, can be brought into dialogue with animal studies, a hitherto largely overlooked critical paradigm for the Gothic. To that end this chapter examines representations of reading, readers, and implied readers in Arthur Machen’s The Great God Pan (1894), Bram Stoker’s Dracula (1897), and Arthur Conan Doyle’s The Hound of the Baskervilles (1902), focusing on how these representations explore the relationship between the human and the non-human. An extended account of Dracula identifies ways in which these images of self-reflection relate to the presence of the inner animal, and more widely the chapter argues for a way of rethinking the period within the context of animal studies via these ostensibly Gothic constructions of human and animal identities.


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Thérence Nibareke ◽  
Jalal Laassiri

Abstract Introduction Nowadays, large data volumes are generated daily at a high rate. Data from health systems, social networks, finance, government, marketing, and bank transactions, as well as from sensors and smart devices, are increasing, so tools and models have to be optimized. In this paper we applied and compared machine learning algorithms (Linear Regression, Naïve Bayes, Decision Tree) to predict diabetes. Furthermore, we performed analytics on flight delays. The main contribution of this paper is to give an overview of big data tools and machine learning models. We highlight some metrics that allow us to choose a more accurate model. We predict diabetes using three machine learning models and compare their performance. We also analyze flight delays and produce a dashboard that can help managers of flight companies gain a 360° view of their flights and take strategic decisions. Case description We applied three machine learning algorithms for predicting diabetes and compared their performance to see which model gives the best results. We performed analytics on flight datasets to support decision making and predict flight delays. Discussion and evaluation The experiment shows that Linear Regression, Naïve Bayes, and Decision Tree give the same accuracy (0.766), but Decision Tree outperforms the other two models with the greatest score (1) and the smallest error (0). For the flight delay analytics, the model could show, for example, the airport that recorded the most flight delays. Conclusions Several tools and machine learning models for big data analytics have been discussed in this paper. We conclude that, even for the same datasets, the model used for prediction must be chosen carefully. In future work, we will test different models in other fields (climate, banking, insurance).
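The model-comparison workflow described above can be sketched with scikit-learn. A synthetic dataset stands in for the diabetes data, and logistic regression stands in for the paper's linear model, since the target here is a class label:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary-outcome data standing in for the diabetes dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}
# Fit each model on the training split and score it on the held-out split.
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

Comparing held-out accuracy (rather than training-set score) is what guards against the kind of optimistic result the paper notes for the decision tree, which scores perfectly on data it has memorized.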


Author(s):  
Paweł Jan Brudek ◽  
Martyna Płudowska ◽  
Stanisława Steuden ◽  
Andrzej Sękowski

Abstract The goal of the present study was to investigate whether generativity and wisdom played a mediating role in the relationships between gerotranscendence and humor styles among people in late adulthood. The study included 399 participants aged 60–85 years. The following measures were used: Gerotranscendence Scale Type 2 (GST2), Humor Styles Questionnaire (HSQ), Loyola Generativity Scale (LGS), and the Self-Assessed Wisdom Scale (SAWS). The analyses revealed that generativity and wisdom, taken together, were mediators in the relationship between gerotranscendence and the four styles of humor in late adulthood. The results of the survey show that gerotranscendence is a factor that protects individuals against the use of aggressive humor, at the same time predisposing them to using humor that expresses self-acceptance and strengthens interpersonal relations. An interesting result was obtained for self-defeating humor. It was shown that gerotranscendence, by increasing generativity and wisdom, increases the tendency to poke fun at oneself and to talk about one’s own weaknesses or mistakes. This tendency, as suggested by the specific character of the relationships observed, does not have to be non-adaptive. Our results demonstrate that the processes related to the shaping of humor among people in late adulthood have a unique nature.


2021 ◽  
Author(s):  
◽  
Hassan Tariq

<p>There is a huge and rapidly increasing amount of data being generated by social media, mobile applications and sensing devices. Big data is the term usually used to describe such data, often characterized in terms of the 3Vs: volume, variety and velocity. To process and mine such massive amounts of data, several approaches and platforms have been developed, such as Hadoop, a popular open-source distributed and parallel computing framework. Hadoop has a large number of configurable parameters which can be set before the execution of jobs to optimize a cluster's resource utilization and execution time, and these parameters have a significant impact on both. Optimizing the performance of a Hadoop cluster by tuning such a large number of parameters is a tedious task. Most current big data modeling approaches do not capture the complex interaction between configuration parameters and changes in the cluster environment, such as the use of different datasets or types of query. This makes it difficult to predict, for example, the execution time of a job or the resource utilization of a cluster. Other relevant attributes include the configuration parameters, the structure of the query, the dataset, the number of nodes, and the infrastructure used. Our first main objective was to design reliable experiments to understand the relationships between these attributes. Before designing and implementing the actual experiments we applied Hazard and Operability (HAZOP) analysis to identify operational hazards that can affect the normal working of a cluster and the execution of Hadoop jobs. This brainstorming activity improved the design and implementation of our experiments by improving their internal validity, and it helped us identify the considerations that must be taken into account for reliable results.
After implementing our design, we characterized the relationship between different Hadoop configuration parameters and network and system performance measures. Our second main objective was to investigate the use of machine learning to model and predict the resource utilization and execution time of Hadoop jobs, both of which are affected by attributes such as the configuration parameters and the structure of the query. To estimate or predict, qualitatively or quantitatively, the level of resource utilization and execution time, it is important to understand the impact of different combinations of these Hadoop job attributes. One could conduct experiments with many different combinations of parameters to uncover this, but it is very difficult to run such a large number of jobs, interpret the data manually, extract patterns from it, and produce a model that generalizes to unseen scenarios. To automate the process of data extraction and to model the complex behavior of the different attributes of a Hadoop job, machine learning was used. Our decision-tree-based approach enabled us to systematically discover significant patterns in the data. Our results showed that the decision tree models constructed for different resources and for execution time were informative and robust: they generalized over a wide range of minor and major environmental changes, such as changes in dataset, cluster size, and infrastructure (for example, Amazon EC2). Moreover, the use of different correlation and regression techniques, such as M5P, Pearson's correlation and k-means clustering, confirmed our findings and provided further insight into the relationships of the different attributes with each other. M5P is a model-tree regression technique that predicted the functional relationships among different job attributes.
The use of k-means clustering allowed us to see which experimental runs show similar resource utilization and execution time. Statistical significance tests, used to validate the significance of changes in the results across experimental runs, also showed the effectiveness of our resource and performance modelling and prediction method.</p>
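The k-means grouping of experimental runs mentioned above can be sketched as follows. The run measurements (CPU utilization, execution time in seconds) and the two-cluster choice are assumptions for illustration, and a tiny Lloyd's iteration stands in for a library call:

```python
import numpy as np

# Hypothetical experimental runs: (CPU utilization, execution time in s).
runs = np.array([[0.20, 30], [0.25, 28], [0.22, 33],    # light jobs
                 [0.80, 120], [0.85, 110], [0.90, 125]]) # heavy jobs
runs = (runs - runs.mean(0)) / runs.std(0)  # normalize both features

centers = runs[[0, 3]].copy()  # seed one center in each apparent group
for _ in range(10):            # Lloyd's algorithm: assign, then re-center
    labels = np.argmin(((runs[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([runs[labels == k].mean(0) for k in range(2)])
```

Runs that end up with the same label share similar resource utilization and execution time, which is exactly the "similar runs" view the clustering provided in the thesis. Normalizing first matters here, since execution time would otherwise dominate the distance.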


2017 ◽  
Vol 3 (1) ◽  
pp. 1-7 ◽  
Author(s):  
Mabel Yu

The investigation of mindfulness has increased significantly over the past decade regarding its efficacy as a clinical tool, particularly in the treatment of depression. Mindfulness is often conceptualized as a mental state characterized by present-moment, non-judgmental attention and awareness. Past researchers have suggested that mindfulness is linked to reduction of self-rumination (i.e. maladaptive self-focused attention to one’s self-worth) through promotion of concrete focus and inhibition of automatic elaboration of intrusive thoughts. Moreover, mindfulness also promotes low-level construal thinking (i.e. concrete thinking), which competes against high-level construal thinking (i.e. abstract thinking). Researchers have proposed that self-rumination involves high-level construal of the self and others, which could increase the likelihood of experiencing negative moods. On the other hand, mindfulness may potentially promote self-reflection (i.e. adaptive self-focused attention to the self) while inhibiting self-rumination. The purpose of this paper is to propose a research idea that will explore the relationship between mindfulness, self-rumination, self-reflection, and depressive symptoms (i.e., low mood, anhedonia or the inability to feel pleasure, and changes in sleep). The findings of the proposed research may have significant implications for treatment of depressive symptoms and for promotion of positive outcomes such as mitigation of self-rumination and enhancement of self-reflective processes through potential effects of mindfulness.


2021 ◽  
Vol 3 (1) ◽  
pp. 90-105
Author(s):  
Călin-Ioan Taloș ◽  

How can the human self face the paradigm shifts that posthumanism mobilizes? We intend to answer this question starting from Paul Ricoeur’s hermeneutics of the self. According to this hermeneutics, the self is capable of action and of self-reflection, which Ricoeur calls “attestation,” and can be accompanied by the moral values it has acquired. We will note that Ricoeur’s hermeneutics of the self can be reduced to an ontology of the self deeply engraved by three dialectics: self-reflection, identity, and the relationship between the self-as-another and the other-as-self. We will subsequently infer that the self ontologically holds the condition of being a self constrained by its manner of being as orientation. This condition represents the fundamental limit of consciousness which, consequently, contributes significantly to an epistemology of consciousness that sketches out the project of a hermeneutics placing both language and consciousness at the centre of the process of differentiation between posthumanist narratives about life and death.


Author(s):  
Michael Maher

The Cognitive-Experiential Tri-Circle is a model developed by the author to explain the relationship between conducting field research and reflecting on beliefs, including spiritual beliefs. His sample included graduate students, faculty, and friends of the university who participated in field research trips to Cuba through Loyola University Chicago. The basic assumption of the model is that "self," "beliefs," and "experience" are related in such a way that "depth" applies to each equally in a field research experience. Depth of experience for the self leads to depth of belief for the self, and reflection tools that encourage depth of belief for the self lead to depth of experience for the self. The author designed a particular method for processing, or "reflection," which he used with participants on these trips. He also discusses at length the philosophical issues involved in this topic. The paper concludes that the processing method was effective and that the model is applicable to field research experiences.



