Personality coherence in acts and texts: Searching for coherence within and beyond trait categories

2021 ◽  
pp. 089020702110221
Author(s):  
Mairéad McKenna ◽  
Daniel Cervone ◽  
Aninda Roy ◽  
Candice Burkett

This paper reports two studies that explore complementary aspects of personality coherence. Study 1 addressed cross-situational coherence in contextualized psychological response. Idiographically-tailored methods assessed individuals’ (i) beliefs about their personal attributes, (ii) subjective “mappings” of these attributes to everyday circumstances, and (iii) self-reported contextualized action tendencies. A novel index of idiographic–nomothetic relations gauged the degree to which the idiographic methods yield unique information. Participants’ mappings commonly deviated from the structure of nomothetic trait categories; people often grouped together contextualized action tendencies traditionally associated with different trait categories. The idiographic mappings predicted cross-situational coherence in action tendencies. Study 2 asked whether the contextualization of personal qualities would be evident when people merely are asked to describe their personal attributes in natural language. Participants wrote narratives describing positive and negative qualities. Narratives were coded for the presence of three linguistic features: conditional statements, probabilistic statements, and personality trait inconsistencies. All three occurred frequently. Furthermore, they co-occurred; among participants who described trait-inconsistent attributes, the large majority spontaneously cited conditions in which these attributes are manifested. People who recognize that they possess inconsistent personal qualities may nonetheless attain a coherent understanding of themselves by spontaneously developing a contextually-embedded sense of self.

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Leilei Kong ◽  
Zhongyuan Han ◽  
Yong Han ◽  
Haoliang Qi

Paraphrase identification is central to many natural language applications. Based on the insight that a successful paraphrase identification model needs to adequately capture the semantics of language objects as well as their interactions, we present a deep paraphrase identification model interacting semantics with syntax (DPIM-ISS). DPIM-ISS introduces linguistic features manifested as syntactic features to produce more explicit structures and encodes the semantic representation of a sentence over different syntactic structures by interacting semantics with syntax. DPIM-ISS then learns paraphrase patterns from this semantics-syntax interaction representation using a convolutional neural network with a convolution-pooling structure. Experiments are conducted on the Microsoft Research Paraphrase (MSRP) corpus and on the PAN 2010 and PAN 2012 corpora for paraphrase plagiarism detection. The experimental results demonstrate that DPIM-ISS outperforms classical word-matching approaches, syntax-similarity approaches, convolutional neural network-based models, and several deep paraphrase identification models.
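The abstract does not specify an implementation, but the convolution-pooling idea can be illustrated with a minimal PyTorch sketch: two sentence encodings (which could mix semantic embeddings with syntax-derived features) are combined into an interaction matrix, convolved, max-pooled, and classified. The layer sizes and the interaction scheme below are illustrative assumptions, not the authors' exact DPIM-ISS architecture.

```python
# Minimal sketch (not the authors' exact DPIM-ISS): a convolution-pooling
# classifier over the interaction matrix of a sentence pair.
import torch
import torch.nn as nn

class ParaphraseCNN(nn.Module):
    def __init__(self, num_filters=64):
        super().__init__()
        # Convolution over the word-by-word interaction matrix of a sentence pair.
        self.conv = nn.Conv2d(1, num_filters, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool2d((1, 1))      # convolution-pooling structure
        self.classifier = nn.Linear(num_filters, 2)   # paraphrase vs. non-paraphrase

    def forward(self, sent_a, sent_b):
        # sent_a, sent_b: (batch, seq_len, embed_dim) embeddings that could combine
        # semantic vectors with syntax-derived features (an assumption here).
        interaction = torch.bmm(sent_a, sent_b.transpose(1, 2))   # (batch, len_a, len_b)
        features = self.pool(torch.relu(self.conv(interaction.unsqueeze(1))))
        return self.classifier(features.flatten(1))

# Usage: logits = ParaphraseCNN()(emb_a, emb_b); train with nn.CrossEntropyLoss.
```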


2021 ◽  
Author(s):  
KOUSHIK DEB

Character Computing involves not only personality trait recognition but also the correlation among these traits. A substantial body of research has been conducted in this area. Various factors, such as demographics, sentiment, gender, and LIWC features, have been taken into account in order to understand human personality. In this paper, we concentrate on the factors that can be obtained from available data using Natural Language Processing. It has been observed that the most successful personality trait prediction models depend heavily on NLP techniques. Researchers across the globe have used different kinds of machine learning and deep learning techniques to automate this process, and different combinations of factors lead the research in different directions. We present a comparative study of those experiments and derive directions for future development.


Author(s):  
Kyungbin Kwon

Understanding students' misconceptions is critical because it identifies the reasons for the errors students make and allows instructors to design instruction accordingly. This study investigated the mental models of programming concepts held by pre-service teachers who were novices in programming. In an introductory programming course, students were asked to solve problems that could be addressed with conditional statements. They developed solution plans in pseudo-code, including simplified natural language, symbols, diagrams, and so on. Sixteen solution plans for three different types of problems were analyzed. The students' egocentric and insufficient programming concepts were identified in terms of the misuse of variables, redundancy of code, and weak strategic knowledge. The results revealed that the students had difficulty designing solution plans that could be executed by computers and needed instructional support to learn how to express their solution plans in a way computers can run. Problem-driven instructional designs for novice students are discussed.


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Krishnadas Nanath ◽  
Supriya Kaitheri ◽  
Sonia Malik ◽  
Shahid Mustafa

Purpose: The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling, and linguistic features of news articles to predict the probability of fake news.

Design/methodology/approach: A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data with efficiency. Lexicon-based emotion analysis provided eight kinds of emotions used in the article text. The cluster of topics was extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared.

Findings: The results revealed that positive emotions in a text lower the probability of the news being fake. Sensational content, such as illegal activities and crime-related content, was associated with fake news. News whose title and text exhibited similar sentiments was found to have a lower chance of being fake. News titles with more words and content with fewer words were found to impact fake news detection significantly.

Practical implications: Several systems and social media platforms today are trying to implement fake news detection methods to filter content. This research provides exciting parameters from a virality theory perspective that could help develop automated fake news detectors.

Originality/value: While several studies have explored fake news detection, this study uses a new perspective on virality theory. It also introduces new parameters, such as sentimental resonance, that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
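As a rough illustration of the kind of pipeline described (topic modeling with five topics, lexicon-based sentiment, a title-text resonance feature, and a logistic regression), a scikit-learn sketch might look like the following. The choice of the VADER lexicon and the specific feature construction are assumptions for illustration, not the authors' exact coding scheme.

```python
# Minimal sketch (illustrative, not the authors' exact pipeline): five LDA topics
# plus a title-text sentiment "resonance" feature feeding a logistic regression.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from nltk.sentiment import SentimentIntensityAnalyzer  # assumed lexicon; needs nltk.download("vader_lexicon")

def build_features(titles, texts, vectorizer, lda, sia):
    topics = lda.transform(vectorizer.transform(texts))           # (n_docs, 5) topic proportions
    resonance = np.array([                                         # title/text sentiment agreement (proxy)
        sia.polarity_scores(t)["compound"] * sia.polarity_scores(x)["compound"]
        for t, x in zip(titles, texts)
    ]).reshape(-1, 1)
    lengths = np.array([[len(t.split()), len(x.split())] for t, x in zip(titles, texts)])
    return np.hstack([topics, resonance, lengths])

# Fitting (labels: 1 = fake, 0 = real):
# vectorizer = CountVectorizer(stop_words="english").fit(texts)
# lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(vectorizer.transform(texts))
# clf = LogisticRegression(max_iter=1000).fit(
#     build_features(titles, texts, vectorizer, lda, SentimentIntensityAnalyzer()), labels)
```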


2021 ◽  
Vol 9 (2) ◽  
pp. 227-240
Author(s):  
Tazanfal Tehseem ◽  
Summaya Afzal ◽  
Sanam Abbas

This paper explores socio-cultural stance and perspective in the writing practices of condolence emails. The social purpose of condolence emails is to express deep sadness at the passing of the departed. Such texts note life stories and commemorate the inspirations of the deceased, both famous and not, and therefore lend themselves to genre analysis (Christie & Martin, 1997). Since personal emails are written by the individuals concerned, they necessarily outline significant cultural elements. The study builds on a topological genre analysis (Martin & Rose, 2008) of condolence emails, mainly looking into staging (sequential and ascriptional) and describing linguistic features (Halliday & Matthiessen, 2014). The analysis shows significant socio-cultural variation in the writing of condolence emails. The genre features of the selected texts reveal that differences in perspective and stance in constructing such texts are mainly attributable to socio-cultural distinctions peculiar to different dominant cultures. For example, texts from European cultures highlight the deceased's professional achievements and then their services to the wider community, while emails from Asian cultures construe interpersonal relationships in the orientation stage and then append personal attributes of the deceased, followed by the professional services rendered to the wider SFL community. The classified data were obtained from the sys-func and sysfling mailing list archives and have been anonymized to conceal identity.


1995 ◽  
Vol 23 (4) ◽  
pp. 320-330 ◽  
Author(s):  
Lawrence O. Gostin

Human genomic information is invested with enormous power in a scientifically motivated society. Genomic information has the capacity to produce a great deal of good for society. It can help identify and understand the etiology and pathophysiology of disease. In so doing, medicine and science can expand the ability to prevent and ameliorate human malady through genetic testing, treatment, and reproductive counseling. Genomic information can just as powerfully serve less beneficent ends. Information can be used to discover deeply personal attributes of an individual's life. That information can be used to invade a person's private sphere, to alter a person's sense of self- and family identity, and to affect adversely opportunities in education, employment, and insurance. Genomic information can also affect families and ethnic groups that share genetic similarities.


2021 ◽  
Vol 13 ◽  
Author(s):  
Aparna Balagopalan ◽  
Benjamin Eyre ◽  
Jessica Robin ◽  
Frank Rudzicz ◽  
Jekaterina Novikova

Introduction: Research related to the automatic detection of Alzheimer's disease (AD) is important, given the high prevalence of AD and the high cost of traditional diagnostic methods. Since AD significantly affects the content and acoustics of spontaneous speech, natural language processing and machine learning provide promising techniques for reliably detecting AD. There has been a recent proliferation of classification models for AD, but these vary in the datasets used, the model types, and the training and testing paradigms. In this study, we compare and contrast the performance of two common approaches for automatic AD detection from speech on the same, well-matched dataset, to determine the advantages of using domain knowledge vs. pre-trained transfer models.

Methods: Audio recordings and corresponding manually transcribed speech transcripts of a picture description task administered to 156 demographically matched older adults, 78 with Alzheimer's disease (AD) and 78 cognitively intact (healthy), were classified using machine learning and natural language processing as "AD" or "non-AD." The audio was acoustically enhanced and post-processed to improve the quality of the speech recordings as well as to control for variation caused by recording conditions. Two approaches were used for classification of these speech samples: (1) using domain knowledge: extracting an extensive set of clinically relevant linguistic and acoustic features derived from speech and transcripts based on prior literature, and (2) using transfer learning and leveraging large pre-trained machine learning models: using transcript representations that are automatically derived from state-of-the-art pre-trained language models, by fine-tuning Bidirectional Encoder Representations from Transformers (BERT)-based sequence classification models.

Results: We compared the utility of speech transcript representations obtained from recent natural language processing models (i.e., BERT) to more clinically interpretable language feature-based methods. Both the feature-based approaches and the fine-tuned BERT models significantly outperformed the baseline linguistic model using a small set of linguistic features, demonstrating the importance of extensive linguistic information for detecting cognitive impairments relating to AD. We observed that fine-tuned BERT models numerically outperformed feature-based approaches on the AD detection task, but the difference was not statistically significant. Our main contribution is the observation that, when trained on the same, demographically balanced dataset and tested on independent, unseen data, both domain knowledge and pre-trained linguistic models have good predictive performance for detecting AD based on speech. It is notable that linguistic information alone is capable of achieving comparable, and even numerically better, performance than models including both acoustic and linguistic features here. We also try to shed light on the inner workings of the more black-box natural language processing model by performing an interpretability analysis, and find that attention weights reveal interesting patterns, such as higher attribution to more important information content units in the picture description task, as well as to pauses and filler words.

Conclusion: This work supports the value of well-performing machine learning and linguistically focussed processing techniques for detecting AD from speech, and highlights the need to compare model performance on carefully balanced datasets, using consistent training parameters and independent test datasets, in order to determine the best-performing predictive model.
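The transfer-learning arm of such a comparison can be sketched with Hugging Face Transformers: fine-tuning a BERT sequence classifier on transcripts labeled AD or non-AD. The base checkpoint, hyperparameters, and dataset wrapper below are illustrative assumptions rather than the study's exact configuration.

```python
# Minimal sketch (assumed checkpoint and hyperparameters, not the study's exact setup):
# fine-tuning a BERT sequence classifier on picture-description transcripts.
import torch
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

class TranscriptDataset(torch.utils.data.Dataset):
    """Wraps transcripts and 0/1 AD labels as tokenized training examples."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=512)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# train_texts / train_labels: transcripts and AD (1) vs. non-AD (0) labels (not reproduced here).
# trainer = Trainer(model=model,
#                   args=TrainingArguments(output_dir="ad-bert", num_train_epochs=3,
#                                          per_device_train_batch_size=8),
#                   train_dataset=TranscriptDataset(train_texts, train_labels))
# trainer.train()
```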


Semiotica ◽  
2016 ◽  
Vol 2016 (209) ◽  
pp. 323-340 ◽  
Author(s):  
Jian Li ◽  
Le Cheng ◽  
Winnie Cheng

Modality and negation, as two important linguistic features used to realise subjectivity, have been investigated within various disciplines, such as logic, linguistics, philosophy, and law. The interaction between modality and negation, as a relatively new and undeveloped domain, has however not received due attention in scholarship. This corpus-based study investigates three aspects of their interaction: the differentiation of deontic value by negation, the categorization of deontic modality in Hong Kong legislation via negation, and the distribution patterns of deontic modality, especially of negated modality, in Hong Kong legislation. This study shows that negation is a powerful linguistic mechanism not only for determining the nature and functions of modality but also for determining its value. It also reveals that negation helps us investigate the distribution of deontic modality in Hong Kong legislation and hence revisit the legal framework in Hong Kong. A study taking into account the discursive and professional aspects of the interaction between deontic modality and negation will provide a theoretical basis for the natural language processing of modality and negation in legislation and also offer important implications for the study of negation and modality in general contexts.
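A corpus-based tally of deontic modals and their negated forms, of the kind such a study relies on, can be sketched in a few lines of Python. The modal inventory and regular expressions here are simplifying assumptions, not the authors' annotation scheme.

```python
# Minimal sketch (illustrative): counting positive vs. negated occurrences of
# deontic modals in legislative sentences; the modal list is an assumption.
import re
from collections import Counter

MODALS = ["shall", "must", "may", "should"]

def modal_negation_counts(sentences):
    """Count positive vs. negated occurrences of each deontic modal."""
    counts = Counter()
    for s in sentences:
        s = s.lower()
        for m in MODALS:
            if re.search(rf"\b{m}\s+not\b", s):
                counts[(m, "negated")] += 1
            elif re.search(rf"\b{m}\b", s):
                counts[(m, "positive")] += 1
    return counts

# Example:
# modal_negation_counts(["A person shall not obstruct an officer.",
#                        "The court may order costs."])
# -> Counter({('shall', 'negated'): 1, ('may', 'positive'): 1})
```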


2010 ◽  
Vol 31 (3) ◽  
pp. 439-462 ◽  
Author(s):  
NICHOLAS D. DURAN ◽  
CHARLES HALL ◽  
PHILIP M. MCCARTHY ◽  
DANIELLE S. MCNAMARA

The words people use and the way they use them can reveal a great deal about their mental states when they attempt to deceive. The challenge for researchers is how to reliably distinguish the linguistic features that characterize these hidden states. In this study, we use a natural language processing tool called Coh-Metrix to evaluate deceptive and truthful conversations that occur within a context of computer-mediated communication. Coh-Metrix is unique in that it tracks linguistic features based on cognitive and social factors that are hypothesized to influence deception. The results from Coh-Metrix are compared to linguistic features reported in previous independent research, which used a natural language processing tool called Linguistic Inquiry and Word Count. The comparison reveals converging and contrasting alignment for several linguistic features and establishes new insights on deceptive language and its use in conversation.


Author(s):  
Abbylolita Sullah ◽  
Chee Hian Tan

Personality has a great effect on performance and on the coach-athlete relationship in a team. Sports scientists assert that a lack of certain personality traits could help explain "why some individuals gifted at sport do not thrive at elite level." The purpose of this study was therefore to examine differences in personality traits between coaches and players of Malaysian football teams, and to identify differences in personality traits between more successful and less successful Malaysian football teams. Coaches (n = 16) and players (n = 200) from the Malaysia Super League and Malaysia Premier League completed a modified GEQ (2009), which measured personal attributes and personal qualities. Independent t-tests were applied, and the results indicated that the null hypothesis was rejected: n = 214, t = 2.441, p = .015 (< .05) and n = 214, t = 2.434, p = .020 (< .05). Personal qualities and attributes showed a significantly higher mean value for the more successful Malaysian football teams: n = 106, t = 4.947, p < .001. This study distinguished personality traits that seem to set apart successful, high-performing coaches and athletes, and it contributes to the body of knowledge in Coaching Science.
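For readers unfamiliar with the statistic reported, the group comparison amounts to an independent-samples t-test, which can be sketched with SciPy as below; the scores shown are placeholders, not the study's data.

```python
# Minimal sketch (illustrative data, not the study's): independent-samples t-test
# comparing personal-attribute scores between two groups of teams.
from scipy import stats

def compare_groups(successful_scores, less_successful_scores, alpha=0.05):
    """Two-sided independent t-test; returns the statistic, p-value, and decision."""
    t, p = stats.ttest_ind(successful_scores, less_successful_scores)
    return t, p, ("reject H0" if p < alpha else "fail to reject H0")

# Example with placeholder questionnaire scores:
# compare_groups([4.1, 4.3, 3.9, 4.5], [3.6, 3.8, 3.5, 3.7])
```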

