New Approach of Measuring Human Personality Traits Using Ontology-Based Model from Social Media Data

Information ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 413
Author(s):  
Andry Alamsyah ◽  
Nidya Dudija ◽  
Sri Widiyanesti

Human online activities leave digital traces that provide a perfect opportunity to understand people's behavior better. Social media is an excellent place to spark conversations or state opinions, and it therefore generates large-scale textual data. In this paper, we harness those data to support personality measurement. Our first contribution is a Big Five personality trait-based model that detects human personality from textual data in the Indonesian language. The model uses an ontology approach instead of the more popular machine learning models, because the former better captures the meaning and intention of phrases and words in the domain of human personality. The legacy and more thorough ways to assess personality are interviews and questionnaires; still, many real-life applications need an alternative method that is cheaper and faster than the legacy methodology for selecting individuals based on their personality. The second contribution is to support the model's implementation by building a personality measurement platform. The model relies on two distinct features: an n-gram sorting algorithm that parses the textual data, and a crowdsourcing mechanism that lets the public contribute to extending and filtering the ontology corpus.
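
The paper does not spell out the n-gram sorting algorithm beyond its role in parsing the text, so the sketch below only illustrates the general idea under stated assumptions: generate 1- to 3-gram candidates, sort them by frequency, and match them against a hypothetical Big Five ontology lexicon. The `ontology` mapping, the scoring rule, and the function names are placeholders, not the authors' actual corpus or algorithm.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return contiguous n-grams (joined with spaces) from a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def score_traits(text, ontology):
    """Score each Big Five trait by how often its ontology phrases occur in the text.

    `ontology` maps a trait name (e.g. "openness") to a set of 1- to 3-word phrases.
    """
    tokens = text.lower().split()
    candidates = Counter()
    for n in (1, 2, 3):
        candidates.update(ngrams(tokens, n))
    scores = {trait: 0 for trait in ontology}
    # Walk the frequency-sorted n-grams and credit any trait whose lexicon contains them.
    for phrase, freq in candidates.most_common():
        for trait, phrases in ontology.items():
            if phrase in phrases:
                scores[trait] += freq
    return scores

# Hypothetical usage with made-up Indonesian ontology entries:
# score_traits("saya suka belajar hal baru", {"openness": {"suka belajar", "hal baru"}})
```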

Author(s):  
Sterling E Braun ◽  
Michaela K O’Connor ◽  
Margaret M Hornick ◽  
Melissa E Cullom ◽  
James A Butterworth

Abstract. Background: Plastic surgeons and patients increasingly use social media. Despite evidence implicating its importance in plastic surgery, the large amount of data has made social media difficult to study. Objectives: This study seeks to provide a comprehensive assessment of plastic surgery content throughout the world using techniques for analyzing large-scale data. Methods: '#PlasticSurgery' was used to search public Instagram posts. Metadata were collected from posts between December 2018 and August 2020. In addition to descriptive analysis, we created two instruments to characterize the textual data: a multi-lingual dictionary of procedural hashtags and a rule-based text classification model that categorizes the source of each post. Results: Plastic surgery content yielded more than 2 million posts, 369 million likes, and 6 billion views globally over the 21-month study. The United States had the most posts of the 182 countries studied (26.8%, 566,206). Various other regions had a substantial presence, including Istanbul, Turkey, which led all cities (4.8%, 102,208). Our classification model achieved high accuracy (94.9%) and strong agreement with independent raters (κ = 0.88). Providers accounted for 40% of all posts (847,356) and included Physician (28%), Plastic Surgery (9%), Advanced Practice Practitioners and Nurses (1.6%), Facial Plastics (1.3%), and Oculoplastics (0.2%). Content from the Plastics and non-Plastics groups showed high textual similarity, and only 1.4% of posts had a verified source. Conclusions: Plastic surgery content has immense global reach on social media. Textual similarity between groups, coupled with the lack of an effective verification mechanism, makes it difficult to discern the source and veracity of information.
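
The abstract reports the classifier's accuracy but not its rules, so the following is only a minimal sketch of what a rule-based source classifier of this kind might look like: ordered keyword patterns over a post's profile or caption text, with more specific categories tested before broader ones. The category labels echo the abstract; every pattern is a hypothetical placeholder.

```python
import re

# Hypothetical, illustrative rules; the study's actual rule set is not given in the abstract.
RULES = [
    ("Plastic Surgery", re.compile(r"\bplastic surgeon\b|\bboard[- ]certified plastic\b", re.I)),
    ("Facial Plastics", re.compile(r"\bfacial plastic", re.I)),
    ("Oculoplastics", re.compile(r"\boculoplastic", re.I)),
    ("Advanced Practice Practitioners and Nurses",
     re.compile(r"\bnurse practitioner\b|\bphysician assistant\b|\bnurse\b", re.I)),
    ("Physician", re.compile(r"\b(md|m\.d|d\.o|dr|doctor|physician)\b", re.I)),
]

def classify_source(profile_text: str) -> str:
    """Return the first provider category whose pattern matches, else 'Other'.

    Rule order matters: specific specialties are tested before the broad 'Physician' bucket.
    """
    for label, pattern in RULES:
        if pattern.search(profile_text):
            return label
    return "Other"

# Example: classify_source("Board-certified plastic surgeon, Miami") -> "Plastic Surgery"
```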


2017 ◽  
Vol 26 (05) ◽  
pp. 1760023
Author(s):  
Işil Doğa Yakut Kiliç ◽  
Shimei Pan

Personality prediction based on textual data is a topic that has recently gained attention for its potential in large-scale personalized applications such as social media-based marketing. However, when applying this technology in real-world applications, users often encounter situations in which the personality traits derived from different sources (e.g., social media posts versus emails) are inconsistent. Varying results for the same individual render the technology ineffective and untrustworthy. In this paper, we demonstrate the impact of domain differences on automated text-based personality prediction. We also propose different approaches to domain error correction to meet different needs: (a) single- or multi-domain correction and (b) outcome-based or input feature-based error correction. We conduct comprehensive experiments to evaluate the effectiveness of these methods. Our findings demonstrate a significant improvement in prediction accuracy with the proposed methods (e.g., a 20–30% relative error reduction using outcome-based error correction, or a 48% increase in F1 score using feature-based error correction).
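
The abstract distinguishes outcome-based from feature-based correction but does not specify the estimators, so the sketch below shows just one plausible reading of single-domain, outcome-based correction: fit a per-trait linear map from scores predicted in a source domain to scores predicted in a reference domain for the same users. The linear form and the function names are assumptions made for illustration, not the authors' method.

```python
import numpy as np

def fit_outcome_correction(pred_source, pred_target):
    """Fit a per-trait linear map  target ~ a * source + b  on paired predictions.

    pred_source, pred_target: arrays of shape (n_users, n_traits) holding personality
    scores predicted for the same users from two different text domains.
    """
    coeffs = []
    for t in range(pred_source.shape[1]):
        X = np.column_stack([pred_source[:, t], np.ones(len(pred_source))])
        a, b = np.linalg.lstsq(X, pred_target[:, t], rcond=None)[0]
        coeffs.append((a, b))
    return coeffs

def apply_outcome_correction(pred_source, coeffs):
    """Map new source-domain predictions onto the reference domain's scale."""
    corrected = np.empty_like(pred_source, dtype=float)
    for t, (a, b) in enumerate(coeffs):
        corrected[:, t] = a * pred_source[:, t] + b
    return corrected
```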


2020 ◽  
Vol 41 (3) ◽  
pp. 124-132
Author(s):  
Marc-André Bédard ◽  
Yann Le Corff

Abstract. This replication and extension of DeYoung, Quilty, Peterson, and Gray's (2014) study aimed to assess the unique variance of each of the 10 aspects of the Big Five personality traits (DeYoung, Quilty, & Peterson, 2007) associated with intelligence and its dimensions. Personality aspects and intelligence were assessed in a sample of French-Canadian adults from real-life assessment settings (n = 213). Results showed that the Intellect aspect was independently associated with g, verbal, and nonverbal intelligence, while its counterpart Openness was independently related to verbal intelligence only, thus replicating the results of the original study. Independent associations were also found between the Withdrawal, Industriousness, and Assertiveness aspects and verbal intelligence, as well as between the Withdrawal and Politeness aspects and nonverbal intelligence. Possible explanations for these associations are discussed.


2017 ◽  
Vol 5 (1) ◽  
pp. 70-82
Author(s):  
Soumi Paul ◽  
Paola Peretti ◽  
Saroj Kumar Datta

Building customer relationships and customer equity is the prime concern in today's business decisions. The emergence of the internet, especially of social media platforms such as Facebook and Twitter, has changed traditional marketing thinking to a great extent. The importance of customer orientation is reflected in the axiom, "The customer is the king". A good number of organizations are engaging customers in their new product development activities via social media platforms. Co-creation, a new perspective in which customers are active co-creators of the products they buy and use, is currently challenging the traditional paradigm. This concept, which draws on the customer's knowledge, creativity, and judgment to generate value, is regarded not only as an upcoming trend for introducing new products or services but also as a way to fit customer needs and increase value for money. Knowledge and innovation are inseparable: knowledge management competencies and capacities are essential to any organization that aspires to be distinguished and innovative. The present work attempts to identify the change in the value-creation process and one area of business where co-creation can return significant dividends: extending the brand or brand category through brand extension or line extension. Through an in-depth literature review, this article identifies the changes in every perspective of this paradigm shift and presents a conceptual model of company-customer-brand-based co-creation activity via social media. The main objective is to offer an agenda for future research on this emerging trend and to chart the way from theory to practice. The paper acts as a proposal: it allows organizations to adopt this change at large scale and obtain early feedback on the idea presented.


2020 ◽  
Vol 12 (20) ◽  
pp. 8369
Author(s):  
Mohammad Rahimi

This Opinion highlights the importance of public awareness in designing solutions that mitigate climate change. Large-scale acknowledgment of the consequences of climate change has great potential to build social momentum. Momentum, in turn, builds motivation and demand, which can be leveraged to develop a multi-scale strategy to tackle the issue. The pursuit of public awareness is a valuable addition to the scientific approach to addressing climate change. The Opinion concludes with strategies for effectively raising public awareness of climate change-related topics through an integrated, well-connected network of mavens (e.g., scientists) and connectors (e.g., social media influencers).


2021 ◽  
Vol 55 (1) ◽  
pp. 1-2
Author(s):  
Bhaskar Mitra

Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from those in these other application areas. A common form of IR involves ranking documents---or short passages---in response to keyword-based queries. Effective IR systems must deal with the query-document vocabulary mismatch problem by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms---such as a person's name or a product model number---not seen during training, and avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, retrieval involves extremely large collections---such as the document index of a commercial Web search engine---containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as the inverted index, to retrieve efficiently from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks. We ground our contributions in a detailed survey of the growing body of neural IR literature [Mitra and Craswell, 2018]. Our key contribution towards improving the effectiveness of deep ranking models is the Duet principle [Mitra et al., 2017], which emphasizes the importance of incorporating evidence based on both patterns of exact term matches and similarities between learned latent representations of the query and the document. To retrieve efficiently from large collections, we develop a framework to incorporate query term independence [Mitra et al., 2019] into any arbitrary deep model, enabling large-scale precomputation and the use of an inverted index for fast retrieval. In the context of stochastic ranking, we further develop optimization strategies for exposure-based objectives [Diaz et al., 2020]. Finally, this dissertation summarizes our contributions towards benchmarking neural IR models in the presence of large training datasets [Craswell et al., 2019] and explores the application of neural methods to other IR tasks, such as query auto-completion.
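
The Duet principle described above rests on combining two kinds of evidence: exact term matches and similarity between learned latent representations of the query and the document. The sketch below illustrates only that combination; it is not the actual Duet architecture (which jointly trains local and distributed neural sub-networks), and the cosine similarity, hand-set weights `w_local`/`w_dist`, and linear mixing are simplifying assumptions.

```python
import numpy as np

def exact_match_score(query_terms, doc_terms):
    """Local evidence: count query terms that appear verbatim in the document."""
    doc_set = set(doc_terms)
    return float(sum(1 for t in query_terms if t in doc_set))

def latent_similarity(query_vec, doc_vec):
    """Distributed evidence: cosine similarity between learned embeddings."""
    denom = np.linalg.norm(query_vec) * np.linalg.norm(doc_vec)
    return float(query_vec @ doc_vec) / denom if denom else 0.0

def duet_style_score(query_terms, doc_terms, query_vec, doc_vec,
                     w_local=0.5, w_dist=0.5):
    """Rank documents by a weighted mix of both evidence sources."""
    return (w_local * exact_match_score(query_terms, doc_terms)
            + w_dist * latent_similarity(query_vec, doc_vec))
```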


2021 ◽  
Vol 7 (2) ◽  
pp. 205630512110249
Author(s):  
Peer Smets ◽  
Younes Younes ◽  
Marinka Dohmen ◽  
Kees Boersma ◽  
Lenie Brouwer

During the 2015 refugee crisis in Europe, temporary refugee shelters arose in the Netherlands to shelter the large influx of asylum seekers. The largest shelter was located in the eastern part of the country. This shelter, where tents housed nearly 3,000 asylum seekers, was managed with a firm top-down approach. However, many residents of the shelter—mainly Syrians and Eritreans—developed horizontal relations with the local receiving society, using social media to establish contact and exchange services and goods. This case study shows how various types of crisis communication played a role and how the different worlds came together. Connectivity is discussed in relation to inclusion, based on resilient (non-)humanitarian approaches that link society with social media. Moreover, we argue that the refugee crisis can be better understood by looking through the lens of connectivity, practices, and migration infrastructure instead of focusing only on state policies.


Author(s):  
Krzysztof Jurczuk ◽  
Marcin Czajkowski ◽  
Marek Kretowski

Abstract This paper concerns the evolutionary induction of decision trees (DT) for large-scale data. Such a global approach is one of the alternatives to top-down inducers: it searches for the tree structure and the tests simultaneously and thus, in many situations, improves the prediction quality and size of the resulting classifiers. However, as a population-based, iterative approach, it can be too computationally demanding to apply directly to big data mining. The paper demonstrates that this barrier can be overcome by smart distributed/parallel processing. Moreover, we ask whether the global approach can truly compete with greedy systems on large-scale data. For this purpose, we propose a novel multi-GPU approach. It combines knowledge of global DT induction and evolutionary algorithm parallelization with efficient utilization of GPU memory and compute resources. The searches for the tree structure and the tests are performed simultaneously on the CPU, while the fitness calculations are delegated to the GPUs. A data-parallel decomposition strategy and the CUDA framework are applied. Experimental validation is performed on both artificial and real-life datasets, and in both cases the obtained acceleration is very satisfactory. The solution is able to process even billions of instances in a few hours on a single workstation equipped with 4 GPUs. The impact of data characteristics (size and dimensionality) on the convergence and speedup of the evolutionary search is also shown. When the number of GPUs grows, nearly linear scalability is observed, which suggests that data size boundaries for evolutionary DT mining are fading.
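
The implementation described above relies on CUDA, but the data-parallel decomposition it names (each GPU scores candidate trees on its own shard of the training data, and the partial results are reduced into one fitness value) can be sketched in plain Python. Worker processes stand in for GPUs here; `tree.predict`, the shard layout, and accuracy-as-fitness are illustrative assumptions rather than the authors' kernels.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def partial_fitness(args):
    """Evaluate one candidate tree on one data shard: return (errors, n_instances)."""
    tree, X_shard, y_shard = args
    errors = int(np.sum(tree.predict(X_shard) != y_shard))
    return errors, len(y_shard)

def evaluate_population(population, shards, n_workers=4):
    """Data-parallel fitness evaluation: every worker (a stand-in for a GPU) scores
    each tree on its own shard, and the partial error counts are reduced on the host."""
    fitness = []
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for tree in population:
            tasks = [(tree, X, y) for X, y in shards]
            parts = list(pool.map(partial_fitness, tasks))
            errors = sum(e for e, _ in parts)
            total = sum(n for _, n in parts)
            fitness.append(1.0 - errors / total)  # accuracy used as the fitness value
    return fitness
```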


Author(s):  
Gianluca Bardaro ◽  
Alessio Antonini ◽  
Enrico Motta

Abstract Over the last two decades, several deployments of robots for in-house assistance of older adults have been trialled. However, these solutions are mostly prototypes and remain unused in real-life scenarios. In this work, we review the historical and current landscape of the field to try to understand why robots have yet to succeed as personal assistants in daily life. Our analysis focuses on two complementary aspects: the capabilities of the physical platform and the logic of the deployment. The former shows regularities in hardware configurations and functionalities, leading to the definition of a set of six application-level capabilities (exploration, identification, remote control, communication, manipulation, and digital situatedness). The latter focuses on the impact of robots on the daily life of users and categorises the deployment of robots for healthcare interventions using three types of services: support, mitigation, and response. Our investigation reveals that the value of healthcare interventions is limited by a stagnation of functionalities and a disconnection between the robotic platform and the design of the intervention. To address this issue, we propose a novel co-design toolkit, which uses an ecological framework for robot interventions in the healthcare domain. Our approach connects robot capabilities with known geriatric factors to create a holistic view encompassing both the physical platform and the logic of the deployment. As a case study-based validation, we discuss the use of the toolkit in the pre-design of the robotic platform for a pilot intervention, part of the large-scale pilot of the EU H2020 GATEKEEPER project.


2021 ◽  
pp. 120633122110193
Author(s):  
Max Holleran

Brutalist architecture is an object of fascination on social media that has taken on new popularity in recent years. This article, drawing on 3,000 social media posts in Russian and English, argues that the buildings stand out for their arresting scale and their association with the expanding state in the 1960s and 1970s. In both North Atlantic and Eastern European contexts, the aesthetic was employed in publicly financed urban planning projects, creating imposing concrete structures for universities, libraries, and government offices. While some social media users associate the style with the overreach of both socialist and capitalist governments, others are more nostalgic. They use Brutalist buildings as a means to start conversations about welfare state goals such as social housing, free university education, and other services. They also lament that many municipal governments no longer have the capacity or vision to take on large-scale projects of reworking the built environment to meet contemporary challenges.

