Asymmetric effect of feature level sentiment on product rating: an application of bigram natural language processing (NLP) analysis

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yun Kyung Oh ◽  
Jisu Yi

Purpose The evaluation of perceived attribute performance reflected in online consumer reviews (OCRs) is critical for gaining timely marketing insights. This study proposes a text mining approach to measure consumer sentiment at the feature level and its asymmetric impact on overall product ratings. Design/methodology/approach This study employed 49,130 OCRs generated for 14 wireless earbud products on Amazon.com. Word combinations of the major quality dimensions and related sentiment words were identified using bigram natural language processing (NLP) analysis. The study combined sentiment dictionaries with feature-related bigrams to measure feature-level sentiment scores in a review. Furthermore, the authors examined the effect of feature-level sentiment on product ratings. Findings The results indicate that customer sentiment toward product features, measured from text reviews, significantly and asymmetrically affects the overall rating. Building upon the three-factor theory of customer satisfaction, the key quality dimensions of wireless earbuds are categorized into basic, excitement and performance factors. Originality/value This study provides a novel approach to assessing customers' feature-level evaluation of a product and its impact on customer satisfaction based on big data analytics. By applying the suggested methodology, marketing managers can gain in-depth insights into consumer needs and reflect this knowledge in future product or service improvements.
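As a concrete illustration of the bigram step, the sketch below pairs adjacent feature and sentiment words and averages the sentiment per feature. The feature list and lexicon are toy placeholders, not the dictionaries used by the authors, and the scoring is deliberately simplified.

```python
# Minimal sketch of feature-level sentiment scoring from review bigrams.
# FEATURES and SENTIMENT below are illustrative placeholders, not the
# quality dimensions or sentiment dictionaries used in the study.
from collections import defaultdict

FEATURES = {"battery", "sound", "fit", "connectivity"}                   # hypothetical quality dimensions
SENTIMENT = {"great": 1.0, "good": 0.5, "poor": -0.5, "terrible": -1.0}  # toy lexicon

def feature_sentiment(review: str) -> dict:
    """Average sentiment score per feature, from adjacent word pairs (bigrams)."""
    tokens = review.lower().split()
    scores = defaultdict(list)
    for w1, w2 in zip(tokens, tokens[1:]):          # every bigram in the review
        if w1 in FEATURES and w2 in SENTIMENT:      # e.g. "battery poor"
            scores[w1].append(SENTIMENT[w2])
        elif w2 in FEATURES and w1 in SENTIMENT:    # e.g. "great sound"
            scores[w2].append(SENTIMENT[w1])
    return {f: sum(v) / len(v) for f, v in scores.items()}

print(feature_sentiment("great sound but battery poor and terrible fit"))
# {'sound': 1.0, 'battery': -0.5, 'fit': -1.0}
```

In the study's setting, the resulting feature-level scores would then enter a rating regression to test for asymmetric effects on the overall star rating.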

2019 ◽  
Vol 75 (1) ◽  
pp. 314-318 ◽  
Author(s):  
Nigel L. Williams ◽  
Nicole Ferdinand ◽  
John Bustard

Purpose Advances in artificial intelligence (AI) natural language processing may see the emergence of algorithmic word of mouth (aWOM): content created and shared by automated tools. As AI tools improve, aWOM will increase in volume and sophistication, displacing eWOM as an influence on customer decision-making. The purpose of this paper is to provide an overview of the socio-technological trends that have encouraged the evolution of informal influence strategies from WOM to aWOM. Design/methodology/approach This paper examines the origins and path of development of influential customer communications, from word of mouth (WOM) to electronic word of mouth (eWOM) and the emerging trend of aWOM. The growth of aWOM is theorized as a result of new developments in AI natural language processing tools, along with autonomous distribution systems in the form of software robots and virtual assistants. Findings aWOM may become a dominant source of information for tourists, as it can support multimodal delivery of useful contextual information. Individuals, organizations and social media platforms will have to ensure that aWOM is developed and deployed responsibly and ethically. Practical implications aWOM may emerge as the dominant source of information for tourist decision-making, displacing WOM and eWOM. aWOM may also affect online opinion leaders, who may be challenged by algorithmically generated content. aWOM tools may also generate content using sensors on personal devices, creating privacy and information security concerns if users have not given permission for such activities. Originality/value This paper is the first to theorize the emergence of aWOM as autonomous AI communication within the framework of unpaid influence, or WOM. As customer engagement will increasingly occur in algorithmic environments comprising person–machine interactions, aWOM will influence future tourism research and practice.


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Krishnadas Nanath ◽  
Supriya Kaitheri ◽  
Sonia Malik ◽  
Shahid Mustafa

Purpose The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling and linguistic features of news articles to predict the probability of fake news. Design/methodology/approach A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data efficiently. Lexicon-based emotion analysis provided eight kinds of emotions used in the article text. The cluster of topics was extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared. Findings The results revealed that positive emotions in a text lower the probability of the news being fake. Sensational content, such as reports of illegal activities and crime, was associated with fake news. Articles whose title and text exhibited similar sentiments were found to have a lower chance of being fake. Longer news titles and shorter article text were found to significantly affect fake news detection. Practical implications Several systems and social media platforms today are trying to implement fake news detection methods to filter content. This research provides promising parameters from a virality theory perspective that could help develop automated fake news detectors. Originality/value While several studies have explored fake news detection, this study offers a new perspective based on virality theory. It also introduces new parameters, such as sentimental resonance, that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques used in developing the prediction model.
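The pipeline the abstract describes can be approximated as follows: LDA topic proportions (five topics), a title/text sentiment "resonance" score and simple word counts feeding a logistic regression. The toy corpus, the VADER sentiment lexicon and the exact resonance formula are assumptions for illustration, not the authors' choices.

```python
# Hedged sketch of the feature pipeline: LDA topics + sentiment resonance +
# linguistic counts -> logistic regression. Toy data, not the study's data set.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

titles = ["Miracle cure found", "Budget passes senate"]             # invented examples
texts  = ["Doctors hate this secret trick that cures everything",
          "The chamber voted to approve the spending bill today"]
labels = [1, 0]                                                     # 1 = fake

sia = SentimentIntensityAnalyzer()
doc_term = CountVectorizer().fit_transform(texts)
topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(doc_term)

def resonance(title: str, text: str) -> float:
    """Closeness of title and body sentiment (1 = identical, 0 = opposite)."""
    return 1 - abs(sia.polarity_scores(title)["compound"]
                   - sia.polarity_scores(text)["compound"]) / 2

extra = np.array([[resonance(t, x), len(t.split()), len(x.split())]
                  for t, x in zip(titles, texts)])
X = np.hstack([topics, extra])                    # topic shares + resonance + counts
model = LogisticRegression().fit(X, labels)
print(model.predict_proba(X)[:, 1])               # predicted probability of being fake
```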


Kybernetes ◽  
2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yu-Hui Wang ◽  
Guan-Yu Lin

Purpose The purposes of this paper are (1) to explore the overall development of AI technologies and applications that have been demonstrated to be fundamentally important in the healthcare industry, and their related commercialized products and (2) to identify technologies with promise as the basis of useful applications and profitable products in the AI-healthcare domain. Design/methodology/approach This study adopts a technology-driven technology roadmap approach, combined with natural language processing (NLP)-based patent analysis, to identify promising and potentially profitable existing AI technologies and products in the domain of AI healthcare. Findings Robotics technology exhibits huge potential in surgical and diagnostic applications. Intuitive Surgical Inc., manufacturer of the Da Vinci robotic system and the Ion robotic lung-biopsy system, dominates the robotics-assisted surgical and diagnostic fields. Diagnostics and medical imaging are particularly active fields for the application of AI, not only for analysis of CT and MRI scans but also for image archiving and communications. Originality/value This study is a pioneering attempt to clarify the interrelationships of particular promising technologies for application and related products in the AI-healthcare domain. Its findings provide critical information about the patent activities of key incumbent actors, and thus offer important insights into recent and current technological and product developments in the emergent AI-healthcare sector.
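One building block of NLP-based patent analysis is extracting each patent's most distinctive terms, for example with TF-IDF, before mapping them onto a technology roadmap. The sketch below uses invented abstracts; the study's actual corpus and roadmapping method are more involved.

```python
# Minimal sketch: TF-IDF keyword extraction from patent abstracts.
# The abstracts are invented placeholders, not patents from the study.
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "A robotic surgical system with articulated instrument arms and haptic feedback",
    "Deep learning model for lesion detection in CT and MRI scans with image archiving",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
terms = vec.get_feature_names_out()
for i, row in enumerate(X.toarray()):
    top = row.argsort()[::-1][:3]                 # three highest-weighted terms
    print(f"patent {i}: {[terms[j] for j in top]}")
```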


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Venkateswara Rao Kota ◽  
Shyamala Devi Munisamy

Purpose A neural network (NN)-based deep learning (DL) approach to sentiment analysis (SA) is considered, incorporating a convolutional neural network (CNN), bi-directional long short-term memory (Bi-LSTM) and attention methods. Unlike conventional supervised machine learning natural language processing algorithms, the authors use unsupervised deep learning algorithms. Design/methodology/approach The method presented for sentiment analysis is designed using a CNN, a Bi-LSTM and the attention mechanism. Word2vec word embedding is used for natural language processing (NLP). The approach is designed for sentence-level SA and consists of one embedding layer, two convolutional layers with max-pooling, one LSTM layer and two fully connected (FC) layers. The overall system training time is 30 min. Findings The method's performance is analyzed using metrics such as precision, recall, F1 score and accuracy. The CNN helps reduce model complexity, and the Bi-LSTM helps process long input text sequences. Originality/value The attention mechanism is adopted to determine the significance of every hidden state and produce a weighted sum of all the features fed as input.
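A hedged sketch of the described architecture follows: one embedding layer (which would be seeded with word2vec vectors), two Conv1D blocks with max-pooling, a Bi-LSTM returning its hidden states, a simple additive attention producing a weighted sum of those states, and two fully connected layers. The hyperparameters are placeholders, not the paper's settings.

```python
# Sketch of an embedding -> CNN -> Bi-LSTM -> attention -> FC sentiment model.
# All sizes below are assumed for illustration.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, SEQ_LEN, EMB = 20_000, 200, 300            # assumed vocabulary/sequence/embedding sizes

inp = layers.Input(shape=(SEQ_LEN,))
x = layers.Embedding(VOCAB, EMB)(inp)             # would be initialized with word2vec weights
x = layers.Conv1D(128, 5, activation="relu")(x)   # first conv block
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(128, 5, activation="relu")(x)   # second conv block
x = layers.MaxPooling1D(2)(x)
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)

# Attention: score each hidden state, softmax over time, then weighted sum.
scores = layers.Dense(1, activation="tanh")(h)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])

x = layers.Dense(64, activation="relu")(context)  # first FC layer
out = layers.Dense(1, activation="sigmoid")(x)    # second FC layer: binary sentiment

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```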


2020 ◽  
Vol 30 (2) ◽  
pp. 155-174
Author(s):  
Tim Hutchinson

Purpose This study aims to provide an overview of recent efforts relating to natural language processing (NLP) and machine learning applied to archival processing, particularly appraisal and sensitivity reviews, and to propose functional requirements and workflow considerations for transitioning from experimental to operational use of these tools. Design/methodology/approach The paper has four main sections: 1) a short overview of the NLP and machine learning concepts referenced in the paper; 2) a review of the literature reporting on NLP and machine learning applied to archival processes; 3) an overview and commentary on key existing and developing tools that use NLP or machine learning techniques for archives; and 4) a discussion, informed by this review and analysis, of functional requirements and workflow considerations for NLP and machine learning tools for archival processing. Findings Applications for processing e-mail have received the most attention so far, although most initiatives have been experimental or project based. It now seems feasible to branch out to develop more generalized tools for born-digital, unstructured records. Effective NLP and machine learning tools for archival processing should be usable, interoperable, flexible, iterative and configurable. Originality/value Most implementations of NLP for archives have been experimental or project based. The main exception that has moved into production is ePADD, which includes robust NLP features through its named entity recognition module. This paper takes a broader view, assessing the prospects and possible directions for integrating NLP tools and techniques into archival workflows.
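As a small example of the named entity recognition capability the review highlights in ePADD, the sketch below flags records containing potentially sensitive entity types. spaCy and its en_core_web_sm model are one common choice; this is not necessarily what any of the surveyed tools use internally.

```python
# Sketch: flag records for manual sensitivity review using NER.
# spaCy model choice and the label set are assumptions for illustration.
import spacy

nlp = spacy.load("en_core_web_sm")   # requires: python -m spacy download en_core_web_sm

SENSITIVE_LABELS = {"PERSON", "GPE", "ORG"}   # entity types we choose to flag

def flag_for_review(text: str) -> list:
    """Return entities that might trigger a manual sensitivity review."""
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents
            if ent.label_ in SENSITIVE_LABELS]

print(flag_for_review("Email from Jane Smith at Acme Corp regarding the Ottawa office."))
```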


Author(s):  
Luisa Andreu ◽  
Enrique Bigne ◽  
Suzanne Amaro ◽  
Jesús Palomo

Purpose The purpose of this study is to examine Airbnb research using bibliometric methods. Using research performance analysis, this study highlights and provides an updated overview of Airbnb research by revealing patterns in journals, papers and the most influential authors and countries. Furthermore, it graphically illustrates how research themes have evolved by mapping a co-word analysis and points out potential trends for future research. Design/methodology/approach The methodological design for this study involves three phases: the selection of document sources, the definition of the variables to be analyzed and the bibliometric analysis. A statistical multivariate analysis of all the documents' characteristics was performed with R software. Furthermore, natural language processing techniques were used to analyze all the abstracts and keywords specified in the 129 selected documents. Findings Results show the genesis and evolution of publications on Airbnb research, the scatter of journals and journals' characteristics, author and productivity characteristics, the geographical distribution of the research and a content analysis using keywords. Research limitations/implications Although Airbnb has a 10-year history, research publications only started in 2015; the bibliometric study therefore includes papers from 2015 to 2019. One of the main limitations is that papers were selected in October 2019, before the year was over. However, the latest academic publications (in press and earlycite) were included in the analysis. Originality/value This study applied bibliometric laws (Price's, Lotka's and Bradford's) to better understand the patterns of the most relevant scientific production regarding Airbnb in tourism and hospitality journals. Using natural language processing techniques, this study analyzes all the abstracts and keywords specified in the selected documents. Results show the evolution of research topics across four periods: 2015-2016, 2017, 2018 and 2019.
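The co-word analysis can be illustrated with a keyword co-occurrence network, as sketched below. The study worked with 129 documents in R; the keyword lists here are invented, and the weighted-degree ranking is one simple way to surface central themes.

```python
# Sketch: build a keyword co-occurrence network and rank terms by weighted
# degree. Keyword lists are invented, not the study's 129 documents.
from itertools import combinations
import networkx as nx

keyword_sets = [                            # one keyword list per document (toy data)
    ["airbnb", "sharing economy", "trust"],
    ["airbnb", "trust", "pricing"],
    ["sharing economy", "regulation"],
]

G = nx.Graph()
for kws in keyword_sets:
    for a, b in combinations(sorted(set(kws)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # seen together in another document
        else:
            G.add_edge(a, b, weight=1)

ranking = sorted(G.degree(weight="weight"), key=lambda kv: -kv[1])
print(ranking)                              # most connected keywords first
```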


2019 ◽  
Vol 43 (4) ◽  
pp. 676-690
Author(s):  
Zehra Taskin ◽  
Umut Al

Purpose With recent developments in information technologies, natural language processing (NLP) has made tasks in many areas easier and more practical. Now that big data are used in much research, NLP provides fast and practical methods for processing these data. The purpose of this paper is to identify subfields of library and information science (LIS) where NLP can be used and to provide a guide, based on bibliometric and social network analyses, for researchers who intend to study this subject. Design/methodology/approach Within the scope of this study, 6,607 publications in the field of LIS that involve NLP methods are examined and visualized using social network analysis methods. Findings After evaluating the results, the subject categories of the publications, the keywords frequently used in them and the relationships between these words are revealed. Finally, the core journals and articles are classified thematically for researchers working in the field of LIS who plan to apply NLP in their research. Originality/value The results of this paper draw a general framework for the LIS field and guide researchers on new techniques that may be useful in the field.
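A sketch of the social network analysis step: given a keyword co-occurrence graph like the one built in the previous sketch, bridging terms and thematic clusters can be identified with standard network measures. The edges below are invented; the study analyzed 6,607 LIS publications.

```python
# Sketch: centrality and community detection on a keyword network.
# The edge list is a toy assumption, not the study's data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph([
    ("nlp", "text mining"), ("nlp", "machine learning"),
    ("text mining", "bibliometrics"), ("bibliometrics", "citation analysis"),
])

centrality = nx.betweenness_centrality(G)        # terms that bridge subfields
communities = greedy_modularity_communities(G)   # candidate thematic clusters
print(sorted(centrality, key=centrality.get, reverse=True)[:3])
print([sorted(c) for c in communities])
```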

