How consumer opinions are affected by marketers: an empirical examination by deep learning approach

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Billy Yu

Purpose Natural language processing (NLP) techniques enable machines to understand human language. This paper seeks to harness that power to recognise the interaction between marketers and consumers, and thereby to enhance the conceptual and future development of deep learning in interactive marketing.
Design/methodology/approach This study measures cognitive responses using actual user postings. Following a typical NLP analysis pipeline with tailored neural network (NN) models, it presents a stylised quantitative method to manifest the underlying relation.
Findings Based on consumer-generated content (CGC) and marketer-generated content (MGC) in the tourism industry, the results reveal that marketers and consumers interact in a subtle way. Looking beyond simple positive and negative framing, the study shows that the two do not resemble each other, not even in abstract form: CGC may complement MGC, but they are incongruent. This validates and supplements preceding findings in the framing-effect literature and underpins some marketing wisdom in practice.
Research limitations/implications This research inherits a fundamental limitation of NN models: the interpretability of their results is low. Also, the study may capture only the partial phenomenon exhibited by active reviewers; lurker-consumers may behave differently.
Originality/value This research is among the first to explore the interactive aspect of the framing effect with a state-of-the-art deep learning language model. It reveals research opportunities in using NLP-extracted latent features to assess textual opinions, and it demonstrates the accessibility of deep learning tools. Practitioners could use the described blueprint to foster their marketing initiatives.

AI ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 1-16
Author(s):  
Juan Cruz-Benito ◽  
Sanjay Vishwakarma ◽  
Francisco Martin-Fernandez ◽  
Ismael Faro

In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that they can generate text that reads as human writing, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable in applying this type of modeling is programming languages. For years, the machine learning community has been researching this software engineering area, pursuing goals like applying different approaches to auto-complete, generate, fix, or evaluate code programmed by humans. Considering the increasing popularity of the deep-learning-enabled language model approach, we found a lack of empirical papers that compare different deep learning architectures for creating and using language models based on programming code. This paper compares neural network architectures such as Average Stochastic Gradient Descent (ASGD) Weight-Dropped LSTMs (AWD-LSTMs), AWD-Quasi-Recurrent Neural Networks (QRNNs), and Transformers, using transfer learning and different forms of tokenization, to see how they behave in building language models on a Python dataset for code generation and mask-filling tasks. Based on the results, we discuss each approach's strengths and weaknesses, and the gaps we found in evaluating the language models or applying them in a real programming context.
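The tokenization choices the paper compares are not reproduced in the abstract, but the underlying idea can be sketched with Python's own lexer. The contrast below between naive whitespace tokenization and lexical tokenization of source code is a minimal illustration (the helper names are ours, not from the paper):

```python
import io
import tokenize

def word_tokens(code: str):
    # Naive whitespace split: operators stick to identifiers,
    # producing a huge, sparse vocabulary for a code language model.
    return code.split()

def lexical_tokens(code: str):
    # Python's own lexer: identifiers and operators become separate
    # tokens, giving a finer-grained, denser vocabulary.
    stream = io.BytesIO((code + "\n").encode("utf-8"))
    skip = (tokenize.ENCODING, tokenize.NEWLINE, tokenize.NL, tokenize.ENDMARKER)
    return [tok.string for tok in tokenize.tokenize(stream.readline)
            if tok.type not in skip]

code = "total=price*qty"
print(word_tokens(code))     # ['total=price*qty']
print(lexical_tokens(code))  # ['total', '=', 'price', '*', 'qty']
```

A subword tokenizer (as typically paired with Transformers) would sit between these two extremes, splitting rare identifiers into reusable pieces.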


2020 ◽  
Vol 14 (4) ◽  
pp. 471-484
Author(s):  
Suraj Shetiya ◽  
Saravanan Thirumuruganathan ◽  
Nick Koudas ◽  
Gautam Das

Accurate selectivity estimation for string predicates is a long-standing research challenge in databases. Supporting pattern matching on strings (such as prefix, substring, and suffix) makes this problem much more challenging, thereby necessitating a dedicated study. Traditional approaches often build pruned summary data structures such as tries, followed by selectivity estimation using statistical correlations. However, this produces insufficiently accurate cardinality estimates, resulting in the selection of sub-optimal plans by the query optimizer. Recently proposed deep-learning-based approaches leverage techniques from natural language processing, such as embeddings, to encode the strings and use them to train a model. While this is an improvement over traditional approaches, there remains considerable room for improvement. We propose Astrid, a framework for string selectivity estimation that synthesizes ideas from traditional and deep-learning-based approaches. We make two complementary contributions. First, we propose an embedding algorithm that is query-type (prefix, substring, and suffix) and selectivity aware. Consider three strings 'ab', 'abc' and 'abd' whose prefix frequencies are 1000, 800 and 100, respectively. Our approach would ensure that the embedding for 'ab' is closer to 'abc' than to 'abd'. Second, we describe how neural language models could be used for selectivity estimation. While they work well for prefix queries, their performance for substring queries is sub-optimal. We modify the objective function of the neural language model so that it can be used for estimating selectivities of pattern-matching queries, and we propose a novel and efficient algorithm for optimizing the new objective function. We conduct extensive experiments over benchmark datasets and show that our proposed approaches achieve state-of-the-art results.
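The traditional trie-based summary that Astrid is contrasted with can be sketched in a few lines. This toy `PrefixTrie` is our own illustration, not Astrid's code; it reproduces the abstract's 'ab'/'abc'/'abd' frequency example:

```python
class PrefixTrie:
    """Count-annotated trie: the classic summary structure for
    estimating the selectivity of prefix predicates."""

    def __init__(self):
        self.count = 0          # number of strings passing through this node
        self.children = {}

    def insert(self, s: str):
        node = self
        node.count += 1
        for ch in s:
            node = node.children.setdefault(ch, PrefixTrie())
            node.count += 1

    def prefix_count(self, prefix: str) -> int:
        """Exact count of inserted strings having the given prefix."""
        node = self
        for ch in prefix:
            if ch not in node.children:
                return 0
            node = node.children[ch]
        return node.count

trie = PrefixTrie()
for s in ["abc"] * 800 + ["abd"] * 100 + ["ab"] * 100:
    trie.insert(s)

print(trie.prefix_count("ab"))   # 1000
print(trie.prefix_count("abc"))  # 800
print(trie.prefix_count("abd"))  # 100
```

An unpruned trie is exact but large; real systems prune it and fall back on statistical assumptions, which is where the inaccuracies the paper addresses come from.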


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Raffaele Filieri ◽  
Elettra D’Amico ◽  
Alessandro Destefanis ◽  
Emilio Paolucci ◽  
Elisabetta Raguseo

Purpose The travel and tourism industry (TTI) could benefit the most from artificial intelligence (AI), which could reshape this industry. This study aims to explore the characteristics of tourism AI start-ups, the AI technological domains financed by Venture Capitalists (VCs), and the phases of the supply chain where the AI domains are in high demand. Design/methodology/approach This study developed a database of the European AI start-ups operating in the TTI from the Crunchbase database (2005–2020). The authors used start-ups as the unit of analysis as they often foster radical change, and combined quantitative and qualitative methods. Findings AI start-ups have been mainly created by male Science, Technology, Engineering and Mathematics graduates between 2015 and 2017. The number of founders and previous work experience in non-start-up companies were positively related to securing a higher amount of funding. European AI start-ups are concentrated in the capital cities of major tourism destinations (France, the UK and Spain). The AI technological domains that received the most funding from VCs were Learning, Communication and Services (i.e. big data, machine learning and natural language processing), indicating a strong interest in AI solutions enabling marketing automation, segmentation and customisation. Furthermore, VC-backed AI solutions focus on the pre-trip and post-trip phases. Originality/value To the best of the authors’ knowledge, this is the first study focussing on digital entrepreneurship, specifically VC-backed AI start-ups operating in the TTI. The authors apply, for the first time, a mixed-method approach in the study of tourism entrepreneurship.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Venkateswara Rao Kota ◽  
Shyamala Devi Munisamy

Purpose A neural network (NN)-based deep learning (DL) approach to sentiment analysis (SA) is considered, incorporating a convolutional neural network (CNN), bi-directional long short-term memory (Bi-LSTM) and an attention mechanism. Unlike conventional supervised machine learning NLP algorithms, the authors use unsupervised deep learning algorithms.
Design/methodology/approach The method presented for sentiment analysis is designed using a CNN, Bi-LSTM and the attention mechanism, with word2vec word embeddings for natural language processing (NLP). The approach targets sentence-level SA and consists of one embedding layer, two convolutional layers with max-pooling, one LSTM layer and two fully connected (FC) layers. Overall system training time is 30 min.
Findings The method's performance is analysed using metrics such as precision, recall, F1 score and accuracy. The CNN helps reduce complexity, and the Bi-LSTM helps process long input text sequences.
Originality/value The attention mechanism is adopted to decide the significance of every hidden state and to produce a weighted sum of all the features fed as input.
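The attention step described above, scoring each hidden state and taking a softmax-weighted sum, can be illustrated with a minimal, framework-free sketch. The states and scores below are toy values, not the trained model's:

```python
import math

def attention_pool(hidden_states, scores):
    """Softmax the per-step relevance scores, then return the
    attention weights and the weighted sum (context vector)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]        # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return weights, context

# Three toy Bi-LSTM hidden states; the second time step scores highest,
# so it dominates the context vector.
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, context = attention_pool(states, [0.1, 2.0, 0.5])
print([round(w, 3) for w in weights])
print([round(c, 3) for c in context])
```

In the full model the scores themselves are learned (typically from the hidden states), but the pooling step is exactly this weighted sum.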


2020 ◽  
Vol 30 (2) ◽  
pp. 155-174
Author(s):  
Tim Hutchinson

Purpose This study aims to provide an overview of recent efforts relating to natural language processing (NLP) and machine learning applied to archival processing, particularly appraisal and sensitivity reviews, and propose functional requirements and workflow considerations for transitioning from experimental to operational use of these tools. Design/methodology/approach The paper has four main sections. 1) A short overview of the NLP and machine learning concepts referenced in the paper. 2) A review of the literature reporting on NLP and machine learning applied to archival processes. 3) An overview and commentary on key existing and developing tools that use NLP or machine learning techniques for archives. 4) This review and analysis will inform a discussion of functional requirements and workflow considerations for NLP and machine learning tools for archival processing. Findings Applications for processing e-mail have received the most attention so far, although most initiatives have been experimental or project based. It now seems feasible to branch out to develop more generalized tools for born-digital, unstructured records. Effective NLP and machine learning tools for archival processing should be usable, interoperable, flexible, iterative and configurable. Originality/value Most implementations of NLP for archives have been experimental or project based. The main exception that has moved into production is ePADD, which includes robust NLP features through its named entity recognition module. This paper takes a broader view, assessing the prospects and possible directions for integrating NLP tools and techniques into archival workflows.


Author(s):  
Zhuang Liu ◽  
Degen Huang ◽  
Kaiyu Huang ◽  
Zhuang Li ◽  
Jun Zhao

There is growing interest in financial text mining tasks. Over the past few years, natural language processing (NLP) based on deep learning has advanced rapidly, with deep learning showing promising results for financial text mining models. However, because NLP models require large amounts of labeled training data, applying deep learning to financial text mining is often unsuccessful due to the lack of labeled training data in financial fields. To address this issue, we present FinBERT (BERT for Financial Text Mining), a domain-specific language model pre-trained on large-scale financial corpora. Unlike BERT, FinBERT is trained on six pre-training tasks covering more knowledge, simultaneously on general corpora and financial-domain corpora, which enables the model to better capture language knowledge and semantic information. The results show that our FinBERT outperforms all current state-of-the-art models, and extensive experimental results demonstrate its effectiveness and robustness. The source code and pre-trained models of FinBERT are available online.
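FinBERT's six pre-training tasks are not detailed in the abstract, but the masked-language-model objective that BERT-style models share can be sketched as a data-preparation step. This is a simplified illustration (real BERT also leaves some selected tokens unchanged or replaces them with random tokens):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """BERT-style masked-LM prep: hide a fraction of tokens; during
    pre-training the model must recover the hidden originals."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            masked.append(mask_token)
            labels.append(tok)        # prediction target at masked positions
        else:
            masked.append(tok)
            labels.append(None)       # no loss at unmasked positions
    return masked, labels

tokens = "the central bank raised interest rates again".split()
masked, labels = mask_tokens(tokens)
print(masked)
```

Pre-training on domain text (here, financial corpora) simply means running this objective over in-domain sentences, so the model's predictions absorb domain vocabulary and usage.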


2021 ◽  
Author(s):  
Yoojoong Kim ◽  
Jeong Moon Lee ◽  
Moon Joung Jang ◽  
Yun Jin Yum ◽  
Jong-Ho Kim ◽  
...  

BACKGROUND With advances in deep learning and natural language processing, analyzing medical texts is becoming increasingly important. Nonetheless, despite the importance of medical texts, a study on medical-specific language models has not yet been conducted. OBJECTIVE Korean medical text is highly difficult to analyze because of the agglutinative characteristics of the language as well as the complex terminology of the medical domain. To address this problem, we collected a Korean medical corpus and used it to train language models. METHODS In this paper, we present a Korean medical language model based on deep learning natural language processing. The proposed model was trained using the pre-training framework of BERT for the medical context, building on a state-of-the-art Korean language model. RESULTS After pre-training, the proposed method showed accuracy increases of 0.147 and 0.148 for the masked language model with next sentence prediction. In the intrinsic evaluation, the next sentence prediction accuracy improved by 0.258, a remarkable enhancement. In addition, the extrinsic evaluation on Korean medical semantic textual similarity data showed a 0.046 increase in the Pearson correlation. CONCLUSIONS The results demonstrate the superiority of the proposed model for Korean medical natural language processing. We expect that our proposed model can be extended for application to various languages and domains.
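The Pearson correlation used in the extrinsic semantic-textual-similarity evaluation measures how linearly the model's similarity scores track human judgments; it can be computed directly. The scores below are hypothetical, purely to show the calculation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gold = [5.0, 3.0, 1.0, 4.0]   # hypothetical human similarity ratings
pred = [4.8, 2.9, 1.5, 3.7]   # hypothetical model similarity scores
print(round(pearson(gold, pred), 3))
```

A gain of 0.046 in this coefficient, as reported, means the pre-trained model's scores align noticeably more linearly with the human ratings.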


2017 ◽  
Vol 72 (2) ◽  
pp. 156-170 ◽  
Author(s):  
Muhammad Sabbir Rahman ◽  
Hasliza Hassan ◽  
Aahad Osman-Gani ◽  
Fadi Abdel Muniem Abdel Fattah ◽  
Md. Aftab Anwar

Purpose The purpose of this paper is to test a conceptual model that takes into account both edu-tourists' perception and perceived service quality in explaining the purchase intention of academic degrees from foreign universities.
Design/methodology/approach The study is based on an empirical examination applying multivariate data analysis. Data were collected through survey questionnaires and analysed using a structural equation modelling procedure.
Findings The survey results showed that the relationship between perceived service quality and edu-tourists' satisfaction was significant and positive, as was the relationship between edu-tourists' satisfaction and intention to purchase. Edu-tourists' satisfaction partially mediates the relationship between perceived service quality and intention to purchase, and also plays a significant mediating role in the relationship between perception and intention to purchase.
Research limitations/implications This empirical study contributes to understanding the behaviour of international students and to constructing theoretical knowledge on the edu-tourism industry, which has been neglected in tourism research.
Originality/value The paper will be of use to management and policymakers in the higher education sector in understanding customers' expectations of edu-tourism destinations. It contributes to the growing literature on education travel destinations, investigating the role of tourists' satisfaction by relating perception and perceived service quality to the intention to visit a destination for education tourism. Understanding the role of satisfaction in these relationships makes both scientific and practical contributions for decision-makers.


Author(s):  
Amsal Pardamean ◽  
Hilman F. Pardede

Online media are currently the dominant source of information because they are not limited by time and place and allow fast, wide distribution. However, inaccurate news, often referred to as fake news, is a major problem in news dissemination for online media. Inaccurate news is information that is not true, engineered to cover the real information, and without factual basis. Usually, inaccurate news is crafted to have mass appeal and is presented in the guise of genuine and legitimate news to deceive readers or change their minds or opinions. Distinguishing inaccurate news from real news can be done with natural language processing (NLP) technologies. In this paper, we propose bidirectional encoder representations from transformers (BERT) for inaccurate news identification. BERT is a language model based on deep learning technologies that has been found effective for many NLP tasks. In this study, we use transfer learning and fine-tuning to adapt BERT for inaccurate news identification. The experiments show that our method achieves an accuracy of 99.23%, recall of 99.46%, precision of 98.86%, and F-score of 99.15%, substantially better than traditional methods for the same task.
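The reported accuracy, precision, recall and F-score follow the standard binary-classification definitions, which a short sketch makes concrete. The toy labels below are ours, not the paper's data:

```python
def classification_metrics(y_true, y_pred, positive="fake"):
    """Accuracy, precision, recall and F1 for a binary fake-news classifier,
    treating the `positive` label as the class of interest."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

y_true = ["fake", "fake", "real", "real", "fake"]
y_pred = ["fake", "real", "real", "real", "fake"]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
print(acc, prec, rec, f1)
```

Reporting all four together, as the paper does, matters because on imbalanced news datasets a high accuracy alone can hide poor recall on the fake class.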


Author(s):  
Sumit Kaur

Abstract- Deep learning is an emerging research area in the machine learning and pattern recognition fields, presented with the goal of drawing machine learning nearer to one of its original objectives: artificial intelligence. It tries to mimic the human brain, which is capable of processing and learning from complex input data and solving many kinds of complicated tasks well. Deep learning (DL) is based on a set of supervised and unsupervised algorithms that attempt to model higher-level abstractions in data and learn hierarchical representations for classification. In recent years, it has attracted much attention due to its state-of-the-art performance in diverse areas like object perception, speech recognition, computer vision, collaborative filtering and natural language processing. This paper presents a survey of different deep learning techniques for remote sensing image classification.

