Detecting Product Adoption Intentions via Multiview Deep Learning

Author(s):  
Zhu Zhang ◽  
Xuan Wei ◽  
Xiaolong Zheng ◽  
Qiudan Li ◽  
Daniel Dajun Zeng

Detecting product adoption intentions on social media could yield significant value in a wide range of applications, such as personalized recommendations and targeted marketing. In the literature, no study has explored the detection of product adoption intentions on social media, and only a few relevant studies have focused on purchase intention detection for products in one or several categories. Focusing on a product category rather than a specific product is too coarse-grained for precise advertising. Additionally, existing studies primarily focus on using one type of text representation in target social media posts, ignoring the major yet unexplored potential of fusing different text representations. In this paper, we first formulate the problem of product adoption intention mining and demonstrate the necessity of studying this problem and its practical value. To detect a product adoption intention for an individual product, we propose a novel and general multiview deep learning model that simultaneously taps into the capability of multiview learning in leveraging different representations and deep learning in learning latent data representations using a flexible nonlinear transformation. Specifically, the proposed model leverages three different text representations from a multiview perspective and takes advantage of local and long-term word relations by integrating convolutional neural network (CNN) and long short-term memory (LSTM) modules. Extensive experiments on three Twitter datasets demonstrate the effectiveness of the proposed multiview deep learning model compared with the existing benchmark methods. This study also significantly contributes research insights to the literature about intention mining and provides business value to relevant stakeholders such as product providers.
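The paper's model learns the multiview fusion end-to-end with CNN and LSTM modules; as a minimal, library-free illustration of the multiview idea only (all feature names, vectors, and weights below are hypothetical), the sketch concatenates three per-post text representations and scores the fused vector with a toy linear head:

```python
import math

def fuse_views(view_a, view_b, view_c):
    """Late fusion by concatenation: stack the three text
    representations into a single feature vector."""
    return view_a + view_b + view_c

def intention_score(fused, weights, bias):
    """Toy linear head: dot product + sigmoid gives the probability
    that the post expresses a product adoption intention."""
    z = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical 2-d representations of one tweet from three views
# (e.g., lexical, embedding-based, and topic-based features).
lexical   = [0.8, 0.1]
embedding = [0.3, 0.5]
topic     = [0.0, 0.9]

fused = fuse_views(lexical, embedding, topic)   # 6-d fused vector
prob = intention_score(fused, [0.5] * 6, -1.0)  # adoption probability
```

In the actual model the fusion weights are learned jointly with the CNN/LSTM feature extractors rather than fixed as here.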

2020 ◽  
Vol 8 (6) ◽  
pp. 5730-5737

Digital image processing is the application of computer algorithms to process, manipulate, and interpret images. As a field, it plays an increasingly important role in many aspects of people's daily lives. Even though image processing has accomplished a great deal on its own, research is now being conducted on combining it with deep learning (part of the broader family of machine learning) to achieve better performance in detecting and classifying objects in an image. License plate recognition is one of the most active research topics in the domain of image processing (computer vision). It has a wide range of applications, since the license number is the primary and mandatory identifier of motor vehicles. License plates in Ethiopia have unique features such as Amharic characters and differing dimensions and plate formats. Although research has been conducted on Ethiopian License Plate Recognition (ELPR), it relied on conventional image processing techniques, never on deep learning. This proposed research attempts to tackle the ELPR problem with deep learning and image processing. TensorFlow will be used to build the deep learning model, and all image processing will be done with OpenCV-Python. At the end of this research, a deep learning model that recognizes Ethiopian license plates with improved accuracy will be built.
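The abstract does not publish its pipeline; a typical plate-recognition pipeline binarizes the plate image (e.g., with OpenCV thresholding) and then segments individual characters before classification. As a library-free sketch of just the segmentation step, on a toy hand-made binary grid (hypothetical data, not from the ELPR work), connected-component labeling separates the character blobs:

```python
def connected_components(grid):
    """Label 4-connected foreground regions in a binarized image,
    a common step for segmenting plate characters before they are
    fed to a recognition model."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                stack, cells = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                components.append(cells)
    return components

# Tiny hypothetical binarized plate strip: two blobs = two characters.
plate = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
]
chars = connected_components(plate)  # one cell list per character
```

In practice OpenCV's built-in `connectedComponents` would do this step; each segmented blob would then be cropped and passed to the deep model for Amharic character classification.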


2021 ◽  
Author(s):  
Gaurav Chachra ◽  
Qingkai Kong ◽  
Jim Huang ◽  
Srujay Korlakunta ◽  
Jennifer Grannen ◽  
...  

Abstract After significant earthquakes, images posted on social media platforms by individuals and media agencies appear quickly, owing to the widespread use of smartphones. These images can provide information about the shaking damage in the earthquake region to both the public and the research community, and can potentially guide rescue work. This paper presents an automated way to extract images of damaged buildings after earthquakes from social media platforms such as Twitter and thus identify the particular user posts containing such images. Using transfer learning and ~6500 manually labelled images, we trained a deep learning model to recognize images with damaged buildings in the scene. The trained model achieved good performance when tested on newly acquired images of earthquakes at different locations and ran in near real-time on the Twitter feed after the 2020 M7.0 earthquake in Turkey. Furthermore, to better understand how the model makes decisions, we also implemented the Grad-CAM method to visualize the locations on the images that most influence its decisions.
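Grad-CAM, the visualization method named above, weights each convolutional feature map by the spatial average of the gradient of the class score with respect to that map, sums the weighted maps, and clips negative values. A minimal numerical sketch of that computation (toy 2×2 feature maps, not the paper's network):

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap: weight each feature map by the spatial mean
    of its gradient, sum over maps, then clip negatives (ReLU)."""
    h, w = len(activations[0]), len(activations[0][0])
    # alpha_k: global-average-pooled gradient per feature map
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for a, fmap in zip(alphas, activations):
        for i in range(h):
            for j in range(w):
                cam[i][j] += a * fmap[i][j]
    return [[max(0.0, v) for v in row] for row in cam]

# Two hypothetical 2x2 feature maps and their class-score gradients.
acts  = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 3.0], [1.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
heatmap = grad_cam(acts, grads)
```

The heatmap is then upsampled to the input resolution and overlaid on the photo to show which regions (e.g., a collapsed facade) drove the "damaged building" prediction.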


Genes ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 717
Author(s):  
Arslan Siraj ◽  
Dae Yeong Lim ◽  
Hilal Tayara ◽  
Kil To Chong

Protein ubiquitylation is an essential post-translational modification process that performs a critical role in a wide range of biological functions, and even a degenerative role in certain diseases; it is consequently used as a promising target for the treatment of various diseases. Owing to the significant role of protein ubiquitylation, these sites can be identified by enzymatic approaches, mass spectrometry analysis, and combinations of multidimensional liquid chromatography and tandem mass spectrometry. However, these large-scale experimental screening techniques are time-consuming, expensive, and laborious. To overcome the drawbacks of experimental methods, machine learning- and deep learning-based predictors have been developed to make predictions in a timely and cost-effective manner. In the literature, several computational predictors have been published across species; however, these predictors are species-specific because patterns differ across species. In this study, we propose a novel approach for predicting plant ubiquitylation sites using a hybrid deep learning model that combines a convolutional neural network and long short-term memory. The proposed method uses the actual protein sequence and physicochemical properties as inputs to the model and provides more robust predictions. The proposed predictor achieved the best results, with accuracy values of 80% and 81% and F-scores of 79% and 82% on 10-fold cross-validation and an independent dataset, respectively. Moreover, we also compared our model against popular ubiquitylation predictors on the independent dataset; the results demonstrate that our model significantly outperforms the other methods.
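The input described above pairs the raw sequence with physicochemical properties. One common way to realize that (a sketch under assumptions — the toy alphabet, hydrophobicity values, and window length here are illustrative, not the paper's actual encoding) is to one-hot encode each residue and append a physicochemical feature per position:

```python
# Hypothetical hydrophobicity values; real predictors use curated
# physicochemical scales covering all 20 amino acids.
HYDRO = {"A": 1.8, "C": 2.5, "K": -3.9, "S": -0.8}
ALPHABET = "ACKS"  # toy alphabet; real models use all 20 residues

def encode_residue(aa):
    """One-hot encoding of the residue plus an appended
    physicochemical feature, mirroring the idea of feeding both the
    raw sequence and residue properties to the network."""
    onehot = [1.0 if aa == a else 0.0 for a in ALPHABET]
    return onehot + [HYDRO[aa]]

def encode_window(seq):
    """Encode a sequence window centred on a candidate lysine (K)."""
    return [encode_residue(aa) for aa in seq]

window = encode_window("ACKSA")  # candidate site at the centre K
```

The resulting position-by-feature matrix is what a CNN layer would scan for local motifs before an LSTM layer models longer-range dependencies along the window.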


2021 ◽  
Vol 11 (17) ◽  
pp. 7940
Author(s):  
Mohammed Al-Sarem ◽  
Abdullah Alsaeedi ◽  
Faisal Saeed ◽  
Wadii Boulila ◽  
Omair AmeerBakhsh

Spreading rumors on social media is considered a cybercrime that affects people, societies, and governments. For instance, some criminals create rumors and post them on the internet, and other people then help to spread them. Spreading rumors can be an example of cyber abuse, where rumors or lies about a victim are posted online to send threatening messages or to share the victim's personal information. During pandemics, a large volume of rumors spreads on social media very quickly, with dramatic effects on people's health. Detecting these rumors manually on such open platforms is very difficult for the authorities. Therefore, several researchers have studied intelligent methods for detecting such rumors. The detection methods can be classified mainly into machine learning-based and deep learning-based methods. Deep learning methods have comparative advantages over machine learning ones, as they do not require preprocessing and feature engineering, and their performance has shown superior improvements in many fields. Therefore, this paper proposes a novel hybrid deep learning model (LSTM–PCNN) for detecting COVID-19-related rumors on social media. The proposed model is based on a Long Short-Term Memory (LSTM) network and Concatenated Parallel Convolutional Neural Networks (PCNN). The experiments were conducted on the ArCOV-19 dataset, which included 3157 tweets; 1480 of them were rumors (46.87%) and 1677 were non-rumors (53.12%). The proposed model showed superior performance compared to other methods in terms of accuracy, recall, precision, and F-score.
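The "concatenated parallel CNN" part of LSTM–PCNN runs several convolutional branches with different kernel widths side by side and concatenates their pooled outputs, so n-gram patterns of different lengths are captured. A minimal library-free sketch of that building block (the scalar "embedding" and kernel values are hypothetical, and the real model operates on vector embeddings):

```python
def conv1d_max(seq, kernel):
    """Valid 1-d convolution followed by max-over-time pooling,
    the standard CNN-for-text building block."""
    k = len(kernel)
    outs = [sum(kernel[i] * seq[t + i] for i in range(k))
            for t in range(len(seq) - k + 1)]
    return max(outs)

def parallel_cnn(seq, kernels):
    """Parallel branches with different kernel widths capture n-gram
    patterns of different lengths; their pooled outputs are
    concatenated into one feature vector."""
    return [conv1d_max(seq, kern) for kern in kernels]

# Hypothetical 1-d embedding of a tweet and two kernel widths (2, 3).
sequence = [0.5, 1.0, -0.5, 2.0, 0.0]
features = parallel_cnn(sequence, [[1.0, 1.0], [1.0, 0.0, 1.0]])
```

In the full model this concatenated feature vector would be combined with the LSTM's sequence representation before the final rumor/non-rumor classification layer.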


Author(s):  
Koyel Ghosh ◽  
Apurbalal Senapati

Coarse-grained tasks are primarily based on text classification, one of the earliest problems in NLP, and are performed at the document and sentence levels. Here, our goal is to identify the technical domain of a given Bangla text. In coarse-grained technical domain classification, a piece of Bangla text provides information about a specific coarse-grained technical domain such as Biochemistry (bioche), Communication Technology (com-tech), Computer Science (cse), Management (mgmt), or Physics (phy). This paper uses a recent deep learning model, Bangla Bidirectional Encoder Representations from Transformers (Bangla BERT), to identify the domain of a given text. Bangla BERT (Bangla-Bert-Base) is a pretrained language model for the Bangla language. We then report Bangla BERT's accuracy and compare it with other models that solve the same problem.
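Domain classification with a BERT-style encoder typically attaches a linear-plus-softmax head to the pooled sentence representation and picks the highest-probability label. As a numerical sketch of just that head (the pooled vector, weights, and 3-d size are hypothetical; Bangla BERT's real pooled output is 768-dimensional):

```python
import math

DOMAINS = ["bioche", "com-tech", "cse", "mgmt", "phy"]

def softmax(logits):
    """Numerically stable softmax over the domain logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(pooled, weight, bias):
    """Linear layer + softmax over the five domain labels, applied
    to the pooled representation produced by the encoder."""
    logits = [sum(w * x for w, x in zip(row, pooled)) + b
              for row, b in zip(weight, bias)]
    probs = softmax(logits)
    return DOMAINS[probs.index(max(probs))], probs

# Hypothetical 3-d pooled vector and toy head parameters.
pooled = [0.2, -0.1, 0.4]
W = [[0.1, 0.0, 0.0],
     [0.0, 0.1, 0.0],
     [0.0, 0.0, 0.1],
     [0.5, 0.5, 0.5],
     [1.0, 0.0, 1.0]]
b = [0.0] * 5
label, probs = classify(pooled, W, b)
```

During fine-tuning, the head weights (and usually the encoder itself) are trained on labelled Bangla documents from the five domains.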


Author(s):  
Pakindessama M. Konkobo ◽  
Rui Zhang ◽  
Siyuan Huang ◽  
Toussida T. Minoungou ◽  
Jose A. Ouedraogo ◽  
...  

Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 663-676
Author(s):  
Tapas Guha ◽  
K.G. Mohan

With the omnipresence of user feedback on social media, mining relevant opinions and extracting the underlying sentiment to analyze emotion towards a specific product, person, topic, or event has become a vast domain of research in recent times. A thorough survey of the early unimodal and multimodal sentiment classification approaches reveals that researchers mostly relied on either corpus-based techniques or machine learning algorithms. Lately, deep learning models have progressed profoundly in the area of image processing, and this success has been directed towards enhancing sentiment categorization. This paper proposes a hybrid deep learning model consisting of a Convolutional Neural Network (CNN) and a stacked bidirectional Long Short-Term Memory (BiLSTM) network over pre-trained word vectors to achieve long-term sentiment analysis. This work experiments with various hyperparameters and optimization techniques to avoid overfitting and achieve optimal performance. The model has been validated on two standard sentiment datasets, the Stanford Large Movie Review dataset (IMDB) and the Stanford Sentiment Treebank 2 dataset (SST2). It achieves a competitive advantage over other models such as CNN, LSTM, and a CNN-LSTM ensemble by attaining better accuracy, and it also produces a high F-measure.
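The "bidirectional" part of the BiLSTM means each token's representation combines a forward pass (left context) and a backward pass (right context). A toy sketch of that combination, using a scalar tanh RNN cell in place of a real LSTM and hypothetical scalar "word vectors" and weights:

```python
import math

def rnn_pass(seq, w_in, w_rec):
    """Minimal tanh RNN: returns the hidden state at every step."""
    h, states = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states

def bidirectional(seq, w_in=0.5, w_rec=0.5):
    """Pair forward and backward hidden states so each position sees
    both left and right context: the core idea behind a BiLSTM
    layer, shown here with a toy RNN cell."""
    fwd = rnn_pass(seq, w_in, w_rec)
    bwd = rnn_pass(seq[::-1], w_in, w_rec)[::-1]
    return [(f, b) for f, b in zip(fwd, bwd)]

# Hypothetical scalar "word vectors" for a five-token review.
states = bidirectional([1.0, -1.0, 0.5, 0.0, 2.0])
```

In the proposed hybrid, CNN layers first extract local n-gram features from the pre-trained word vectors, and the stacked bidirectional layers then model dependencies across the whole review.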


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Nida Aslam ◽  
Irfan Ullah Khan ◽  
Farah Salem Alotaibi ◽  
Lama Abdulaziz Aldaej ◽  
Asma Khaled Aldubaikil

The pervasive use and growth of social media networks have provided a platform for fake news to spread quickly among people. Fake news often misleads people and creates false perceptions in society. The spread of low-quality news on social media has negatively affected individuals and society. In this study, we propose an ensemble-based deep learning model to classify news as fake or real using the LIAR dataset. Due to the nature of the dataset attributes, two deep learning models were used: for the textual attribute "statement," a Bi-LSTM-GRU-dense deep learning model, and for the remaining attributes, a dense deep learning model. Experimental results showed that the proposed approach achieved an accuracy of 0.898, a recall of 0.916, a precision of 0.913, and an F-score of 0.914 using only the statement attribute. Moreover, the outcome of the proposed models is remarkable compared with that of previous studies on fake news detection using the LIAR dataset.
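The abstract does not state how the two branches' outputs are combined; one common ensemble choice is a weighted average of the two predicted probabilities followed by thresholding. A sketch under that assumption (the branch probabilities and the 0.5 weight below are hypothetical):

```python
def ensemble(prob_text, prob_meta, w_text=0.5):
    """Weighted average of the two branch probabilities; the combined
    score is thresholded to label the claim fake or real."""
    p = w_text * prob_text + (1 - w_text) * prob_meta
    return ("fake" if p >= 0.5 else "real"), p

# Hypothetical branch outputs for one LIAR statement:
# text branch (Bi-LSTM-GRU-dense) and metadata branch (dense).
label, p = ensemble(prob_text=0.8, prob_meta=0.4)
```

Other combination rules (majority vote, a learned meta-classifier over the two outputs) are equally plausible; the branch weight can also be tuned on a validation split.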


2020 ◽  
Vol 12 (24) ◽  
pp. 4068
Author(s):  
Zihao Leng ◽  
Jie Zhang ◽  
Yi Ma ◽  
Jingyu Zhang

The Liaodong Shoal in the east of the Bohai Sea exhibits marked variation in water depth: clear shallow areas coexist with deep turbid areas, producing complex submarine topography. Traditional semi-theoretical and semi-empirical models often struggle to provide optimal inversion results here. In this paper, based on the traditional principle of water depth inversion in shallow areas, a new framework is proposed that also covers the deep turbid sea area. This framework extends the application of traditional optical water depth inversion methods and can meet the needs of depth inversion in a composite sea environment. Moreover, the gated recurrent unit (GRU) deep-learning model is introduced to approximate the unified inversion model by numerical calculation. Based on this inversion framework, water depth inversion is carried out using wide-swath images from the GF-1 satellite, followed by analysis and accuracy evaluation. The results show that: (1) For the overall water depth inversion, a coefficient of determination (R2) higher than 0.9 and an MRE lower than 20% are obtained, and these evaluation indices show that the GRU model can retrieve the underwater topography of this region well. (2) Compared with the traditional log-linear model, the Stumpf model, and a multi-layer feedforward neural network, the GRU model shows significant improvement in all evaluation indices. (3) The model performs best in the 24–32 m depth section, with an MRE of about 4% and an MAE of about 1.42 m, making it especially suitable for inversion in this depth range. (4) The inversion map indicates that the model reflects the regional seabed characteristics of multiple radial sand ridges well, and the overall inversion result is accurate and practical.
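The evaluation metrics quoted above (R2, MRE, MAE) have standard definitions; as a small self-contained sketch, they can be computed like this (the depth values are made-up examples, not the paper's data):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def mre(y_true, y_pred):
    """Mean relative error, in percent."""
    return 100 * sum(abs(t - p) / t
                     for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the data (metres here)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical depths (m): echo-sounding truth vs. model inversion.
truth = [10.0, 20.0, 30.0, 40.0]
preds = [11.0, 19.0, 31.5, 38.0]
r2 = r_squared(truth, preds)
rel_err = mre(truth, preds)   # percent
abs_err = mae(truth, preds)   # metres
```

Comparing these three metrics per depth section, as the paper does, reveals where the inversion is most reliable (here, the 24–32 m section).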

