Analysis of Parkinson’s Disease using Deep Learning and Word Embedding Models

2019 ◽  
Vol 2 (3) ◽  
pp. 786-797
Author(s):  
Feyza Cevik ◽  
Zeynep Hilal Kilimci

Parkinson's disease is a common neurodegenerative neurological disorder that affects the patient's quality of life, has significant social and economic effects, and is difficult to diagnose early because its symptoms appear gradually. Discussion of Parkinson’s disease on social media platforms such as Twitter gives patients a channel to communicate with each other during both the diagnosis and treatment stages of the disease. The purpose of this work is to evaluate and compare the sentiment analysis of people about Parkinson's disease using deep learning and word embedding models. To the best of our knowledge, this is the very first study to analyze Parkinson's disease from social media using word embedding models and deep learning algorithms. In this study, Word2Vec, GloVe, and FastText are employed as word embedding models to enrich tweets in terms of semantics, context, and syntax. Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory Networks (LSTMs) are implemented for the classification task. This study demonstrates the efficiency of using word embedding models and deep learning algorithms to understand patients’ needs, and it provides a valuable contribution to the treatment process by analyzing their sentiments with 93.63% classification accuracy.
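
As an illustration of the embedding step, here is a minimal sketch, assuming gensim 4.x and toy tokenized tweets rather than the authors' corpus, of how Word2Vec and FastText vectors could be stacked into fixed-length inputs for a CNN/RNN/LSTM classifier (GloVe vectors are typically loaded from pre-trained files instead):

```python
# Minimal sketch: train Word2Vec and FastText embeddings on tokenized
# tweets with gensim (4.x API), then map each tweet to a zero-padded
# sequence of vectors for a downstream CNN/RNN/LSTM classifier.
import numpy as np
from gensim.models import Word2Vec, FastText

# Hypothetical tokenized tweets; the real corpus is not shown in the paper.
tweets = [["parkinsons", "tremor", "diagnosis"],
          ["new", "treatment", "gives", "hope"]]

w2v = Word2Vec(tweets, vector_size=100, window=5, min_count=1)
ft = FastText(tweets, vector_size=100, window=5, min_count=1)

def embed(tweet, model, max_len=30):
    """Stack per-token vectors, truncate, and zero-pad to a fixed length."""
    vecs = [model.wv[tok] for tok in tweet if tok in model.wv][:max_len]
    vecs += [np.zeros(model.wv.vector_size)] * (max_len - len(vecs))
    return np.stack(vecs)  # shape: (max_len, vector_size)

X = np.array([embed(t, w2v) for t in tweets])  # input tensor for a CNN/LSTM
```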

Author(s):  
Abdullah Talha Kabakus

As a natural consequence of the many advantages they offer their users, social media platforms have become a part of daily life. Recent studies emphasize the need for an automated way of detecting offensive posts on social media, since these ‘toxic’ posts have become pervasive. To this end, a novel toxic post detection approach based on Deep Neural Networks was proposed within this study. Given that several word embedding methods exist, we shed light on which word embedding method produces better results when employed with the five most common types of deep neural networks. To this end, the word vectors for the given comments were obtained through four different methods: three pre-trained word embedding models and the embedding layer of the deep neural networks themselves. Eventually, a total of twenty benchmark models were proposed and both trained and evaluated on a gold-standard dataset of tweets. According to the experimental results, the best performance was obtained by a proposed model that did not employ pre-trained word vectors; it outperformed the state-of-the-art works, which implies the effective embedding ability of the networks themselves. Another key finding from the conducted experiments is that the models that constructed word embeddings through their own embedding layers obtained higher scores and converged much faster than the models that utilized pre-trained word vectors.
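
For the embedding-layer variant that performed best, a minimal sketch in Keras is shown below; the architecture, vocabulary size, and dimensions are assumptions rather than the paper's configuration, and the word vectors are learned from scratch during training instead of being loaded from pre-trained Word2Vec/GloVe/FastText files:

```python
# Sketch of the "embedding layer" variant: the Embedding weights are
# randomly initialized and trained jointly with the classifier.
import tensorflow as tf

vocab_size, embed_dim = 20000, 128  # assumed hyperparameters

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim),        # learned from scratch
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)), # illustrative backbone
    tf.keras.layers.Dense(1, activation="sigmoid"),          # toxic vs. non-toxic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```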


Social media sites such as Twitter, Facebook, and Tumblr are vastly popular among the general population. People post updates, tweets, etc., and almost 75% of the time these posts carry a mixture of emotions. The idea is to analyze suicidal-depression tendencies in adults with traumatizing experiences or socio-economic difficulties. This makes the overall sentiment analysis especially complex, which we aim to resolve in this project by breaking every sentence down into individual words, converting each of them, along with emoticons and hashtags, into tokens, and then applying deep learning algorithms to accurately determine the sentiment of a given message. The objective of the project is to determine the suicidal sentiment of various depressed individuals, and how likely they are to commit suicide, on the basis of their tweets. A rough tokenization sketch is given below.
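
The following Python sketch illustrates such a tokenizer; the regular expression is illustrative, not the project's actual implementation:

```python
# Rough sketch of the tokenization step described above: split tweets
# into word, hashtag, mention, and emoticon tokens before classification.
import re

TOKEN_RE = re.compile(
    r"[:;=8][-o*']?[)\](\[dDpP/\\]"  # common ASCII emoticons, e.g. :-) :(
    r"|#\w+"                         # hashtags
    r"|@\w+"                         # @-mentions
    r"|\w+"                          # plain words
)

def tokenize(tweet: str):
    return TOKEN_RE.findall(tweet.lower())

print(tokenize("Feeling hopeless again :( #depression"))
# ['feeling', 'hopeless', 'again', ':(', '#depression']
```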


2021 ◽  
Author(s):  
Panagiotis Mavritsakis ◽  
Marie-Claire ten Veldhuis ◽  
Marc Schleiss ◽  
Riccardo Taormina

Large parts of the world rely on rainfed agriculture for their food security. In Africa, 90% of agricultural yields rely only on precipitation for irrigation purposes, and approximately 80% of the population’s livelihood is highly dependent on its food production. Parts of Ghana are prone to droughts and flood events due to the increasing variability of precipitation. Crop growth is sensitive to wet- and dry-spell phenomena during the rainy season. To support rural communities and small farmers in their efforts to adapt to climate change and natural variability, it is crucial to have good predictions of rainfall and related dry/wet-spell indices.

This research constitutes an attempt to assess the dry-spell patterns in the northern region of Ghana, near Burkina Faso. We aim to develop a model that, by exploiting satellite products, overcomes the poor temporal and spatial coverage of existing ground precipitation measurements. For this purpose, 14 meteorological stations featuring different temporal coverage are used together with satellite-based precipitation and cloud-top temperature products.

We will compare conventional copula models and deep learning algorithms to establish a link between satellite products and field rainfall data for dry-spell assessment. The deep learning architecture should combine the feature extraction of convolution (Convolutional Neural Networks) with the ability to capture a sequence (Recurrent Neural Networks); the architecture used for this purpose is the Long Short-Term Memory network (LSTM). Regarding the copula modeling, the Archimedean, the Gaussian, and the extreme-value copulas will be examined as modeling options.

Using these models, we will attempt to exploit the long temporal coverage of the satellite products in order to overcome the poor temporal and spatial coverage of existing ground precipitation measurements. In doing so, our final objective is to enhance our knowledge of dry-spell characteristics and thus provide more reliable climatic information to farmers in Northern Ghana.
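
By way of illustration only, and not the authors' code, a minimal Keras sketch of such an LSTM could map a window of daily satellite features to a wet/dry-day label at a station; the window length and feature count are assumptions:

```python
# Illustrative sketch: an LSTM mapping a window of daily satellite
# features (e.g. precipitation estimate, cloud-top temperature) to a
# wet/dry-day label at a ground station.
import tensorflow as tf

window, n_features = 30, 2  # assumed: 30-day window, 2 satellite inputs

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(wet day)
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```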


Author(s):  
Sindhu P. Menon

In the last couple of years, artificial neural networks have gained considerable momentum, and their results can be enhanced by making the networks deeper. At the same time, the volume of data being generated has exploded, giving rise to big data and, with it, many challenges, of which data quality is one of the most important. Deep learning models can improve the quality of data. In this chapter, an attempt has been made to review deep supervised and deep unsupervised learning algorithms and the various activation functions they use. Challenges in deep learning have also been discussed.
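
The chapter surveys activation functions conceptually; as an assumption about which ones are meant, the standard choices are written out in NumPy below for reference:

```python
# Common activation functions, written out in NumPy (illustrative).
import numpy as np

def sigmoid(x):  # squashes to (0, 1); historically common, saturates easily
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):     # zero-centred variant of the sigmoid, range (-1, 1)
    return np.tanh(x)

def relu(x):     # default choice in deep networks; no saturation for x > 0
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):  # keeps a small gradient for x < 0
    return np.where(x > 0, x, alpha * x)
```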


Author(s):  
Rene Avalloni de Morais ◽  
Baidya Nath Saha

Deep learning algorithms have driven dramatic progress in natural language processing and automatic speech recognition. However, the accuracy of deep learning algorithms depends on the amount and quality of the data, and training deep models requires high-performance computing resources. Against this backdrop, this paper addresses an end-to-end speech recognition system in which we fine-tune the Mozilla DeepSpeech architecture using two different datasets: the LibriSpeech clean dataset and the Harvard speech dataset. We train Long Short-Term Memory (LSTM) based deep Recurrent Neural Network (RNN) models on the Google Colab platform using its GPU resources. Extensive experimental results demonstrate that the Mozilla DeepSpeech model can be fine-tuned on different audio datasets to recognize speech successfully.
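
The fine-tuning itself runs through Mozilla's training scripts; as an illustration of the inference side, the following sketch uses the deepspeech Python package (assuming the v0.9.x API and hypothetical model and audio file paths):

```python
# Minimal inference sketch with the deepspeech Python package.
# Expects a 16 kHz, 16-bit mono WAV file.
import wave
import numpy as np
import deepspeech

model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")      # hypothetical checkpoint path
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")  # optional language-model scorer

with wave.open("sample.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))  # decoded transcript
```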


Kybernetes ◽  
2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shubham Bharti ◽  
Arun Kumar Yadav ◽  
Mohit Kumar ◽  
Divakar Yadav

Purpose
With the rise of social media platforms, an increasing number of cyberbullying cases has emerged. Every day, a large number of people, especially teenagers, become the victims of cyber abuse. Cyberbullying can have a long-lasting impact on the victim's mind: the victim may develop social anxiety, engage in self-harm, go into depression or, in extreme cases, be driven to suicide. This paper aims to evaluate various techniques for automatically detecting cyberbullying in tweets using machine learning and deep learning approaches.

Design/methodology/approach
The authors first applied machine learning algorithms and, after analyzing the experimental results, postulated that deep learning algorithms perform better for the task. Word embedding techniques were used for word representation in model training. The pre-trained GloVe embedding was used to generate word embeddings; different versions of GloVe were used and their performance was compared. Bi-directional long short-term memory (BLSTM) was used for classification. A sketch of this pipeline is given after the abstract.

Findings
The dataset contains 35,787 labeled tweets. The GloVe840 word embedding technique along with BLSTM provided the best results on the dataset, with an accuracy, precision and F1 measure of 92.60%, 96.60% and 94.20%, respectively.

Research limitations/implications
If a word is not present in the pre-trained embedding (GloVe), it may be given a random vector representation that does not correspond to its actual meaning. This means that an out-of-vocabulary (OOV) word may not be represented suitably, which can affect the detection of cyberbullying tweets. The problem may be rectified through the use of character-level embeddings of words.

Practical implications
The findings of the work may inspire entrepreneurs to leverage the proposed approach to build deployable systems that detect cyberbullying in different contexts, such as the workplace or school, and may also draw the attention of lawmakers and policymakers to create systemic tools to tackle the ills of cyberbullying.

Social implications
Cyberbullying, if effectively detected, may save victims from various psychological problems which, in turn, may lead society to a healthier and more productive life.

Originality/value
The proposed method produces results that outperform the state-of-the-art approaches in detecting cyberbullying from tweets. It uses a large dataset, created by intelligently merging two publicly available datasets. Further, a comprehensive evaluation of the proposed methodology has been presented.
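
A minimal Keras sketch of the GloVe840-plus-BLSTM pipeline is given below; the file name, vocabulary handling, and layer sizes are assumptions rather than the paper's exact configuration:

```python
# Sketch: load pre-trained GloVe (840B, 300d) vectors into a frozen
# Embedding layer that feeds a bi-directional LSTM classifier.
import numpy as np
import tensorflow as tf

embed_dim, vocab_size = 300, 50000    # glove.840B.300d is 300-dimensional
word_index = {"bully": 1, "stop": 2}  # toy stand-in for the tokenizer's word->id map

# Build the embedding matrix from the GloVe text file.
embedding_matrix = np.zeros((vocab_size, embed_dim))
with open("glove.840B.300d.txt", encoding="utf-8") as f:
    for line in f:
        word, *vec = line.rstrip().split(" ")
        idx = word_index.get(word)
        if idx is not None and idx < vocab_size and len(vec) == embed_dim:
            embedding_matrix[idx] = np.asarray(vec, dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False),                                     # keep GloVe vectors frozen
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # BLSTM classifier
    tf.keras.layers.Dense(1, activation="sigmoid"),           # bullying vs. non-bullying
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```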


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3279
Author(s):  
Maria Habib ◽  
Mohammad Faris ◽  
Raneem Qaddoura ◽  
Manal Alomari ◽  
Alaa Alomari ◽  
...  

Maintaining a high quality of conversation between doctors and patients is essential in telehealth services, where efficient and competent communication is important to promote patient health. Assessing the quality of medical conversations is often handled through human auditory-perceptual evaluation. Typically, trained experts are needed for such tasks, as they follow systematic evaluation criteria. However, the rapid daily increase of consultations makes the evaluation process inefficient and impractical. This paper investigates the automation of the quality assessment process for patient–doctor voice-based conversations in a telehealth service using a deep-learning-based classification model. The data consist of audio recordings obtained from Altibbi, a digital health platform that provides telemedicine and telehealth services in the Middle East and North Africa (MENA). The objective is to assist Altibbi’s operations team in evaluating the provided consultations in an automated manner. The proposed model is developed using three sets of features: those extracted at the signal level, at the transcript level, and at the two levels combined. At the signal level, various statistical and spectral measures are calculated to characterize the spectral envelope of the speech recordings. At the transcript level, a pre-trained embedding model is utilized to encompass the semantic and contextual features of the textual information. Additionally, the hybrid of the signal and transcript levels is explored and analyzed. The designed classification model relies on stacked layers of deep neural networks and convolutional neural networks. Evaluation results show that the model achieved a higher level of precision when compared with the manual evaluation approach followed by Altibbi’s operations team.
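
The paper does not name its signal-processing toolkit; assuming librosa, the signal-level step might look like the following sketch, which summarizes each recording's spectral envelope with statistics of MFCC and spectral-descriptor tracks:

```python
# Illustrative signal-level feature extraction for one recording:
# compute MFCCs and spectral descriptors, then reduce each track to
# its mean and standard deviation to get a fixed-length vector.
import numpy as np
import librosa

y, sr = librosa.load("consultation.wav", sr=16000)  # hypothetical recording

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)

features = np.concatenate([
    mfcc.mean(axis=1), mfcc.std(axis=1),
    [centroid.mean(), centroid.std()],
    [rolloff.mean(), rolloff.std()],
])  # input vector for the stacked DNN/CNN classifier
```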


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning features that assist in understanding complex patterns precisely. This study proposed a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short Term Memory (LSTM). The MobileNet V2 model proved efficient, achieving good accuracy while remaining able to run on lightweight computational devices, and the proposed model is efficient at maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Networks (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. On the HAM10000 dataset, the proposed method outperformed the other methods with more than 85% accuracy. Its robustness in recognizing the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, results in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action. It helps the patient and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
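
A rough Keras sketch of the MobileNet V2 plus LSTM idea follows; the input size, the way features are fed to the LSTM, and the layer sizes are assumptions, with only the seven-class HAM10000 output taken from the dataset itself:

```python
# Sketch: frozen MobileNet V2 features per image, reshaped into a
# length-1 sequence so an LSTM can carry stateful context.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False,
    pooling="avg", weights="imagenet")
base.trainable = False  # keep the pre-trained backbone frozen

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    base,
    tf.keras.layers.Reshape((1, 1280)),              # 1280-d feature as a sequence
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 HAM10000 lesion classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```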


Author(s):  
Yuheng Hu ◽  
Yili Hong

Residents often rely on newspapers and television to gather hyperlocal news for community awareness and engagement. More recently, social media have emerged as an increasingly important source of hyperlocal news. Thus far, the literature on using social media to create desirable societal benefits, such as civic awareness and engagement, is still in its infancy. One key challenge in this research stream is to distill information from noisy social media data streams and deliver it to community members in a timely and accurate manner. In this work, we develop SHEDR (social media–based hyperlocal event detection and recommendation), an end-to-end neural event detection and recommendation framework with a particular use case for Twitter to facilitate residents’ information seeking of hyperlocal events. The key model innovation in SHEDR lies in the design of the hyperlocal event detector and the event recommender. First, we harness the power of two popular deep neural network models, the convolutional neural network (CNN) and long short-term memory (LSTM), in a novel joint CNN-LSTM model to characterize spatiotemporal dependencies for capturing unusualness in a region of interest, which is classified as a hyperlocal event. Next, we develop a neural pairwise ranking algorithm for recommending detected hyperlocal events to residents based on their interests. To alleviate the sparsity issue and improve personalization, our algorithm incorporates several types of contextual information covering topic, social, and geographical proximities. We perform comprehensive evaluations based on two large-scale data sets comprising geotagged tweets covering Seattle and Chicago. We demonstrate the effectiveness of our framework in comparison with several state-of-the-art approaches. We show that our hyperlocal event detection and recommendation models consistently and significantly outperform other approaches in terms of precision, recall, and F1 scores.

Summary of Contribution: In this paper, we focus on a novel and important, yet largely underexplored application of computing: how to improve civic engagement in local neighborhoods via local news sharing and consumption based on social media feeds. To address this question, we propose two new computational and data-driven methods: (1) a deep learning–based hyperlocal event detection algorithm that scans spatially and temporally to detect hyperlocal events from geotagged Twitter feeds; and (2) a personalized deep learning–based hyperlocal event recommender system that systematically integrates several contextual cues, such as topical, geographical, and social proximity, to recommend the detected hyperlocal events to potential users. We conduct a series of experiments to examine our proposed models. The outcomes demonstrate that our algorithms are significantly better than the state-of-the-art models and can provide users with more relevant information about the local neighborhoods that they live in, which in turn may boost their community engagement.
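
As a conceptual sketch only (the architecture details are assumptions, not SHEDR's published configuration), a joint CNN-LSTM detector could encode each time step's spatial tweet-activity grid with a CNN and model the sequence of encodings with an LSTM:

```python
# Conceptual joint CNN-LSTM sketch: a CNN encodes each hourly spatial
# activity grid, an LSTM models the sequence, and a sigmoid head flags
# an unusual (hyperlocal-event) window.
import tensorflow as tf

steps, grid, channels = 24, 16, 1  # assumed: 24 hourly 16x16 activity grids

cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(grid, grid, channels)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(steps, grid, grid, channels)),
    tf.keras.layers.TimeDistributed(cnn),            # one encoding per time step
    tf.keras.layers.LSTM(32),                        # temporal dependencies
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(hyperlocal event)
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```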

