A Comparative Study of Detection and Classification of Emotions on Social Media Using SVM and Naïve Bayes Techniques

2021 ◽  
Author(s):  
Aysha A ◽  
Syed Meeral MK ◽  
Bushra KM

The rapid pace of innovation and the dynamics of technology have made human life increasingly dependent on it. Microblogging and social networking sites such as Twitter and Facebook have become an inseparable part of daily life, and through these platforms people express their emotions and form opinions about particular situations or circumstances. This paper presents a brief comparison of the detection and classification of emotions on social media using SVM and Naïve Bayes classifiers. Twitter messages are used as the input dataset because they contain a broad, varied, and freely accessible set of emotions. The approach uses hashtags as labels to train supervised classifiers to detect multiple classes of emotion on potentially large data sets without the need for manual intervention. We investigate the usefulness of a number of features for detecting emotions, including unigrams, unigram symbols, negations and punctuation, using SVM and Naïve Bayes classifiers.
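
The general recipe, hashtag-derived labels feeding supervised classifiers, can be sketched briefly. The snippet below is a minimal illustration assuming scikit-learn, with a placeholder tweet list and an illustrative emotion-hashtag set; it covers unigram and punctuation features but not the negation handling mentioned above, and it is not the authors' actual pipeline.

```python
# Minimal sketch: hashtag-labelled emotion classification with Naive Bayes and SVM.
# The tiny inline dataset and the emotion hashtags are placeholders, not the
# paper's corpus or label set.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

EMOTION_TAGS = {"#joy": "joy", "#anger": "anger", "#sadness": "sadness"}

def label_and_strip(tweet):
    """Use an emotion hashtag as the label and remove it from the text."""
    for tag, emotion in EMOTION_TAGS.items():
        if tag in tweet.lower():
            return re.sub(tag, "", tweet, flags=re.IGNORECASE).strip(), emotion
    return None

tweets = [
    "Just got the job, best day ever! #joy",
    "Stuck in traffic for two hours... #anger",
    "Missing my old friends tonight #sadness",
    "This weather is perfect :) #joy",
]
pairs = [p for p in map(label_and_strip, tweets) if p]
texts, labels = zip(*pairs)

for name, clf in [("Naive Bayes", MultinomialNB()), ("Linear SVM", LinearSVC())]:
    # Whitespace tokenisation keeps punctuation and emoticons (e.g. "ever!", ":)")
    # as feature-bearing unigram tokens instead of stripping them.
    model = make_pipeline(CountVectorizer(token_pattern=r"[^\s]+"), clf)
    model.fit(texts, labels)
    print(name, model.predict(["Why does this always happen to me?!"]))
```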

Author(s):  
Adam Kiersztyn ◽  
Paweł Karczmarek ◽  
Krystyna Kiersztyn ◽  
Witold Pedrycz

2021 ◽  
Vol 251 ◽  
pp. 02054
Author(s):  
Olga Sunneborn Gudnadottir ◽  
Daniel Gedon ◽  
Colin Desmarais ◽  
Karl Bengtsson Bernander ◽  
Raazesh Sainudiin ◽  
...  

In recent years, machine-learning methods have become increasingly important for the experiments at the Large Hadron Collider (LHC). They are utilised in everything from trigger systems to reconstruction and data analysis. The recent UCluster method is a general model providing unsupervised clustering of particle physics data that can be easily modified to provide solutions for a variety of different decision problems. In the current paper, we improve on the UCluster method by adding the option of training the model in a scalable and distributed fashion, thereby extending its utility to learn from arbitrarily large data sets. UCluster combines a graph-based neural network called ABCnet with a clustering step, using a combined loss function in the training phase. The original code is publicly available in TensorFlow v1.14 and has previously been trained on a single GPU. It shows a clustering accuracy of 81% when applied to the problem of multi-class classification of simulated jet events. Our implementation adds distributed training functionality by utilising the Horovod distributed training framework, which necessitated a migration of the code to TensorFlow v2. Together with the use of parquet files for splitting the data between compute nodes, the distributed training makes the model scalable to any amount of input data, something that will be essential for use with real LHC data sets. We find that the model is well suited for distributed training, with the training time decreasing in direct relation to the number of GPUs used. However, further improvement through a more exhaustive, and possibly distributed, hyper-parameter search is required in order to achieve the reported accuracy of the original UCluster method.
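
For reference, the sketch below shows the Horovod data-parallel training pattern described above in TensorFlow v2, with each worker reading its own parquet shard and gradients averaged across workers. The model, file layout and column names are placeholders rather than the UCluster code itself.

```python
# Hypothetical Horovod + TensorFlow 2 data-parallel training loop; the network
# and the parquet shard layout are stand-ins, not the actual UCluster/ABCnet model.
import pandas as pd
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()

# Pin each worker process to one GPU.
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Each rank loads only its own parquet shard (hypothetical file names/columns).
shard = pd.read_parquet(f"jets_shard_{hvd.rank()}.parquet")
features = tf.convert_to_tensor(shard.drop(columns=["label"]).values, tf.float32)
labels = tf.convert_to_tensor(shard["label"].values, tf.int64)
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(10_000).batch(256)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3),  # e.g. three jet classes
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Scale the learning rate by the number of workers, as Horovod recommends.
optimizer = tf.keras.optimizers.Adam(1e-3 * hvd.size())

@tf.function
def train_step(x, y, first_batch):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    # Average gradients across all workers before applying them.
    tape = hvd.DistributedGradientTape(tape)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    if first_batch:
        # Start every worker from identical weights and optimizer state.
        hvd.broadcast_variables(model.variables, root_rank=0)
        hvd.broadcast_variables(optimizer.variables(), root_rank=0)
    return loss

for step, (x, y) in enumerate(dataset):
    loss = train_step(x, y, step == 0)
    if hvd.rank() == 0 and step % 50 == 0:
        print(f"step {step}, loss {loss:.4f}")
```

Launched with, for example, `horovodrun -np 4 python train.py`, each of the four processes trains on its own shard while Horovod keeps the model replicas synchronised.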


2012 ◽  
Vol 7 (1) ◽  
pp. 174-197 ◽  
Author(s):  
Heather Small ◽  
Kristine Kasianovitz ◽  
Ronald Blanford ◽  
Ina Celaya

Social networking sites and other social media have enabled new forms of collaborative communication and participation for users, and created additional value as rich data sets for research. Research based on accessing, mining, and analyzing social media data has risen steadily over the last several years and is increasingly multidisciplinary; researchers from the social sciences, humanities, computer science and other domains have used social media data as the basis of their studies. The broad use of this form of data has implications for how curators address preservation, access and reuse for an audience with divergent disciplinary norms related to privacy, ownership, authenticity and reliability. In this paper, we explore how the characteristics of the Twitter platform, coupled with an ambiguous and evolving understanding of privacy in networked communication, and divergent disciplinary understandings of the resulting data, combine to create complex issues for curators trying to ensure broad-based and ethical reuse of Twitter data. We provide a case study of a specific data set to illustrate how data curators can engage with the topics and questions raised in the paper. While some initial suggestions are offered to librarians and other information professionals who are beginning to receive social media data from researchers, our larger goal is to stimulate discussion and prompt additional research on the curation and preservation of social media data.


2012 ◽  
Vol 4 (4) ◽  
pp. 15-30 ◽  
Author(s):  
John Haggerty ◽  
Mark C. Casson ◽  
Sheryllynne Haggerty ◽  
Mark J. Taylor

The increasing use of social media, applications or platforms that allow users to interact online, ensures that this environment will provide a useful source of evidence for the forensics examiner. Current tools for the examination of digital evidence find this data problematic as they are not designed for the collection and analysis of online data. Therefore, this paper presents a framework for the forensic analysis of user interaction with social media. In particular, it presents an inter-disciplinary approach for the quantitative analysis of user engagement to identify relational and temporal dimensions of evidence relevant to an investigation. This framework enables the analysis of large data sets from which a (much smaller) group of individuals of interest can be identified. In this way, it may be used to support the identification of individuals who might be ‘instigators’ of a criminal event orchestrated via social media, or as a means of potentially identifying those who might be involved in the ‘peaks’ of activity. In order to demonstrate the applicability of the framework, this paper applies it to a case study of actors posting to a social media website.
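
As a rough illustration of the kind of relational and temporal analysis described, the sketch below (assuming pandas and networkx, with a hypothetical posts table) flags hours of peak activity and ranks accounts by how often they are replied to. It is not the authors' framework, only an indication of how such dimensions might be computed.

```python
# Hypothetical example: temporal peaks and reply-network centrality for a small
# posts table (author, reply_to, timestamp). Column names and data are invented.
import pandas as pd
import networkx as nx

posts = pd.DataFrame({
    "author":   ["a", "b", "c", "a", "d", "b", "a"],
    "reply_to": [None, "a", "a", "b", "a", "c", None],
    "timestamp": pd.to_datetime([
        "2012-08-01 10:05", "2012-08-01 10:20", "2012-08-01 10:40",
        "2012-08-01 11:10", "2012-08-01 21:00", "2012-08-01 21:15",
        "2012-08-02 09:00",
    ]),
})

# Temporal dimension: posts per hour; flag hours well above the mean as peaks.
per_hour = posts.set_index("timestamp").resample("1h").size()
peaks = per_hour[per_hour > per_hour.mean() + per_hour.std()]
print("Activity peaks:\n", peaks)

# Relational dimension: directed reply graph; accounts attracting the most
# interaction (highest in-degree centrality) are candidate individuals of interest.
replies = posts.dropna(subset=["reply_to"])
graph = nx.DiGraph()
graph.add_edges_from(zip(replies["author"], replies["reply_to"]))
centrality = nx.in_degree_centrality(graph)
candidates = sorted(centrality, key=centrality.get, reverse=True)[:3]
print("Most-replied-to accounts:", candidates)
```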


2018 ◽  
Vol 7 (2.7) ◽  
pp. 786 ◽  
Author(s):  
T Sajana ◽  
M. R. Narasingarao

Malaria is rampant in semi-urban and rural areas, especially in resource-poor developing countries. In datasets for diseases such as malaria and dengue, there are typically far more negative patients (non-occurrence of the disease) than patients suffering from the disease (positive cases). Developing a model-based decision support system with such unbalanced datasets is a cause for concern, and it is necessary to have a model that predicts the disease accurately. Classification of imbalanced malaria disease data becomes a crucial task in the medical application domain because most conventional machine learning algorithms perform very poorly in classifying whether or not a patient is affected by malaria. In imbalanced data, majority (unaffected) class samples dominate the minority (affected) class samples, leading to class imbalance. To overcome the class imbalance problem, balancing the data samples is the best solution, as it produces better accuracy in the classification of minority samples. The aim of this research is to present a comparative study of classifying imbalanced malaria disease data using the Naive Bayesian classifier in different environments, namely weka and the R language. We present a descriptive clinical study of 165 patients of different age groups, collected at the medical wards of Narasaraopet from 2014-17. The Synthetic Minority Oversampling Technique (SMOTE) was used to balance the class distribution, and a comparative study was then performed on the dataset using the Naive Bayesian algorithm on the two platforms. Of the balanced data, 70% was used to train the Naive Bayesian algorithm and the rest was used to test the model in both the weka and R programming environments. Experimental results indicate that classification of the malaria disease data in the weka environment achieves a higher accuracy (88.5%) than the Naive Bayesian algorithm in the R programming language (87.5%). The impact of vector-borne diseases on medical applications is very high, and prediction of a disease like malaria is the need of the hour; this is possible only with a suitable model for a given dataset. Hence, a Naive Bayesian model has been developed for the current research.
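
A compact sketch of the described pipeline, written here in Python with imbalanced-learn and scikit-learn rather than weka or R, and with synthetic stand-in data instead of the 165-patient dataset: the classes are balanced with SMOTE, then a Naive Bayes classifier is trained and tested on a 70/30 split.

```python
# Sketch only: SMOTE balancing followed by Naive Bayes on a 70/30 split.
# The synthetic data below stands in for the malaria dataset used in the study.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Imbalanced stand-in data: ~90% negative (no malaria), ~10% positive.
X, y = make_classification(n_samples=165, n_features=8, weights=[0.9, 0.1],
                           random_state=0)

# SMOTE synthesises new minority-class samples until the classes are balanced.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# 70% of the balanced data for training, 30% for testing, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X_bal, y_bal, test_size=0.3, stratify=y_bal, random_state=0)

model = GaussianNB().fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```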

