The State of the Art of Text Sentiment: From Opinions to Emotion Mining

2020
Vol 5 (2)
pp. 43-52
Author(s):
Nor Anis Asma Sulaiman,
Leelavathi Rajamanickam
This study aims to analyse the feelings users express in comments posted on social media. Both text mining and emotion mining can be carried out with Natural Language Processing (NLP) techniques. Most previous text mining studies use unsupervised techniques and refer to Ekman's Emotion Model (EEM), but these offer limited coverage of polarity shifters, negations, and emoticons. This study proposes a Naïve Bayes algorithm as a tool to produce users' emotion patterns. Its most important contribution is to connect emotion theory with text sentiment through computational methods for classifying users' feelings from natural language text. Finally, a general system framework for going from opinion extraction to emotion mining is produced that can be used in any domain.
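
As a rough illustration of the classification step, the sketch below trains a multinomial Naïve Bayes model on bag-of-words counts; the tiny corpus and the emotion labels are invented placeholders, not the study's data or feature design.

```python
# Minimal sketch: a multinomial Naive Bayes emotion classifier for short
# social-media comments. The inline corpus and the Ekman-style labels are
# illustrative placeholders, not data from the study.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

comments = [
    "I love this, what a wonderful day!",
    "This is disgusting, I can't stand it.",
    "I'm so scared about tomorrow.",
    "That news made me really sad.",
]
labels = ["joy", "disgust", "fear", "sadness"]

# Bag-of-words counts feed the Naive Bayes likelihoods P(word | emotion).
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(comments, labels)

print(model.predict(["what a lovely surprise"]))  # e.g. ['joy']
```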

2016
Vol 11 (2)
pp. 198-206
Author(s):
Shosuke Sato,
Kazumasa Hanaoka,
Makoto Okumura,
Shunichi Koshimura

There are increasing expectations that social sensing, especially the analysis of social media text as a source of information for a COP (Common Operational Picture), is useful for decision-making about responses to disasters. This paper reports on a geo-information and content analysis of three million Twitter texts sampled from Japanese Twitter accounts for one month before and after the 2011 Great East Japan Earthquake disaster. The results are as follows. 1) The number of Twitter texts that include a geotag (latitude and longitude information) is too small for reliable analysis. However, a method of detecting a tweet's location from its text using GeoNLP (an automatic technology for tagging geo-information in natural language text) is able to identify geo-information, and we confirmed that many tweets were sent from stricken areas. 2) A comparison of the Twitter data distribution before and after the disaster does not clearly identify which areas were significantly affected. 3) Very few Twitter texts included information about the damage in affected areas and their support needs.
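
The two-tier geolocation idea (native geotags where available, text-based place-name detection otherwise) can be sketched as follows; the toy gazetteer lookup merely stands in for GeoNLP's far richer place-name tagging.

```python
# Minimal sketch of two-tier tweet geolocation: use a tweet's geotag when
# present, otherwise fall back to matching place names in the text against
# a gazetteer. The tiny gazetteer and tweet records are illustrative
# stand-ins for GeoNLP's far more complete place-name tagging.
GAZETTEER = {
    "Sendai": (38.2682, 140.8694),
    "Ishinomaki": (38.4346, 141.3029),
}

def locate(tweet):
    if tweet.get("geo"):                      # native geotag: rare but exact
        return tweet["geo"]
    for place, coords in GAZETTEER.items():   # text-based fallback
        if place in tweet["text"]:
            return coords
    return None

tweets = [
    {"text": "Power is out across Sendai", "geo": None},
    {"text": "Safe at home", "geo": (35.68, 139.69)},
]
print([locate(t) for t in tweets])
```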


2021
Author(s):
Diogo J. S. Machado,
Camilla Reginatto De Pierri,
Leticia Graziela Costa Santos,
Fabio O. Pedrosa,
Roberto Tadeu Raittz

The large amount of existing textual data justifies the development of new text mining tools. Bioinformatics tools can be brought to text mining, increasing the arsenal of available resources. Here we present Biotext, a package of strategies for converting natural language text into biological-like information data. It provides a general protocol with standardized functions, allowing users to share, encode, and decode textual data as amino acid data. The package was used to encode the arbitrary information present in the headers of the biological sequences found in a BLAST survey. The protocol implemented in this study consists of 12 steps, which can be easily executed and/or changed by the user, depending on the study area. Biotext empowers users to perform text mining with bioinformatics tools. Biotext is freely available at https://pypi.org/project/biotext/ (Python package) and https://sourceforge.net/projects/biotext-tools/files/AMINOcode_GUI/ (standalone tool).
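
The core idea of mapping arbitrary text onto the 20-letter amino acid alphabet, so that sequence tools such as BLAST can operate on it, can be illustrated as below; the character-hashing scheme here is a deliberately naive stand-in, not Biotext's actual AMINOcode encoding.

```python
# Illustrative sketch of the core idea (text -> amino-acid-like sequence).
# This is NOT Biotext's actual AMINOcode scheme: here each character is
# simply hashed onto the 20-letter amino acid alphabet so the result looks
# like a protein sequence that sequence tools can consume.
AMINO = "ACDEFGHIKLMNPQRSTVWY"

def encode(text: str) -> str:
    return "".join(AMINO[ord(ch) % len(AMINO)] for ch in text.lower())

header = "hypothetical protein, partial [Escherichia coli]"
print(encode(header))
```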


Author(s):
A. Durfee,
A. Visa,
H. Vanharanta,
S. Schneberger,
B. Back

Text documents are the most common means for exchanging formal knowledge among people. Text is a rich medium that can contain a vast range of information, but it can be difficult to decipher automatically. Many organizations have vast repositories of textual data but few means of automatically mining that text. Text mining methods seek to use an understanding of natural language text to extract information relevant to user needs. This article evaluates a new text mining methodology, prototype matching for text clustering, developed by the authors' research group. The methodology was applied to four applications: clustering documents based on their abstracts, analyzing financial data, distinguishing authorship, and evaluating multiple-translation similarity. The results are discussed in terms of common business applications and possible future research.
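
While the authors' prototype-matching method is not reproduced here, the general pattern of clustering documents and matching new texts against cluster prototypes can be sketched with TF-IDF vectors and k-means centroids standing in as prototypes:

```python
# Minimal sketch of prototype-based document clustering: documents are
# embedded as TF-IDF vectors, k-means centroids serve as cluster
# "prototypes", and each new document is matched to its nearest prototype.
# This illustrates the general pattern only, not the authors' methodology.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "quarterly revenue and profit figures",
    "earnings report and financial outlook",
    "novel translation of a classic poem",
    "a new English translation of the epic",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Match an unseen abstract against the learned prototypes (centroids).
new = vec.transform(["annual financial statement analysis"])
print(km.predict(new))
```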


Author(s):  
Matheus C. Pavan ◽  
Vitor G. Santos ◽  
Alex G. J. Lan ◽  
Joao Martins ◽  
Wesley Ramos Santos ◽  
...  

2012
Vol 30 (1)
pp. 1-34
Author(s):
Antonio Fariña,
Nieves R. Brisaboa,
Gonzalo Navarro,
Francisco Claude,
Ángeles S. Places,
...

Author(s):
Maurizio Romano,
Francesco Mola,
Claudio Conversano

The importance of Word of Mouth is growing day by day across many topics. The phenomenon is evident in everyday life, e.g., in the rise of influencers and social media managers. If more people discuss specific products positively, even more people are encouraged to buy them, and vice versa. The effect is directly shaped by the relationship between the potential customer and the reviewer. Given the well-known negative reporting bias, Word of Mouth analysis is of clear interest in many fields. We propose an algorithm to extract sentiment from a natural language text corpus. Combining neural networks, which have high predictive power but are harder to interpret, with simpler but more informative models allows us to quantify sentiment with a numeric value and to predict whether a sentence carries positive (or negative) sentiment. Assessing an objective quantity improves the interpretation of results in many fields: for example, it makes it possible to identify specific sectors that require intervention, improving a company's services while also finding the company's strengths (useful for advertising campaigns). Moreover, since time information is usually available in textual data of web origin, trends on macro/micro topics can be analyzed. After showing how to properly reduce the dimensionality of the textual data with a data-cleaning phase, we show how to combine word embeddings, K-Means clustering, SentiWordNet, and a threshold-based Naïve Bayes classifier. We apply this method to Booking.com and TripAdvisor.com data, analyzing the sentiment of people who discuss a particular issue and providing an example of customer satisfaction analysis.
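
One concrete piece of such a pipeline, lexicon-based scoring with a decision threshold, can be sketched with NLTK's SentiWordNet interface; the tokenization, first-sense scoring, and threshold value below are illustrative choices, not the paper's tuned configuration.

```python
# Minimal sketch of the lexicon scoring step: average SentiWordNet
# positive-minus-negative scores over a sentence's words, then apply a
# simple decision threshold. The paper's full pipeline also uses word
# embeddings, K-Means, and a threshold-based Naive Bayes classifier.
import nltk
from nltk.corpus import sentiwordnet as swn

nltk.download("wordnet", quiet=True)
nltk.download("sentiwordnet", quiet=True)

def sentence_score(sentence: str) -> float:
    scores = []
    for word in sentence.lower().split():
        synsets = list(swn.senti_synsets(word))
        if synsets:  # score with the word's first (most common) sense
            scores.append(synsets[0].pos_score() - synsets[0].neg_score())
    return sum(scores) / len(scores) if scores else 0.0

THRESHOLD = 0.0  # illustrative cut-off between positive and negative
review = "the room was clean and the staff was wonderful"
score = sentence_score(review)
print(score, "positive" if score > THRESHOLD else "negative")
```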


Author(s):
Antti Vehviläinen,
Eero Hyvönen,
Olli Alm

This chapter discusses how knowledge technologies can be utilized in creating help desk services on the Semantic Web. To ease the content indexer's work, we propose semi-automatic semantic annotation of natural language text for annotating question-answer (QA) pairs, together with case-based reasoning techniques for finding similar questions. To provide answers matching the content indexer's and the end user's information needs, methods for combining case-based reasoning with semantic search, linking, and authoring are proposed. We integrate different data sources by using large ontologies, and suggest techniques for utilizing these sources when authoring answers. A prototype implementation of a real-life ontology-based help desk application, based on an existing national library help desk service in Finland, is presented as a proof of concept.
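
The retrieval step of such case-based reasoning, finding the stored question most similar to a new one, can be sketched with plain TF-IDF cosine similarity; a real implementation would also exploit the semantic annotations described above.

```python
# Minimal sketch of the "find similar questions" step in a help desk:
# previously answered questions are indexed as TF-IDF vectors and a new
# question is matched by cosine similarity. The QA pairs are invented
# examples, not data from the Finnish library service.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

qa_pairs = [
    ("How do I renew a borrowed book?", "Renew online via your account."),
    ("What are the library's opening hours?", "Open 9-20 on weekdays."),
]

vec = TfidfVectorizer()
index = vec.fit_transform(q for q, _ in qa_pairs)

new_question = "Can I extend my book loan?"
sims = cosine_similarity(vec.transform([new_question]), index)[0]
best = sims.argmax()
print(qa_pairs[best][1])  # answer of the most similar stored question
```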

