Negation Detection on Mexican Spanish Tweets: The T-MexNeg Corpus

2021 ◽  
Vol 11 (9) ◽  
pp. 3880
Author(s):  
Gemma Bel-Enguix ◽  
Helena Gómez-Adorno ◽  
Alejandro Pimentel ◽  
Sergio-Luis Ojeda-Trueba ◽  
Brian Aguilar-Vizuet

In this paper, we introduce the T-MexNeg corpus of Tweets written in Mexican Spanish. It consists of 13,704 Tweets, of which 4895 contain negation structures. We performed an analysis of negation statements embedded in the language employed on social media. This paper presents the annotation guidelines along with a novel resource targeted at the negation detection task. The corpus was manually annotated with labels for negation cue, scope, and event. We report the inter-annotator agreement for all components of the negation structure. This resource is freely available. Furthermore, we performed various experiments to automatically identify negation, using the T-MexNeg corpus and the SFU ReviewSP-NEG corpus to train a machine learning algorithm. By comparing two different methodologies, one based on a dictionary and the other based on the Conditional Random Fields algorithm, we found that the results of negation identification on Twitter are lower when the model is trained on the SFU ReviewSP-NEG corpus. Therefore, this paper shows the importance of having resources built specifically to deal with social media language.
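The dictionary-based methodology compared against CRFs can be sketched in a few lines of Python. The cue list and the fixed scope window below are illustrative assumptions, not the paper's actual lexicon or annotation rules:

```python
# Minimal dictionary-based negation detection, as a sketch of the lexicon
# baseline compared against CRFs. Cue list and window size are illustrative.

NEGATION_CUES = {"no", "ni", "nunca", "jamás", "nada", "nadie", "sin", "tampoco"}

def find_negations(tokens, scope_window=3):
    """Return (cue_index, scope_indices) pairs for each negation cue found."""
    results = []
    for i, tok in enumerate(tokens):
        if tok.lower() in NEGATION_CUES:
            # Naively take the next few tokens as the scope.
            scope = list(range(i + 1, min(i + 1 + scope_window, len(tokens))))
            results.append((i, scope))
    return results

tweet = "no me gusta nada este clima".split()
print(find_negations(tweet))
```

A CRF replaces this fixed window with a learned, token-level labelling of cue, scope, and event, which is why in-domain training data matters so much.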

In today’s world, social media is one of the most important tools for communication; it helps people interact with each other and share their thoughts, knowledge, and other information. Some of the most popular social media platforms are Facebook, Twitter, WhatsApp, and WeChat. Since social media has a large impact on people’s daily lives, it can also be used as a source of fake news or misinformation. It is therefore important that any information presented on social media be evaluated for its genuineness and originality, in terms of the probability of its correctness and the reliability of the information exchanged. In this work, we identify features that can help predict whether a given Tweet is a rumor or genuine information. Two machine learning algorithms, Decision Tree and Support Vector Machine, are executed using the WEKA tool for the classification.
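Before either classifier can run, each Tweet has to be turned into a feature vector. The abstract does not list the features used, so the surface features below are plausible assumptions for illustration only:

```python
import re

# Illustrative surface features for rumor detection. The actual feature set
# used in the paper is not specified, so these are assumptions.
def tweet_features(text):
    return {
        "has_url": int(bool(re.search(r"https?://\S+", text))),
        "has_mention": int("@" in text),
        "num_question_marks": text.count("?"),
        "num_exclamations": text.count("!"),
        "num_words": len(text.split()),
    }

feats = tweet_features("Is this true?? http://example.com @user")
print(feats)
```

Vectors of this kind can be exported to WEKA's ARFF format and fed to its Decision Tree (J48) and SVM (SMO) implementations.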


Author(s):  
Petr Berka ◽  
Ivan Bruha

Genuine symbolic machine learning (ML) algorithms can process only symbolic, categorical data. However, real-world problems, e.g. in medicine or finance, involve both symbolic and numerical attributes. Therefore, an important issue in ML is discretizing (categorizing) numerical attributes. Quite a few discretization procedures exist in the ML field. This paper describes two newer algorithms for categorization (discretization) of numerical attributes. The first is implemented in KEX (Knowledge EXplorer) as its preprocessing procedure. Its idea is to discretize the numerical attributes in such a way that the resulting categorization corresponds to the KEX knowledge acquisition algorithm. Since the categorization for KEX is done "off-line" before running the KEX machine learning algorithm, it can also be used as a preprocessing step for other machine learning algorithms. The other discretization procedure is implemented in CN4, a large extension of the well-known CN2 machine learning algorithm. The range of a numerical attribute is divided into intervals that may form a complex generated by the algorithm as part of the class description. Experimental results compare the performance of KEX and CN4 on some well-known ML databases. To make the comparison more illustrative, we also used the discretization procedure of the MLC++ library. Other ML algorithms, such as ID3 and C4.5, were also run in our experiments. The results are then compared and discussed.
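As a concrete instance of the preprocessing step described above, here is a generic equal-frequency discretization of a numerical attribute. This is a textbook baseline, not the actual KEX or CN4 procedure:

```python
# Equal-frequency discretization: split a numerical attribute into bins
# containing roughly equal numbers of training values. This is a generic
# baseline, not the KEX or CN4 algorithm from the paper.

def equal_frequency_bins(values, n_bins):
    """Return cut points splitting the sorted values into n_bins intervals."""
    s = sorted(values)
    n = len(s)
    return [s[(i * n) // n_bins] for i in range(1, n_bins)]

def discretize(value, cuts):
    """Map a numerical value to a categorical bin index."""
    for i, c in enumerate(cuts):
        if value < c:
            return i
    return len(cuts)

ages = [22, 25, 31, 35, 41, 48, 52, 60, 63]
cuts = equal_frequency_bins(ages, 3)
print(cuts, discretize(44, cuts))
```

KEX-style discretization differs in that it chooses cut points to fit the downstream knowledge acquisition algorithm rather than to equalize bin counts, but the interface (numeric value in, category out) is the same.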


Author(s):  
Jānis Kapenieks

INTRODUCTION
Opinion analysis in the context of big data analysis has recently been a hot topic in science and the business world. Social media has become a key data source for opinions, generating a large amount of data every day and providing content for further analysis. In the big data age, unstructured data classification is one of the key tools for fast and reliable content analysis. I expect significant growth in the demand for content classification services in the near future. Many online text classification tools are available, but they provide limited functionality, such as automated text classification into predefined categories and sentiment analysis based on a pre-trained machine learning algorithm. This limited functionality does not include tools such as data mining support or a machine learning algorithm training interface. Only a limited number of tools provide the whole set of tools required for text classification, i.e. all the steps from data mining to building a machine learning algorithm and applying it to a data stream from a social network source. My goal is to create a tool able to generate a classified text stream directly from social media with a user-friendly set-up interface.
METHODS AND MATERIALS
The text classification tool will have a core-based modular structure (each module providing certain functionality) so the system can be scaled in terms of technology and functionality. The tool will be built on open-source libraries and programming languages running on a Linux-based server. It will be based on three key components: frontend, backend, and data storage. The backend will use the Python and Node.js programming languages with machine learning and text filtering libraries (TensorFlow and Keras); MySQL 5.7/8 will be used for data storage; and the frontend will be based on web technologies built using PHP and JavaScript.
EXPECTED RESULTS
The expected result of my work is a web-based text classification tool for opinion analysis using data streams from social media. The tool will provide a user-friendly interface for data collection, algorithm selection, and machine learning algorithm setup and training. Multiple text classification algorithms will be available: Linear SVM, Random Forest, Multinomial Naive Bayes, Bernoulli Naive Bayes, Ridge Regression, Perceptron, Passive Aggressive Classifier, and a deep machine learning algorithm. System users will be able to identify the most effective algorithm for their text classification task and compare the algorithms based on their accuracy. The architecture of the text classification tool will be based on a frontend interface and backend services. The frontend interface will provide all the tools through which the system user interacts with the system. This includes setting up data collection streams from multiple social networks and allocating them to pre-specified channels based on keywords. Data from each channel can be classified and assigned to a pre-defined cluster. The tool will also provide a training interface for machine learning algorithms. This text classification tool is currently in active development for a client, with testing and implementation planned for April 2019.
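The keyword-based channel allocation step described above can be sketched very simply. The channel names and keywords here are invented for illustration; the real tool would load them from its configuration interface:

```python
# Sketch of keyword-based channel routing: incoming social-media posts are
# allocated to pre-specified channels. Channel names/keywords are made up.

CHANNELS = {
    "politics": {"election", "parliament", "minister"},
    "sports": {"match", "goal", "tournament"},
}

def route(post):
    """Return the sorted list of channels whose keywords appear in the post."""
    words = set(post.lower().split())
    return sorted(ch for ch, kws in CHANNELS.items() if words & kws)

print(route("The minister commented on the election results"))
```

Each routed channel then feeds its posts to whichever of the listed classifiers the user has trained for that channel.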


2018 ◽  
Vol 1 (2) ◽  
pp. 24-32
Author(s):  
Lamiaa Abd Habeeb

In this paper, we designed a system that extracts citizens' opinions about the Iraqi government and Iraqi politicians by analyzing their comments from Facebook (a social media network). Since the data is random and contains noise, we cleaned the text and built a stemmer to stem the words as much as possible. Cleaning and stemming reduced the vocabulary from 28,968 to 17,083 words, which in turn reduced the memory size from 382,858 bytes to 197,102 bytes. Generally, there are two approaches to extracting users' opinions: the lexicon-based approach and the machine learning approach. In our work, the machine learning approach is applied with three machine learning algorithms: Naïve Bayes, K-Nearest Neighbor, and the AdaBoost ensemble machine learning algorithm. For Naïve Bayes, we apply two models, Bernoulli and Multinomial. We found that Naïve Bayes with the Multinomial model gives the highest accuracy.
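The Multinomial Naive Bayes model that performed best can be written from scratch in a few dozen lines. This is a generic sketch with Laplace smoothing; the training texts below are invented, and the paper's actual preprocessing (Arabic stemming and cleaning) is omitted:

```python
import math
from collections import Counter

# From-scratch Multinomial Naive Bayes with Laplace smoothing.
# Training data here is invented for illustration.
class MultinomialNB:
    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for text, label in zip(texts, labels):
            for w in text.split():
                self.counts[label][w] += 1
                self.vocab.add(w)
        self.total = {c: sum(self.counts[c].values()) for c in self.classes}
        return self

    def predict(self, text):
        V = len(self.vocab)
        best, best_score = None, float("-inf")
        for c in self.classes:
            # log P(c) + sum of log P(w|c) with add-one smoothing
            score = math.log(self.prior[c])
            for w in text.split():
                score += math.log((self.counts[c][w] + 1) / (self.total[c] + V))
            if score > best_score:
                best, best_score = c, score
        return best

nb = MultinomialNB().fit(
    ["good government service", "bad corrupt politician", "good honest politician"],
    ["pos", "neg", "pos"],
)
print(nb.predict("good politician"))
```

The Bernoulli variant differs only in using per-document word presence/absence instead of word counts, which is why the two models can rank differently on the same corpus.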


Author(s):  
Sushant Keni ◽  
Priyanka Jadhav ◽  
Mayur Patil ◽  
Prof. Sonal Chaudhari

We evaluate the feasibility of using Facebook data to enhance the effectiveness of a recruitment system, especially for résumé verification, and to recognize personality using social network analysis methods. In industry, an employee's personality is very important in the workplace; it contributes to the growth of the company and to better service for clients. Currently, résumé verification relies on trusted third parties who perform background verification. Based on their report, the hiring company decides whether or not to keep the employee. This manual system usually takes a lot of time and generally does not reveal a candidate's attitude towards society (in short, how they behave in society, whether they post something inappropriate on social media; in simple words, their personality). Social media is nowadays a huge platform where users spend a great deal of time on sites like Facebook and LinkedIn: posting to a page, commenting, liking posts, uploading certifications, adding friends. We are going to design a system that verifies the genuineness of a user by scraping or exploring data from Facebook, LinkedIn, or both. We explore a person's posts and classify them (is it technology-related, violence-related, and so on), along with the comments they leave on their posts, how they react, and their language when handling a query; this content will be parsed and classified by a machine learning algorithm using SVM trained on a previously prepared dataset. Finally, we will show this information to the company so it can make its own decision based on the result.


2021 ◽  
Vol 22 (1) ◽  
pp. 78-92
Author(s):  
GA Buntoro ◽  
R Arifin ◽  
GN Syaifuddiin ◽  
A Selamat ◽  
O Krejcar ◽  
...  

In 2019, citizens of Indonesia participated in the democratic process of electing a new president, vice president, and various legislative candidates for the country. The 2019 Indonesian presidential election was very tense in terms of the candidates' campaigns in cyberspace, especially on social media sites such as Facebook, Twitter, Instagram, Google+, Tumblr, LinkedIn, etc. The Indonesian people used social media platforms to express their positive, neutral, and also negative opinions on the respective presidential candidates. The campaigning of respective social media users on their choice of candidates for regents, governors, and legislative positions up to presidential candidates was conducted via the Internet and online media. Therefore, the aim of this paper is to conduct sentiment analysis on the candidates in the 2019 Indonesian presidential election based on Twitter datasets. The study used datasets on the opinions expressed by the Indonesian people available on Twitter with the hashtags (#) containing "Jokowi and Prabowo." We conducted data pre-processing using a selection of comments, data cleansing, text parsing, sentence normalization and tokenization based on the given text in the Indonesian language, determination of class attributes, and, finally, we classified the Twitter posts with the hashtags (#) using the Naïve Bayes Classifier (NBC) and a Support Vector Machine (SVM) to achieve optimal accuracy. The study provides benefits in terms of helping the community to research opinions on Twitter that contain positive, neutral, or negative sentiments. Sentiment analysis on the candidates in the 2019 Indonesian presidential election on Twitter using non-conventional processes resulted in cost, time, and effort savings. This research showed that the combination of the SVM machine learning algorithm and alphabetic tokenization produced the highest accuracy value of 79.02%.
The lowest accuracy value in this study was obtained with a combination of the NBC machine learning algorithm and N-gram tokenization, with an accuracy value of 44.94%.
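Since the accuracy gap hinges on the tokenization scheme, the two schemes compared can be sketched side by side. Exact preprocessing details in the paper may differ; these are generic definitions:

```python
import re

# Alphabetic tokenization keeps only alphabetic word tokens; character
# N-gram tokenization slides a fixed-size window over the raw text.
# These are generic definitions; the paper's exact settings may differ.

def alphabetic_tokens(text):
    return re.findall(r"[a-zA-Z]+", text.lower())

def char_ngrams(text, n=3):
    text = text.lower()
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(alphabetic_tokens("Jokowi 2019!"))
print(char_ngrams("jokowi", 3))
```

Either token stream is then vectorized (e.g. as counts) and passed to NBC or SVM, so swapping tokenizers changes the feature space without touching the classifier.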


Author(s):  
Dharmendra Sharma

In this chapter, we propose a multi-agent-based information technology (IT) security approach (MAITS) as a holistic solution to the increasing needs of securing computer systems. Each specialist security task is modeled as a specialist agent. MAITS has five groups of working agents: administration assistant agents, authentication and authorization agents, system log monitoring agents, intrusion detection agents, and pre-mortem-based computer forensics agents. An assessment center, which comprises yet another special group of agents, plays a key role in coordinating the interaction of the other agents. Each agent has an agent engine built on an appropriate machine-learning algorithm. The engine gives the agent learning, reasoning, and decision-making abilities. Each agent also has an agent interface, through which the agent interacts with other agents and with the environment.
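The engine-plus-interface structure described above can be sketched as a minimal skeleton. Class and method names here are illustrative, not taken from MAITS itself:

```python
# Minimal sketch of the agent structure: each agent wraps a learning
# "engine" and exposes an interface for events. Names are illustrative.

class Agent:
    def __init__(self, name, engine):
        self.name = name
        self.engine = engine  # any callable providing the decision logic

    def handle(self, event):
        """Agent interface: pass an observed event to the engine for a decision."""
        return self.engine(event)

# A toy engine for a log-monitoring agent: flag suspicious log lines.
# In MAITS this role would be filled by a trained ML model.
def log_monitor_engine(event):
    return "alert" if "failed login" in event else "ok"

monitor = Agent("log-monitor", log_monitor_engine)
print(monitor.handle("failed login from 10.0.0.5"))
```

Swapping the engine callable is the whole point of the design: the same agent shell hosts whichever learning algorithm suits its specialist task.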


2019 ◽  
Vol 9 (24) ◽  
pp. 5369
Author(s):  
Alessio Alexiadis

There are two common ways of coupling first-principles modelling and machine learning. In one case, data are transferred from the machine-learning algorithm to the first-principles model; in the other, from the first-principles model to the machine-learning algorithm. In both cases, the coupling is in series: the two components remain distinct, and data generated by one model are subsequently fed into the other. Several modelling problems, however, require in-parallel coupling, where the first-principles model and the machine-learning algorithm work together at the same time rather than one after the other. This study introduces deep multiphysics: a computational framework that couples first-principles modelling and machine learning in parallel rather than in series. Deep multiphysics works with particle-based first-principles modelling techniques. It is shown that the mathematical algorithms behind several particle methods and artificial neural networks are similar to the point that they can be unified under the notion of particle–neuron duality. This study explains in detail the particle–neuron duality and how deep multiphysics works, both theoretically and in practice. A case study, the design of a microfluidic device for separating cell populations with different levels of stiffness, is discussed to achieve this aim.
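The particle–neuron duality can be illustrated with a toy comparison: a particle method evaluates a field as a kernel-weighted sum over neighbouring particles, which has the same structural form as a neuron's weighted sum of inputs passed through an activation function. The kernel and numbers below are illustrative only, not from the paper's case study:

```python
import math

# Toy illustration of the particle-neuron analogy: both computations are a
# weighted sum over a set of inputs. Kernel and values are illustrative.

def kernel(r, h=1.0):
    """A simple Gaussian smoothing kernel, of the kind used in particle methods."""
    return math.exp(-(r / h) ** 2)

def particle_sum(neighbour_values, distances):
    # field(x) = sum_j W(r_j) * A_j   -- weighted sum over particles
    return sum(kernel(r) * a for a, r in zip(neighbour_values, distances))

def neuron(inputs, weights):
    # out = f(sum_i w_i * x_i)        -- weighted sum over inputs, then activation
    return math.tanh(sum(w * x for x, w in zip(inputs, weights)))

print(particle_sum([1.0, 2.0], [0.5, 1.0]))
print(neuron([1.0, 2.0], [0.3, 0.1]))
```

In-parallel coupling exploits this shared form: the same summation loop can serve as a particle interaction step and as a neural-network layer evaluation within one solver.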

