Learning to Classify Documents According to Formal and Informal Style

2012 ◽  
Vol 8 ◽  
Author(s):  
Fadi Abu Sheikha ◽  
Diana Inkpen

This paper discusses an important issue in computational linguistics: classifying texts as formal or informal in style. Our work describes a genre-independent methodology for building classifiers for formal and informal texts. We used machine learning techniques to perform the automatic classification, and ran the classification experiments at both the document level and the sentence level. First, we studied the main characteristics of each style in order to train a system that can distinguish between them. We then built two datasets: the first represents general-domain documents of formal and informal style, and the second represents medical texts. We tested on the second dataset at the document level to determine whether our model is sufficiently general and works on any type of text. The datasets were built by collecting documents of both styles from different sources. After collecting the data, we extracted features from each text; the features we designed represent the main characteristics of both styles. Finally, we tested several classification algorithms, namely Decision Trees, Naïve Bayes, and Support Vector Machines, in order to choose the classifier that generates the best classification results.
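The comparison described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' pipeline: the toy sentences, labels, and TF-IDF features are assumptions standing in for the style features and datasets the paper actually built.

```python
# Illustrative sketch: comparing Decision Trees, Naive Bayes, and SVMs
# on a toy formal/informal classification task (not the authors' data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = [
    "We hereby request that the committee review the enclosed proposal.",
    "The results indicate a statistically significant difference.",
    "hey wanna grab lunch later lol",
    "gonna be late, traffic is crazy rn",
    "The undersigned parties agree to the terms stated herein.",
    "omg that movie was sooo good!!",
]
labels = ["formal", "formal", "informal", "informal", "formal", "informal"]

# Fit each classifier on the same vectorized text and inspect a prediction.
for clf in (DecisionTreeClassifier(random_state=0), MultinomialNB(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(docs, labels)
    print(type(clf).__name__, model.predict(["please find attached the annual report"]))
```

In practice the paper's hand-designed style features (rather than plain TF-IDF) would feed the same three classifiers.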

2019 ◽  
Vol 8 (4) ◽  
pp. 5813-5816

Nowadays there is a great deal of unstructured data on the Internet. To manage this unstructured data, machine learning classification algorithms can be used. Sentiment analysis [5] is the contextual mining of text from documents and customer reviews, which identifies and extracts subjective information in source material. The sentiment analysis API works in fourteen different languages. We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using reviews as data, we find that standard machine learning techniques clearly outperform human-produced baselines. The machine learning strategies we applied for classification are Naïve Bayes, maximum entropy [2] classification, and support vector machines, as in traditional topic-based categorization [1].
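A minimal sketch of the sentiment-classification setup with the three named classifiers, using scikit-learn (LogisticRegression stands in for maximum entropy, which it implements). The toy reviews are invented for illustration, not the paper's data.

```python
# Illustrative sketch: sentiment polarity classification with the three
# classifiers named in the abstract (toy reviews, not the study's corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression  # maximum entropy model
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

reviews = [
    "great product, works perfectly", "absolutely love it",
    "terrible quality, broke in a day", "waste of money, do not buy",
    "exceeded my expectations", "awful smell and leaks everywhere",
]
sentiment = ["pos", "pos", "neg", "neg", "pos", "neg"]

# Train each classifier on bag-of-words counts and inspect a prediction.
for clf in (MultinomialNB(), LogisticRegression(max_iter=1000), LinearSVC()):
    model = make_pipeline(CountVectorizer(), clf).fit(reviews, sentiment)
    print(type(clf).__name__, model.predict(["really great, highly recommend"]))
```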


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tomoaki Mameno ◽  
Masahiro Wada ◽  
Kazunori Nozaki ◽  
Toshihito Takahashi ◽  
Yoshitaka Tsujioka ◽  
...  

The purpose of this retrospective cohort study was to create a model for predicting the onset of peri-implantitis by using machine learning methods and to clarify interactions between risk indicators. This study evaluated 254 implants, 127 with and 127 without peri-implantitis, from among 1408 implants with at least 4 years in function. Demographic data and parameters known to be risk factors for the development of peri-implantitis were analyzed with three models: logistic regression, support vector machines, and random forests (RF). Of the three, RF had the highest performance in predicting the onset of peri-implantitis (AUC: 0.71, accuracy: 0.70, precision: 0.72, recall: 0.66, and f1-score: 0.69). The factor with the most influence on prediction was implant functional time, followed by oral hygiene. In addition, a PCR of more than 50% to 60%, smoking more than 3 cigarettes/day, a KMW of less than 2 mm, and the presence of fewer than two occlusal supports tended to be associated with an increased risk of peri-implantitis. Moreover, these risk indicators were not independent and had complex effects on each other. The results of this study suggest that peri-implantitis onset was predicted in 70% of cases by RF, which allows consideration of nonlinear relational data with complex interactions.
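The evaluation step can be sketched as follows. Everything here is synthetic: the feature columns only mimic the kinds of predictors named in the study (functional time, plaque control record, cigarettes/day, keratinized mucosa width, occlusal supports), and the outcome rule is invented so the example runs.

```python
# Illustrative sketch: a random-forest risk model evaluated with the same
# metrics reported in the study (synthetic data, not the clinical records).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (roc_auc_score, accuracy_score,
                             precision_score, recall_score, f1_score)

rng = np.random.default_rng(0)
n = 254
# Hypothetical predictor columns, loosely modeled on the named risk factors.
X = np.column_stack([
    rng.uniform(4, 15, n),      # functional time (years)
    rng.uniform(0, 100, n),     # plaque control record (%)
    rng.integers(0, 20, n),     # cigarettes/day
    rng.uniform(0, 6, n),       # keratinized mucosa width (mm)
    rng.integers(0, 5, n),      # occlusal supports
])
# Invented outcome rule, for illustration only.
y = ((X[:, 0] > 9) & (X[:, 1] > 50)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)[:, 1]
pred = rf.predict(X_te)
auc = roc_auc_score(y_te, proba)
print("AUC", auc, "acc", accuracy_score(y_te, pred))
print("precision", precision_score(y_te, pred),
      "recall", recall_score(y_te, pred), "f1", f1_score(y_te, pred))
```

The study's feature-importance finding corresponds to inspecting `rf.feature_importances_` on the fitted model.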


2014 ◽  
Vol 28 (2) ◽  
pp. 3-28 ◽  
Author(s):  
Hal R. Varian

Computers are now involved in many economic transactions and can capture data associated with these transactions, which can then be manipulated and analyzed. Conventional statistical and econometric techniques such as regression often work well, but there are issues unique to big datasets that may require different tools. First, the sheer size of the data involved may require more powerful data manipulation tools. Second, we may have more potential predictors than appropriate for estimation, so we need to do some kind of variable selection. Third, large datasets may allow for more flexible relationships than simple linear models. Machine learning techniques such as decision trees, support vector machines, neural nets, deep learning, and so on may allow for more effective ways to model complex relationships. In this essay, I will describe a few of these tools for manipulating and analyzing big data. I believe that these methods have a lot to offer and should be more widely known and used by economists.
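The variable-selection point above can be made concrete with the lasso, one of the tools commonly discussed in this context: with more candidate predictors than a plain regression can sensibly use, an L1 penalty shrinks irrelevant coefficients to exactly zero. The data below are synthetic and purely illustrative.

```python
# Illustrative sketch: lasso-based variable selection when there are many
# candidate predictors but only a few matter (synthetic data).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 200, 50               # 50 candidate predictors, only 3 relevant
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)   # predictors with nonzero coefficients
print("selected predictors:", selected)
```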


2018 ◽  
Vol 7 (2.8) ◽  
pp. 684 ◽  
Author(s):  
V V. Ramalingam ◽  
Ayantan Dandapath ◽  
M Karthik Raja

Heart-related diseases, or Cardiovascular Diseases (CVDs), have been the main cause of a huge number of deaths in the world over the last few decades and have emerged as the most life-threatening diseases, not only in India but in the whole world. So, there is a need for a reliable, accurate and feasible system to diagnose such diseases in time for proper treatment. Machine learning algorithms and techniques have been applied to various medical datasets to automate the analysis of large and complex data. Many researchers, in recent times, have been using several machine learning techniques to help the health care industry and professionals in the diagnosis of heart-related diseases. This paper presents a survey of various models based on such algorithms and techniques and analyzes their performance. Models based on supervised learning algorithms such as Support Vector Machines (SVM), K-Nearest Neighbour (KNN), Naïve Bayes, Decision Trees (DT), Random Forest (RF) and ensemble models are found to be very popular among researchers.


Author(s):  
Hesham M. Al-Ammal

Detection of anomalies in a given data set is a vital step in several applications in cybersecurity, including intrusion detection, fraud detection, and social network analysis. Many of these techniques detect anomalies by examining graph-based data. Analyzing graphs makes it possible to capture relationships and communities, as well as anomalies. The advantage of using graphs is that many real-life situations can be easily modeled by a graph that captures their structure and inter-dependencies. Although anomaly detection in graphs dates back to the 1990s, recent research advances have applied machine learning methods to anomaly detection over graphs. This chapter concentrates on static graphs (both labeled and unlabeled) and summarizes some of these recent studies in machine learning for anomaly detection in graphs, including methods such as support vector machines, neural networks, generative neural networks, and deep learning methods. The chapter reflects on the successes and challenges of using these methods in the context of graph-based anomaly detection.
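A minimal sketch of the basic idea: embed each node of a static, unlabeled graph as simple structural features and hand them to an unsupervised learner. The toy graph, the feature choice (degree and mean neighbor degree), and the use of IsolationForest are all illustrative assumptions; the chapter surveys far richer methods (SVMs, generative and deep neural networks).

```python
# Illustrative sketch: flag a structurally unusual node in a toy graph by
# featurizing nodes and applying an unsupervised anomaly detector.
from collections import defaultdict
from sklearn.ensemble import IsolationForest

edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3),
         # node 6 is a hub touching every other node: structurally unusual
         (6, 0), (6, 1), (6, 2), (6, 3), (6, 4), (6, 5)]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

nodes = sorted(adj)
deg = {n: len(adj[n]) for n in nodes}
# Feature vector per node: [degree, mean degree of its neighbors].
X = [[deg[n], sum(deg[m] for m in adj[n]) / deg[n]] for n in nodes]

scores = IsolationForest(random_state=0).fit(X).decision_function(X)
flagged = min(nodes, key=lambda n: scores[nodes.index(n)])  # lowest score
print("most anomalous node:", flagged)
```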


2020 ◽  
Vol 24 (5) ◽  
pp. 1141-1160
Author(s):  
Tomás Alegre Sepúlveda ◽  
Brian Keith Norambuena

In this paper, we apply sentiment analysis methods in the context of the first round of the 2017 Chilean elections. The purpose of this work is to estimate the voting intention associated with each candidate in order to contrast this with the results from classical methods (e.g., polls and surveys). The data were collected from Twitter, because of its high usage in Chile and in the sentiment analysis literature. We obtained tweets associated with the three main candidates: Sebastián Piñera (SP), Alejandro Guillier (AG) and Beatriz Sánchez (BS). For each candidate, we estimated the voting intention and compared it to the traditional methods. To do this, we first acquired the data and labeled the tweets as positive or negative. Afterward, we built a model using machine learning techniques. The classification model had an accuracy of 76.45% using support vector machines, which yielded the best model for our case. Finally, we used a formula to estimate the voting intention from the number of positive and negative tweets for each candidate. For the last period, we obtained a voting intention of 35.84% for SP, compared to a range of 34–44% according to traditional polls and 36% in the actual elections. For AG we obtained an estimate of 37%, compared with a range of 15.40% to 30.00% for traditional polls and 20.27% in the elections. For BS we obtained an estimate of 27.77%, compared with a range of 8.50% to 11.00% given by traditional polls and an actual result of 22.70% in the elections. These results are promising, in some cases providing an estimate closer to reality than traditional polls. Some differences can be explained by the fact that some candidates were omitted, even though they held a significant number of votes.
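The final estimation step can be sketched as follows. The paper's exact formula is not reproduced here; this uses one plausible choice (each candidate's share of positive tweets, normalized across candidates), and the tweet counts are invented, not the study's data.

```python
# Illustrative sketch: estimating voting intention from labeled tweet counts.
# Counts are hypothetical; the normalization formula is one plausible choice,
# not necessarily the one used in the paper.
counts = {
    "SP": {"pos": 4500, "neg": 3100},
    "AG": {"pos": 3900, "neg": 2600},
    "BS": {"pos": 2800, "neg": 2100},
}

# Share of positive tweets per candidate, then normalize to sum to 100%.
pos_share = {c: v["pos"] / (v["pos"] + v["neg"]) for c, v in counts.items()}
total = sum(pos_share.values())
intention = {c: 100 * s / total for c, s in pos_share.items()}

for cand, pct in intention.items():
    print(f"{cand}: {pct:.2f}%")
```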


2020 ◽  
Vol 17 (8) ◽  
pp. 3598-3604
Author(s):  
M. S. Roobini ◽  
M. Lakshmi

Alzheimer’s Disease (AD) is one of the most common forms of memory loss, affecting a huge number of elderly individuals around the world, and is the main cause of dementia. AD causes shrinkage of the hippocampus and cerebral cortex and enlarges the ventricles in the brain. Enhancing home- and community-based coordinated care is essential to mitigating Alzheimer’s effects on individuals and families and to reducing mounting health-care costs. Identifying early morphological changes in the brain and making an early diagnosis are important for AD. Several machine learning techniques, for example Support Vector Machines, have been utilized, and some of these methods have been shown to be very effective in diagnosing AD from neuroimages, sometimes even more effective than human radiologists. MRI reveals information about AD, but the atrophied regions differ between individuals, which makes the diagnosis somewhat trickier. By utilizing Convolutional Neural Networks, the problem can be solved with a minimal error rate. This paper proposes a deep Convolutional Neural Network (CNN) for Alzheimer’s Disease diagnosis using brain MRI data analysis. The algorithm was trained and tested using MRI data from the Alzheimer’s Disease Neuroimaging Initiative.
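The core CNN operation can be sketched in plain NumPy: a 2-D convolution, ReLU, and max-pooling pass over one toy "slice". Real pipelines stack many such trained layers over ADNI MRI volumes; the random array and edge-detecting kernel below are stand-ins for illustration only.

```python
# Illustrative sketch of one CNN layer: 2-D convolution + ReLU + 2x2 max-pool
# over a toy array standing in for a single MRI slice.
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation (the 'conv' layer in CNNs)."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max-pooling, discarding any ragged border."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
slice_ = rng.normal(size=(64, 64))              # stand-in for one MRI slice
edge_kernel = np.array([[1., 0., -1.]] * 3)     # a simple vertical-edge filter

features = max_pool(np.maximum(conv2d(slice_, edge_kernel), 0.0))
print(features.shape)   # (62, 62) after conv, (31, 31) after pooling
```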


2020 ◽  
Vol 8 (5) ◽  
pp. 4624-4627

In recent years, a lot of data has been generated about students, which can be utilized for deciding the career path of a student. This paper discusses some of the machine learning techniques which can be used to predict the performance of a student and help decide his/her career path. Some of the key Machine Learning (ML) algorithms applied in our research work are Linear Regression, Logistic Regression, Support Vector Machine, Naïve Bayes Classifier and K-means Clustering. The aim of this paper is to predict the student career path using Machine Learning algorithms. We compare the efficiencies of different ML classification algorithms on a real dataset obtained from university students.
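A minimal sketch of such a classifier comparison with cross-validation. The synthetic dataset stands in for the (private) student data, and the three classifiers shown are a subset of those named above.

```python
# Illustrative sketch: comparing classifiers by cross-validated accuracy on
# a synthetic stand-in for the student dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=8,
                           n_informative=5, random_state=0)

# Mean 5-fold accuracy for each classifier on the same data.
for clf in (LogisticRegression(max_iter=1000), GaussianNB(), SVC()):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{type(clf).__name__}: {acc:.3f}")
```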


Author(s):  
Nurul Farhana Hamzah ◽  
◽  
Nazri Mohd Nawi ◽  
Abdulkareem A. Hezam ◽  
◽  
...  

Heart failure means that the heart is not pumping as well as it should. Congestive heart failure is a form of heart failure that requires timely medical care, although the two terms are sometimes used interchangeably. Heart failure happens when the heart muscle does not pump blood as well as it can. Some disorders, such as narrowed arteries in the heart (coronary artery disease) or high blood pressure, gradually make the heart too weak or rigid to fill and pump effectively. Early detection of heart failure using data mining techniques has gained popularity among researchers. This research uses classification techniques for heart failure classification from medical data. It analyzes the performance of three classification algorithms, namely Support Vector Machine (SVM), Decision Forest (DF), and Boosted Decision Tree (BDT), in accurately classifying heart failure risk data. The best algorithm among the three for heart failure classification is identified at the end of this research.
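The three-way comparison can be sketched with scikit-learn. Mapping "Decision Forest" to RandomForestClassifier and "Boosted Decision Tree" to GradientBoostingClassifier is an assumption here, not the authors' exact implementations, and the data are synthetic stand-ins for the heart-failure records.

```python
# Illustrative sketch: SVM vs. forest vs. boosted trees on synthetic data
# standing in for the heart-failure dataset.
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=12,
                           n_informative=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

results = {}
for name, clf in [("SVM", SVC()),
                  ("DF", RandomForestClassifier(random_state=1)),
                  ("BDT", GradientBoostingClassifier(random_state=1))]:
    results[name] = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
print(results)  # held-out accuracy per model
```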


Advances in medical science have always been among the most vital pursuits of the human race. With the progress in technology, modern techniques and equipment are routinely applied for treatment purposes. Nowadays, machine learning techniques are widely used in medical science to assure accuracy. In this work, we construct computational models for accurately predicting liver disease. We used several efficient classification algorithms: Random Forest, Perceptron, Decision Tree, K-Nearest Neighbors (KNN), and Support Vector Machine (SVM). Our work provides a hybrid model construction and a comparative analysis for improving prediction performance. First, the classification algorithms are applied to the original liver-patient dataset collected from the UCI repository. Then we analyzed and tweaked the features to improve the performance of our predictor, and made a comparative analysis among the classifiers. We found that the KNN algorithm outperformed all other techniques when combined with feature selection.
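The feature-selection step can be sketched as follows: a SelectKBest filter in front of KNN, compared against KNN on all features. The synthetic data and the choice of `f_classif` with `k=4` are illustrative assumptions; the paper's exact features and tuning may differ.

```python
# Illustrative sketch: KNN with and without univariate feature selection,
# on synthetic data standing in for the UCI liver-patient dataset.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=2)

knn_all = KNeighborsClassifier()
knn_selected = make_pipeline(SelectKBest(f_classif, k=4),
                             KNeighborsClassifier())

acc_all = cross_val_score(knn_all, X, y, cv=5).mean()
acc_sel = cross_val_score(knn_selected, X, y, cv=5).mean()
print(f"KNN all features: {acc_all:.3f}, with selection: {acc_sel:.3f}")
```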

