Machine learning techniques for semantic analysis of dysarthric speech: An experimental study

2018, Vol 99, pp. 242-251
Author(s):  
Vladimir Despotovic ◽  
Oliver Walter ◽  
Reinhold Haeb-Umbach
2019, Vol 8 (2), pp. 4833-4837

Technology is growing day by day, and its influence on our daily lives is reaching new heights in the digitized world. Most people use social media heavily, and even minute details get posted every second; some go so far as to post about suicide-related issues. This paper addresses the problem of suicide, predicting suicide-related content on social media and performing its semantic analysis. Using machine learning techniques and semantic analysis of sentiments, suicide-related posts are predicted and classified. The approach is a four-tier model, which is beneficial in that it collects data with twitter4J, processes it with the Weka tool, and applies WordNet for semantic analysis. Precision and accuracy are verified as the parameters for the performance efficiency of the procedure. We also address the lack of terminological resources by providing a phase that generates vocabulary records.
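The four-tier idea above (collect posts, preprocess, extract sentiment/semantic features, classify) can be sketched roughly as follows. This is a minimal illustration only: the paper uses twitter4J, Weka, and WordNet, whereas this sketch substitutes scikit-learn, and the example tweets and labels are invented.

```python
# Minimal sketch of tweet risk classification (stand-in for the Weka/WordNet pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data (invented); a real system would pull tweets via twitter4J.
tweets = [
    "I want to end my life",
    "lovely sunny day at the beach",
    "no reason to keep living",
    "great dinner with friends tonight",
]
labels = ["risk", "neutral", "risk", "neutral"]

# TF-IDF features feeding a Naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(tweets, labels)

pred = clf.predict(["no reason to go on living"])
```

In practice the semantic-analysis tier would enrich the features (e.g. WordNet synonym expansion) before classification; here TF-IDF stands in for that step.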


2022, pp. 1-12
Author(s):  
Mohammed Hamdi

With the evolution of the software industry, a huge number of software applications are being designed, developed, and uploaded to multiple online repositories. To determine the category and resource utilization of applications, researchers currently have to rely on manual work. To reduce this effort, a solution has been proposed that works in two phases. In the first phase, a semantic-analysis-based process identifies keywords and variables. Based on these semantics, a dataset is designed with two classes: one representing the application type and the other the application keywords. In the second phase, the preprocessed dataset is fed to several machine learning techniques (Decision Table, Random Forest, OneR, Randomizable Filtered Classifier, Logistic Model Tree), and their performance is computed in terms of TP rate, FP rate, precision, recall, F1-score, MCC, ROC area, PRC area, and accuracy (%). For evaluation purposes, I have used an R language library for latent semantic analysis to create the semantics, and the Weka tool to measure the performance of the algorithms. Results show that Random Forest achieves the highest accuracy, 99.3%, due to its parametric function evaluation and lower misclassification error.
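As an illustration of the two-phase idea (latent semantic features, then a classifier), here is a minimal sketch in Python with scikit-learn. The paper itself uses an R latent semantic analysis library and Weka; the documents and category labels below are invented.

```python
# Phase 1: latent semantic features via TF-IDF + truncated SVD (a common LSA formulation).
# Phase 2: a Random Forest classifier over those features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Toy application descriptions and categories (invented).
docs = [
    "open camera capture photo gallery",
    "encrypt file password secure vault",
    "play video stream media player",
    "record audio microphone sound clip",
]
labels = ["media", "security", "media", "media"]

model = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=2),      # LSA: project term space to 2 latent dimensions
    RandomForestClassifier(random_state=0),
)
model.fit(docs, labels)

category = model.predict(["capture photo with camera"])[0]
```

Weka would report the listed metrics (TP rate, MCC, ROC area, etc.) from cross-validation over such a dataset; scikit-learn's `classification_report` plays a similar role.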


Author(s):  
Maria Frasca ◽  
Genoveffa Tortora

Abstract
In the last few years, the integration of research in Computer Science and the medical field has made available to the scientific community an enormous amount of data, stored in databases. In this paper, we analyze the data available in the Parkinson's Progression Markers Initiative (PPMI), a comprehensive observational, multi-center study designed to identify progression biomarkers important for better treatments for Parkinson's disease. The data of PPMI participants are collected through a comprehensive battery of tests and assessments, including Magnetic Resonance Imaging and DaTscan imaging; collection of blood, cerebral spinal fluid, and urine samples; and cognitive and motor evaluations. To this aim, we propose a technique to identify correlations between the biomedical data in the PPMI dataset, in order to verify the consistency of the medical reports formulated during the visits and to correctly categorize the various patients. To correlate the information in each patient's medical report, Information Retrieval and Machine Learning techniques have been adopted, including Latent Semantic Analysis, Text2Vec, and Doc2Vec. Patients are then grouped and classified as affected or not affected using clustering algorithms, according to the similarity of their medical reports. Finally, we have adopted a visualization system based on the D3 framework to visualize correlations among medical reports in an interactive chart, and to support the doctor in analyzing the chronological sequence of visits in order to diagnose Parkinson's disease early.
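A simplified stand-in for the report-similarity and clustering steps: the paper uses LSA, Text2Vec, and Doc2Vec embeddings, while this sketch substitutes TF-IDF vectors, cosine similarity, and k-means, on invented toy reports.

```python
# Sketch: vectorize medical reports, compute pairwise similarity, cluster patients.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

# Invented toy visit reports (real inputs would come from the PPMI dataset).
reports = [
    "resting tremor rigidity slow movement",
    "tremor stiffness impaired gait",
    "normal motor exam no tremor observed",
    "no rigidity normal gait and posture",
]

X = TfidfVectorizer().fit_transform(reports)
sim = cosine_similarity(X)                        # report-to-report similarity matrix
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

The similarity matrix `sim` corresponds to what the D3 chart would visualize; `clusters` corresponds to the affected / not-affected grouping.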


2021, pp. 1-17
Author(s):  
Zeinab Shahbazi ◽  
Yung-Cheol Byun

Understanding real-world short texts has become an essential task in recent research. Document deduction analysis and latent coherent topics are important aspects of this process. Latent Dirichlet Allocation (LDA) and Probabilistic Latent Semantic Analysis (PLSA) have been suggested for modeling huge volumes of information and documents. The main problems with this type of context are limited information, word relationships, sparsity, and knowledge extraction. To overcome these issues, knowledge discovery and machine learning techniques integrated with topic modeling are proposed. Knowledge discovery is applied to extract hidden information and enlarge the dataset available for further analysis. Machine learning techniques, an Artificial Neural Network (ANN) and a Long Short-Term Memory (LSTM) network, are integrated to anticipate topic movements; the LSTM layers are fed with the latent topic distributions learned by a pre-trained LDA model. We survey the techniques applied in short-text topic modeling, grouping them into three categories based on the Dirichlet multinomial mixture, global word co-occurrences, and self-aggregation, and analyze the performance of representative methods from each category on different tasks. Finally, the proposed system is evaluated against state-of-the-art methods on real-world datasets, compared with long-document topic modeling algorithms, and used to build a classification framework that incorporates further knowledge into the machine learning pipeline.
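The LDA-to-LSTM coupling rests on representing each document as a topic distribution that can be fed downstream. A minimal sketch of that first step with scikit-learn, on an invented toy corpus (the LSTM stage is omitted):

```python
# Sketch: learn per-document topic distributions with LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented short texts covering two rough themes.
texts = [
    "stock market price rises today",
    "market trading stocks fall sharply",
    "rain weather forecast storm coming",
    "sunny weather warm forecast week",
]

counts = CountVectorizer().fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # shape (4, 2); each row sums to 1
```

Each row of `doc_topics` is the latent topic distribution of one document; a sequence of such rows over time is what the LSTM layers would consume to anticipate topic movements.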


2006
Author(s):  
Christopher Schreiner ◽  
Kari Torkkola ◽  
Mike Gardner ◽  
Keshu Zhang

2020, Vol 12 (2), pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate the analysis and prediction of time-dependent data. We focus our attention on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three different machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Support Vector Regression (SVR). By treating close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in the machine learning methods, it can be shown that prediction accuracy is improved.
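As a rough illustration of the SVR variant with a lagged-price predictor: the series below is synthetic, and the study itself uses the actual Yahoo Finance series with the richer predictor set listed above.

```python
# Sketch: predict the next close price from the previous one with Support Vector Regression.
import numpy as np
from sklearn.svm import SVR

# Synthetic daily close prices (invented stand-in for a Yahoo Finance series).
prices = np.array([10.0, 10.2, 10.1, 10.4, 10.6, 10.5, 10.8, 11.0, 10.9, 11.2])

X = prices[:-1].reshape(-1, 1)   # predictor: yesterday's close
y = prices[1:]                   # target: today's close

model = SVR(kernel="rbf", C=10.0).fit(X, y)
next_pred = model.predict([[11.2]])[0]   # one-step-ahead forecast
```

The full study would widen `X` to include open, high, low, adjusted close, and volume columns, and compare SVR against LSTM and CNN models on the same splits.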


Diabetes, 2020, Vol 69 (Supplement 1), pp. 389-P
Author(s):  
SATORU KODAMA ◽  
MAYUKO H. YAMADA ◽  
YUTA YAGUCHI ◽  
MASARU KITAZAWA ◽  
MASANORI KANEKO ◽  
...  
