To Enhance the Quality of HCRS Using a Fuzzy-Genetic Approach

Author(s):  
Latha Banda ◽  
Karan Singh ◽  
Vikash Arya ◽  
Devendra Gautam ◽  
Ali Ahmadian

Abstract Social media is the most recent generation of Recommender Systems (RS). The term Health Care Recommender System (HCRS) refers to analysing medical data and then predicting a patient's disease with the help of various RS techniques. Machine learning algorithms are applied to ensure the quality and trustworthiness of the medical data. Even so, there is a considerable gap between health care diagnosis and IT solutions. To bridge this gap, a hybrid fuzzy-genetic approach is used in HCRS: a genetic algorithm computes similarities with the help of mutation and crossover operators, and fuzzy rules are then generated for the data set using additional personalised information about the user. Taken together, these techniques allow the proposed model to enhance the quality of recommendation in HCRS.
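As an illustration of the pipeline described above, the sketch below evolves per-feature weights for a similarity measure with a small genetic algorithm (selection, crossover, mutation) and defines toy fuzzy sets over a personalised attribute. The rating matrix, fitness function and fuzzy sets are illustrative assumptions, not the authors' implementation.

```python
# Minimal fuzzy-genetic sketch; all data and parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(20, 10)).astype(float)  # assumed patient-item rating matrix

def weighted_similarity(u, v, w):
    """Cosine similarity with per-feature weights evolved by the GA."""
    wu, wv = u * w, v * w
    return float(wu @ wv / (np.linalg.norm(wu) * np.linalg.norm(wv) + 1e-9))

def fitness(w, target=0):
    """Reward weight vectors that give the target user strong top-5 neighbours."""
    sims = [weighted_similarity(ratings[target], ratings[i], w)
            for i in range(len(ratings)) if i != target]
    return float(np.mean(sorted(sims, reverse=True)[:5]))

# Tiny genetic loop: selection, single-point crossover, Gaussian mutation.
pop = rng.random((30, ratings.shape[1]))
for _ in range(50):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]                   # keep the 10 fittest
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = int(rng.integers(1, ratings.shape[1]))
        child = np.concatenate([a[:cut], b[cut:]])            # crossover
        child += rng.normal(0, 0.1, child.shape)              # mutation
        children.append(np.clip(child, 0, 1))
    pop = np.array(children)

best_weights = max(pop, key=fitness)

def fuzzy_age_membership(age):
    """Toy fuzzy sets over a personalised attribute (age) used to weight recommendations."""
    young = max(0.0, min(1.0, (40 - age) / 20))
    old = max(0.0, min(1.0, (age - 40) / 20))
    middle = max(0.0, 1 - abs(age - 40) / 20)
    return {"young": young, "middle": middle, "old": old}
```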

2020 ◽  
Vol 17 (9) ◽  
pp. 4294-4298
Author(s):  
B. R. Sunil Kumar ◽  
B. S. Siddhartha ◽  
S. N. Shwetha ◽  
K. Arpitha

This paper intends to use distinct machine learning algorithms and explore their features. The primary advantage of machine learning is that an algorithm can carry out its task automatically by learning what to do with the information it is given. This paper presents the concept of machine learning and its algorithms, which can be used for different applications such as health care, sentiment analysis and many more. Programmers are sometimes unsure which algorithm to apply to their application. This paper offers guidance on choosing an algorithm on the basis of how accurately it fits: given the collected data, one of the algorithms can be selected according to its pros and cons. From the data set, a base model is developed, trained and tested; the trained model is then ready for prediction and can be deployed depending on feasibility.
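A minimal sketch of this select-train-test workflow, assuming scikit-learn and a stand-in dataset; the candidate algorithms and scoring below are illustrative choices, not the paper's.

```python
# Compare a few candidate algorithms by cross-validation, keep the best fit,
# then evaluate it on a held-out test set before deployment.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)        # stand-in for the collected data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

candidates = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "svm": make_pipeline(StandardScaler(), SVC()),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

# Score each candidate with cross-validation and keep the best-fitting one.
scores = {name: cross_val_score(model, X_train, y_train, cv=5).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
best_model = candidates[best_name].fit(X_train, y_train)
print(best_name, scores[best_name], best_model.score(X_test, y_test))
```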


2021 ◽  
Author(s):  
Ben Geoffrey A S

This work seeks to combine the advantages of the emerging areas of artificial intelligence and quantum computing and apply them to the specific biological problem of protein structure prediction using quantum machine learning algorithms. The CASP dataset from ProteinNet, a standardized data set for machine learning of protein structure, was downloaded. This large, standardized collection of PDB entries contains the coordinates of the backbone atoms, corresponding to the sequential chain of N, C_alpha, and C' atoms. The dataset was used to train a quantum-classical hybrid Keras deep neural network model to predict the structure of the proteins. To visually assess the quality of the predicted versus the actual protein structure, protein contact maps were generated from the experimental and predicted structure data and compared. On this basis, the model is recommended for protein structure prediction using AI while leveraging the power of quantum computers. The code is provided in the following GitHub repository: https://github.com/bengeof/Protein-structure-prediction-using-AI-and-quantum-computers.
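The contact-map comparison step can be sketched as follows, assuming C-alpha coordinates are available as (N, 3) arrays and using an 8 Å cutoff; the coordinates below are placeholders rather than ProteinNet data, and the quantum-classical network itself is not reproduced.

```python
# Build and plot binary contact maps from experimental and predicted coordinates.
import numpy as np
import matplotlib.pyplot as plt

def contact_map(ca_coords, cutoff=8.0):
    """Binary contact map from C-alpha coordinates (pairwise distance < cutoff)."""
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (dist < cutoff).astype(int)

rng = np.random.default_rng(0)
experimental = rng.random((120, 3)) * 50                       # placeholder coordinates
predicted = experimental + rng.normal(0, 1.0, experimental.shape)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(contact_map(experimental), cmap="Greys")
axes[0].set_title("experimental")
axes[1].imshow(contact_map(predicted), cmap="Greys")
axes[1].set_title("predicted")
plt.savefig("contact_maps.png")
```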


2021 ◽  
Vol 4 (1) ◽  
pp. 14
Author(s):  
Farman Pirzado ◽  
Shahzad Memon ◽  
Lachman Das Dhomeja ◽  
Awais Ahmed

Nowadays, smart devices have become a part of our lives, hold our data, and are used for sensitive transactions like internet banking, mobile banking, etc. Therefore, it is crucial to secure the data in these smart devices from theft or misplacement. The majority of devices are secured with password/PIN-based user authentication methods, which have already proved to be less secure and easily guessable. An alternative technique for securing smart devices is keystroke dynamics. Keystroke dynamics (KSD) is a behavioural biometric that uses the natural typing pattern, which is unique to every individual and difficult to fake or replicate. This paper proposes a user authentication model based on KSD as an additional security method for increasing the security level of smart devices. To analyse the proposed model, an Android-based application has been implemented for collecting data from fake and genuine users. Six machine learning algorithms have been tested on the collected data set to study their suitability for use in the keystroke dynamics-based authentication model.
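A minimal sketch of the keystroke-dynamics idea, assuming raw key-down/key-up timestamps per keypress and a pre-extracted feature matrix; the Android data collection and the six algorithms actually tested are not reproduced here.

```python
# Hold times (key-up minus key-down) and flight times (gap between keys) are
# typical KSD features; a classifier then separates genuine users from impostors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def timing_features(events):
    """events: list of (key, down_ts, up_ts). Returns hold and flight times."""
    holds = [up - down for _, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return np.array(holds + flights)

# Placeholder feature matrix standing in for timing_features applied to many
# typing samples; label 1 = genuine user, 0 = impostor.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
```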


Data science in healthcare is an innovative and promising field for industries implementing data science applications. Data analytics is a recent science used to explore medical data sets and discover disease. This is an initial attempt to identify disease with the help of a large amount of medical data. Using this data science methodology, users can identify their disease without the help of health care centres. Healthcare and data science are often linked through finances, as the industry attempts to reduce its expenses with the help of large amounts of data. Data science and medicine are developing rapidly, and it is important that they advance together. Health care information is highly valuable to society. Heart disease is increasing in day-to-day human life, so different factors in the human body are monitored in order to analyse and prevent it. Classifying these factors with machine learning algorithms and predicting the disease is the major task, and the main part of the work involves supervised learning algorithms such as SVM, naive Bayes, decision trees and random forest.
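A minimal sketch of the supervised comparison mentioned above (SVM, naive Bayes, decision tree, random forest), assuming a heart-disease-style CSV with a `target` column; the file name and columns are assumptions, not the study's data.

```python
# Compare the four supervised learners by 5-fold cross-validation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("heart.csv")                    # assumed: numeric factors + "target" column
X, y = df.drop(columns=["target"]), df["target"]

models = {
    "SVM": SVC(),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```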


IJOSTHE ◽  
2018 ◽  
Vol 5 (6) ◽  
pp. 7
Author(s):  
Apoorva Deshpande ◽  
Ramnaresh Sharma

An anomaly detection system plays an important role in network security. An anomaly detection or intrusion detection model is a predictive model used to classify network traffic as normal or intrusive. Machine learning algorithms are used to build accurate models for clustering, classification and prediction. In this paper, classification and predictive models for intrusion detection are built using machine learning classification algorithms, namely Random Forest, and tested on the KDD-99 data set. In this research work, the anomaly detection model is based on normalised, reduced features and a multilevel ensemble classifier. The work is divided into two stages. In the first stage, the data are normalised using mean normalisation. In the second stage, a genetic algorithm is used to reduce the number of features, and a multilevel ensemble classifier then classifies the data into different attack groups. The results show that intrusion can be classified more efficiently with the reduced feature set.
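A minimal sketch of the two-stage pipeline, assuming scikit-learn: mean normalisation, a GA-selected binary feature mask (here only a placeholder mask), and a multilevel stacked ensemble. The KDD-99 loading and classifier settings are assumptions.

```python
# Stage 1: mean normalisation; Stage 2: reduced features + multilevel ensemble.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))                 # stand-in for KDD-99 features
y = rng.integers(0, 2, size=500)               # normal traffic vs attack

# Stage 1: mean normalisation.
X_norm = (X - X.mean(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-9)

# Stage 2: apply a GA-selected feature mask (a placeholder here; in the paper
# the mask would be evolved with selection, crossover and mutation), then
# classify with a multilevel (stacked) ensemble.
ga_mask = rng.random(X.shape[1]) < 0.5
ensemble = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
print(cross_val_score(ensemble, X_norm[:, ga_mask], y, cv=3).mean())
```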


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Maad M. Mijwil ◽  
Rana A. Abttan

A decision tree (DT) is one of the most popular machine learning algorithms; it divides data repeatedly to form groups or classes. It is a supervised learning algorithm that can be used on discrete or continuous data for classification or regression. The most traditional classifier in this family is the C4.5 decision tree, which is the focus of this research. This classifier has the advantage of being built from a vast data set and does not stop until it reaches the desired goal. Its problem is that it produces unnecessary nodes and branches, leading to overfitting, which can negatively affect the classification process. In this context, the authors suggest utilizing a genetic algorithm to prune away the effect of overfitting. The study uses four datasets: IRIS, Car Evaluation, GLASS, and WINE, collected from the UC Irvine (UCI) machine learning repository. The experimental results have confirmed the effectiveness of the genetic algorithm in pruning the effect of overfitting on the four datasets and optimizing the confidence factor (CF) of the C4.5 decision tree. The proposed method has reached about 92% accuracy in this work.
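A minimal sketch of searching a pruning parameter with a genetic algorithm. scikit-learn's CART tree with cost-complexity pruning (ccp_alpha) stands in for C4.5 and its confidence factor, which are not available in scikit-learn; the GA settings are illustrative.

```python
# Evolve a pruning strength that maximises cross-validated accuracy on IRIS.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(alpha):
    clf = DecisionTreeClassifier(ccp_alpha=float(alpha), random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()

pop = rng.uniform(0.0, 0.05, size=20)                       # candidate pruning strengths
for _ in range(15):
    scores = np.array([fitness(a) for a in pop])
    parents = pop[np.argsort(scores)[-6:]]                  # selection
    children = (parents[rng.integers(6, size=20)]
                + parents[rng.integers(6, size=20)]) / 2    # arithmetic crossover
    children += rng.normal(0, 0.005, size=20)               # mutation
    pop = np.clip(children, 0.0, 0.1)

best_alpha = max(pop, key=fitness)
print("best pruning strength:", best_alpha, "cv accuracy:", fitness(best_alpha))
```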


Author(s):  
Jayashree M. Kudari

Developments in machine learning techniques for classification and regression have opened access to detecting sophisticated patterns in data from various domains. In biomedical applications, enormous amounts of medical data are produced and collected to predict the type and stage of a disease. Detection and prediction of diseases such as diabetes, lung cancer, brain cancer, heart disease, and liver disease require many tests, which increases the size of patient medical data. Robust prediction of a patient's disease from this huge data set is the main agenda of this chapter. The challenge of applying a machine learning method is to select the best algorithm within the disease prediction framework. This chapter selects robust machine learning algorithms for various diseases through case studies, analysing each dimension of a disease and independently checking whether the identified value lies within limits in order to monitor the condition of the disease.


2019 ◽  
pp. 016555151987764
Author(s):  
Ping Wang ◽  
Xiaodan Li ◽  
Renli Wu

Wikipedia is becoming increasingly critical in helping people obtain information and knowledge. Its leading advantage is that users can not only access information but also modify it. However, this presents a challenging issue: how can we measure the quality of a Wikipedia article? The existing approaches assess Wikipedia quality by statistical models or traditional machine learning algorithms. However, their performance is not satisfactory. Moreover, most existing models fail to extract complete information from articles, which degrades the model’s performance. In this article, we first survey related works and summarise a comprehensive feature framework. Then, state-of-the-art deep learning models are introduced and applied to assess Wikipedia quality. Finally, a comparison among deep learning models and traditional machine learning models is conducted to validate the effectiveness of the proposed model. The models are compared extensively in terms of their training and classification performance. Moreover, the importance of each feature and the importance of different feature sets are analysed separately.
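A minimal sketch of comparing a traditional model with a neural model on handcrafted article features and inspecting per-feature importance; the feature names and data below are illustrative assumptions, not the article's feature framework.

```python
# Compare a random forest with a small neural network on article-level features.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
features = ["article_length", "num_references", "num_images", "num_editors", "age_days"]
X = pd.DataFrame(rng.normal(size=(400, len(features))), columns=features)
y = rng.integers(0, 2, size=400)        # 1 = high-quality article, 0 = otherwise

rf = RandomForestClassifier(n_estimators=200, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
print("random forest:", cross_val_score(rf, X, y, cv=5).mean())
print("neural network:", cross_val_score(mlp, X, y, cv=5).mean())

# Per-feature importance from the tree ensemble, as in the feature analysis above.
print(dict(zip(features, rf.fit(X, y).feature_importances_.round(3))))
```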


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Demeke Endalie ◽  
Getamesay Haile

For decades, machine learning techniques have been used to process Amharic texts. The potential of deep learning for Amharic document classification has not been exploited due to a lack of language resources. In this paper, we present a deep learning model for Amharic news document classification. The proposed model uses fastText to generate text vectors that represent the semantic meaning of texts, addressing a shortcoming of traditional representations. The matrix of text vectors is then fed into the embedding layer of a convolutional neural network (CNN), which automatically extracts features. We conduct experiments on a data set with six news categories, and our approach produced a classification accuracy of 93.79%. We compared our method to well-known machine learning algorithms such as support vector machine (SVM), multilayer perceptron (MLP), decision tree (DT), XGBoost (XGB), and random forest (RF) and achieved good results.
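A minimal sketch of the architecture described above: an embedding layer initialised from fastText vectors feeding a one-dimensional CNN with six output classes. The vocabulary size, vector dimension, sequence length and weights are placeholder assumptions, and the constant-initialised embedding pattern may vary slightly across Keras versions.

```python
# fastText vectors (precomputed elsewhere) initialise a frozen embedding layer,
# followed by a Conv1D feature extractor and a softmax over six news categories.
import numpy as np
from tensorflow.keras import initializers, layers, models

vocab_size, embed_dim, max_len, num_classes = 20000, 300, 200, 6
embedding_matrix = np.random.normal(size=(vocab_size, embed_dim))  # fastText vectors go here

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, embed_dim,
                     embeddings_initializer=initializers.Constant(embedding_matrix),
                     trainable=False),
    layers.Conv1D(128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_train_ids, y_train, validation_split=0.1, epochs=10)  # integer-encoded news texts
```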


Landslides can easily be tragic for human life and property, and the increasing rate of human settlement in the mountains has raised safety concerns. Landslides have caused economic losses of 1-2% of GDP in many developing countries. In this study, we discuss a deep learning approach to detect landslides. Convolutional Neural Networks are used for feature extraction in our proposed model. As there was no source of an exact and precise data set for feature extraction, a new data set was built for testing the model. We tested the proposed model and compared it with other machine learning algorithms such as Logistic Regression, Random Forest, AdaBoost, K-Nearest Neighbors and Support Vector Machine. Our proposed deep learning model produces a classification accuracy of 96.90%, outperforming the classical machine learning algorithms.
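A minimal sketch of a small CNN classifier for landslide versus non-landslide image patches; the input size and layer choices are assumptions rather than the authors' exact model.

```python
# Small convolutional feature extractor ending in a single landslide-probability output.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # landslide probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```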

