Machine learning algorithms for fall detection using kinematic and heart rate parameters - a comprehensive analysis

Author(s):  
Anita Ramachandran ◽  
Adarsh Ramesh ◽  
Aditya Sukhlecha ◽  
Avtansh Pandey ◽  
Anupama Karuppiah

The application of machine learning techniques to detect and classify falls is a prominent area of research in the domain of intelligent assisted living systems. Machine learning (ML) based solutions for fall detection systems built on wearable devices use various sources of information such as inertial motion unit (IMU) readings, vital signs, acoustic signals, or channel state information parameters. Most existing research relies on only one of these sources; however, more experimentation is needed to observe the efficiency of ML classifiers when features from diverse sources are coupled. In addition, fall detection systems based on wearable devices require intelligent feature engineering and selection for dimensionality reduction, so as to reduce the computational complexity of the devices. In this paper, we present a comprehensive performance analysis of ML classifiers for fall detection on a dataset we collected. The analysis covers the impact of the following aspects on the performance of ML classifiers for fall detection: (i) using a combination of features from two sensors, an IMU sensor and a heart rate sensor, (ii) feature engineering and feature selection based on statistical methods, and (iii) using ensemble techniques for fall detection. We find that including heart rate along with IMU sensor parameters improves the accuracy of fall detection. The conclusions from our experiments on feature selection and ensemble analysis can serve as inputs for researchers designing wearable device-based fall detection systems.
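A minimal sketch of the kind of pipeline described above, assuming scikit-learn and synthetic stand-in data rather than the authors' collected dataset; the feature layout (windowed IMU statistics plus heart rate statistics) and the choice of selector and ensemble are illustrative assumptions:

```python
# Hypothetical sketch: coupling IMU and heart rate features, statistical feature
# selection, and an ensemble classifier for fall detection. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
imu_features = rng.normal(size=(n, 12))   # assumed: mean/std/peak of ax, ay, az, gx, gy, gz per window
hr_features = rng.normal(size=(n, 2))     # assumed: mean heart rate and heart rate change per window
X = np.hstack([imu_features, hr_features])
y = rng.integers(0, 2, size=n)            # 1 = fall, 0 = activity of daily living

# ANOVA F-test keeps the k most discriminative features (dimensionality reduction),
# then a random forest acts as the ensemble classifier.
model = make_pipeline(SelectKBest(f_classif, k=8),
                      RandomForestClassifier(n_estimators=100, random_state=0))
print(cross_val_score(model, X, y, cv=5).mean())
```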

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1831
Author(s):  
Armando Collado-Villaverde ◽  
Mario Cobos ◽  
Pablo Muñoz ◽  
David F. Barrero

People’s life expectancy is increasing, resulting in a growing elderly population. That population is subject to dependency issues, falls being a particularly problematic one due to the associated health complications. Some projects try to enhance the independence of elderly people by monitoring their status, typically by means of wearable devices. These devices often feature Machine Learning (ML) algorithms for fall detection using accelerometers. However, the deployed software often lacks reliable data for training the models. To overcome this issue, we have developed a publicly available fall simulator capable of recreating accelerometer fall samples for two of the most common types of falls: syncope and forward. The simulated samples closely resemble real falls recorded with physical accelerometers, so they can later serve as input for ML applications. To validate our approach, we have used different classifiers on both the simulated falls and data from two public datasets based on real falls. Our tests show that the fall simulator achieves high accuracy in generating accelerometer data for a fall, making it possible to create larger datasets for training fall detection software on wearable devices.
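As a toy illustration of what a simulated accelerometer fall window can look like (this is not the authors' simulator; the sampling rate, timings, and impact magnitude are illustrative assumptions), a free-fall dip followed by an impact spike can be synthesized like so:

```python
# Minimal sketch of a synthetic accelerometer "fall" trace: ~1 g baseline, a brief
# near-free-fall segment, a sharp impact peak, then stillness. Parameters are assumed.
import numpy as np

def simulate_fall_sample(fs=50, duration_s=4.0, impact_g=3.5, noise_std=0.05, seed=None):
    """Return a 1-D acceleration-magnitude trace (in g) for a simulated fall."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, duration_s, 1.0 / fs)
    acc = np.ones_like(t)                    # ~1 g while standing/walking
    fall_start = int(0.4 * len(t))
    impact = fall_start + int(0.3 * fs)
    acc[fall_start:impact] = 0.2             # near free-fall before impact
    acc[impact] = impact_g                   # sharp impact peak
    acc[impact + 1:] = 1.0                   # lying still afterwards
    return acc + rng.normal(0, noise_std, size=t.shape)

sample = simulate_fall_sample(seed=1)
print(sample.shape, round(sample.max(), 2))
```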


Author(s):  
A. B Yusuf ◽  
R. M Dima ◽  
S. K Aina

Breast cancer is the second most commonly diagnosed cancer in women throughout the world. It is on the rise, especially in developing countries, where the majority of cases are discovered late. Breast cancer develops when cancerous tumors form on the surface of the breast cells. The absence of accurate prognostic models to help physicians recognize symptoms early makes it difficult to develop a treatment plan that would help patients live longer. However, machine learning techniques have recently been used to improve the accuracy and speed of breast cancer diagnosis: the higher the accuracy, the more efficient the model and the better the support for diagnosis. Nevertheless, the primary difficulty for systems developed to detect breast cancer using machine learning models is attaining the highest classification accuracy and picking the most predictive features for increasing that accuracy. As a result, breast cancer prognosis remains a challenge. This research seeks to address a flaw in an existing technique, namely its inability to improve the classification of continuous-valued data, in particular its accuracy and its selection of the optimal features for breast cancer prediction. To address these issues, this study examines the impact of outlier removal and feature reduction on the Wisconsin Diagnostic Breast Cancer dataset, tested using seven different machine learning algorithms. The results show that the Logistic Regression, Random Forest, and AdaBoost classifiers achieved the greatest accuracy, 99.12%, after outliers were removed from the dataset. With feature selection additionally applied to the filtered dataset, the greatest accuracies were 100% and 99.12%, obtained with the Random Forest and Gradient Boost classifiers, respectively. When compared to other state-of-the-art approaches, the two suggested strategies outperformed models trained on the unfiltered data in terms of accuracy. The suggested architecture could be a useful tool for radiologists to reduce the number of false negatives and false positives and, as a result, increase the efficiency of breast cancer diagnosis.
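A hedged sketch of the general workflow described above, using the Wisconsin Diagnostic Breast Cancer data bundled with scikit-learn; the z-score cutoff, number of selected features, and choice of selector are illustrative, not the paper's settings:

```python
# Sketch: remove outlier rows, select features, and score a Random Forest on WDBC.
import numpy as np
from scipy import stats
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)

# Drop rows with any feature more than 3 standard deviations from its mean (assumed cutoff).
mask = (np.abs(stats.zscore(X)) < 3).all(axis=1)
X_f, y_f = X[mask], y[mask]

model = make_pipeline(SelectKBest(mutual_info_classif, k=15),
                      RandomForestClassifier(n_estimators=200, random_state=0))
print(cross_val_score(model, X_f, y_f, cv=10).mean())
```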


Author(s):  
Yash Nadkarni ◽  
Siddhesh Deo ◽  
Aditya Patwardhan ◽  
Amey Ponkshe

The traditional way to calculate fuel economy uses the odometer reading and the fuel consumed by the car to travel that distance. This is a very narrow approach, as fuel economy is affected by a variety of factors in the real world. Features such as throttle response, engine temperature, coolant temperature, and gross vehicle weight have a huge influence on fuel economy. To overcome this problem, our project attempts to predict fuel economy from various features extracted from telemetric data. To achieve this, we implemented various feature selection and feature extraction techniques and analyzed how effective the resulting features are at improving the performance of the machine learning algorithms and, ultimately, the predictive accuracy of the model. This provides information on how much influence a particular feature has on the overall fuel economy of the vehicle.
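One simple way to quantify "how much influence a particular feature has" is impurity-based feature importance from a tree ensemble; the sketch below assumes scikit-learn and uses made-up telemetry feature names and synthetic data, not the project's dataset:

```python
# Illustrative sketch: rank assumed telemetry features by their influence on a toy
# fuel-economy target using a random forest regressor's feature importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["throttle_position", "engine_temp", "coolant_temp", "gross_weight", "speed", "rpm"]
X = pd.DataFrame(rng.normal(size=(1000, len(features))), columns=features)
# Toy target: fuel economy driven mostly by throttle, weight, and speed.
y = 30 - 4 * X["throttle_position"] - 2 * X["gross_weight"] - X["speed"] + rng.normal(0, 1, 1000)

reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, score in sorted(zip(features, reg.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:>18s}: {score:.3f}")
```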


Electronics ◽  
2021 ◽  
Vol 10 (17) ◽  
pp. 2099
Author(s):  
Paweł Ziemba ◽  
Jarosław Becker ◽  
Aneta Becker ◽  
Aleksandra Radomska-Zalas ◽  
Mateusz Pawluk ◽  
...  

One of the important research problems in the context of financial institutions is the assessment of credit risk and the decision whether to grant or refuse a loan. Recently, machine learning-based methods have increasingly been employed to solve such problems. However, selecting an appropriate feature selection technique, sampling mechanism, and/or classifier for credit decision support is very challenging and can affect the quality of the loan recommendations. To address this challenging task, this article examines the effectiveness of various data science techniques for credit decision support. In particular, a processing pipeline was designed that consists of methods for data resampling, feature discretization, feature selection, and binary classification. We suggest building appropriate decision models leveraging pertinent methods for binary classification, feature selection, data resampling, and feature discretization. The feasibility of the selected models was analyzed through rigorous experiments on real data describing clients' ability to repay loans. During the experiments, we analyzed the impact of feature selection on the results of binary classification, and the impact of data resampling with feature discretization on the results of feature selection and binary classification. After experimental evaluation, we found that a correlation-based feature selection technique and a random forest classifier yield superior performance in solving the underlying problem.
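A sketch of a pipeline in the spirit of the one described above, assuming scikit-learn and synthetic imbalanced "loan" data; the resampling scheme, discretizer settings, and the use of an ANOVA F-score selector in place of the article's correlation-based method are all assumptions:

```python
# Hedged sketch: minority-class oversampling, feature discretization, univariate
# feature selection, and a random forest for binary credit-style classification.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.utils import resample

# Imbalanced toy data: 0 = loan repaid (majority), 1 = default (minority).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)

# Simple oversampling of the minority class up to the majority class size.
# (In practice, resampling would sit inside each CV fold to avoid leakage.)
X_min = resample(X[y == 1], n_samples=(y == 0).sum(), random_state=0)
X_bal = np.vstack([X[y == 0], X_min])
y_bal = np.array([0] * (y == 0).sum() + [1] * len(X_min))

model = make_pipeline(
    KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile"),  # feature discretization
    SelectKBest(f_classif, k=10),                                       # feature selection
    RandomForestClassifier(n_estimators=200, random_state=0),           # binary classification
)
print(cross_val_score(model, X_bal, y_bal, cv=5).mean())
```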


Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1167
Author(s):  
Tamer Aldwairi ◽  
Dilina Perera ◽  
Mark A. Novotny

The amassed growth in the size of data, caused by the advancement of technologies and the use of the Internet of Things to collect and transmit data, has resulted in large volumes of data and an increasing variety of data types that need to be processed at very high speeds so that meaningful information can be extracted from these massive volumes of unstructured data. Mining this data is very challenging, since much of it suffers from the problem of high dimensionality. High dimensionality represents a great challenge that can be controlled through feature selection, which is itself a complex task with multiple layers of difficulty. To grasp the impediments associated with high-dimensional data, a more in-depth understanding of feature selection is required. In this study, we examine the effect of appropriate feature selection during the classification process of anomaly-based network intrusion detection systems. We test its effect on the performance of Restricted Boltzmann Machines and compare that performance to conventional machine learning algorithms. We establish that when features representative of the model are selected, the change in accuracy is always less than 3% across all algorithms. This verifies that the accurate selection of the important features when building a model can have a significant impact on the accuracy of the classifiers. We also confirm that Restricted Boltzmann Machines can outperform, or are at least comparable to, other well-known machine learning algorithms. Extracting these important features can be very useful when building a model from datasets with many features.
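A minimal sketch of this kind of comparison, assuming scikit-learn and a synthetic dataset in place of real intrusion detection traffic; pairing a BernoulliRBM with logistic regression is a standard scikit-learn pattern used here for illustration, not the study's exact setup:

```python
# Sketch: feature selection followed by an RBM-based pipeline vs. a conventional baseline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=3000, n_features=40, n_informative=10, random_state=0)

# Scale to [0, 1] (chi2 and BernoulliRBM expect non-negative inputs), keep the top features.
rbm_model = make_pipeline(MinMaxScaler(), SelectKBest(chi2, k=15),
                          BernoulliRBM(n_components=32, learning_rate=0.05, random_state=0),
                          LogisticRegression(max_iter=1000))
baseline = make_pipeline(MinMaxScaler(), SelectKBest(chi2, k=15),
                         LogisticRegression(max_iter=1000))

for name, model in [("RBM features + LR", rbm_model), ("plain LR", baseline)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```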


2020 ◽  
Vol 39 (5) ◽  
pp. 6579-6590
Author(s):  
Sandy Çağlıyor ◽  
Başar Öztayşi ◽  
Selime Sezgin

The motion picture industry is one of the largest industries worldwide and has significant importance in the global economy. Considering the high stakes and high risks in the industry, forecast models and decision support systems are gaining importance. Several attempts have been made to estimate the theatrical performance of a movie before or in the early stages of its release. Nevertheless, these models are mostly used for predicting domestic performance, and the industry still struggles to predict box office performance in overseas markets. In this study, the aim is to design a forecast model using different machine learning algorithms to estimate the theatrical success of US movies in Turkey. A dataset of 1559 movies is constructed from various sources. First, independent variables are grouped as pre-release, distributor type, and international distribution based on their characteristics. The number of attendances is discretized into three classes. Four popular machine learning algorithms, namely artificial neural networks, decision tree regression, gradient boosting trees, and random forests, are employed, and the impact of each feature group is observed by comparing the performance of the resulting models. The number of target classes is then increased to five and to eight, and the results are compared with models previously developed in the literature.
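A sketch of the modelling setup described above, assuming scikit-learn; the features, the skewed attendance figures, and the tercile cut points are synthetic stand-ins for the movie dataset, used only to show how the target can be discretized into three classes and the four classifier families compared:

```python
# Illustrative sketch: discretize attendance into three classes, compare four classifiers.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1559                                   # same size as the constructed movie dataset
X = rng.normal(size=(n, 10))               # stand-in for pre-release / distribution features
attendance = np.exp(rng.normal(11, 1, n))  # skewed, box-office-like attendance counts

# Discretize the target into low / medium / high attendance at the tercile cut points.
bins = np.quantile(attendance, [1 / 3, 2 / 3])
y = np.digitize(attendance, bins)

models = {
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Gradient boosting": GradientBoostingClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```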


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4821
Author(s):  
Rami Ahmad ◽  
Raniyah Wazirali ◽  
Qusay Bsoul ◽  
Tarik Abu-Ain ◽  
Waleed Abu-Ain

Wireless Sensor Networks (WSNs) continue to face two major challenges: energy and security. Consequently, one of the WSN-related security tasks is to protect them from Denial of Service (DoS) and Distributed DoS (DDoS) attacks. Machine learning-based systems are the only viable option for detecting these types of attacks, as traditional deep packet scanning systems depend on inspecting open fields in transport layer security packets, which the trend toward field encryption makes increasingly impractical. Moreover, network data traffic will become more complex as the amount of data transmitted between WSN nodes grows with increasing future usage. Therefore, feature selection techniques need to be used with machine learning in order to determine which data are most important in the DoS detection process. This paper examines techniques for improving DoS anomaly detection while also conserving power in WSNs, and for balancing the two. A new clustering technique, the CH_Rotations algorithm, is introduced to improve anomaly detection efficiency over a WSN’s lifetime. Furthermore, the use of feature selection techniques with machine learning algorithms to examine WSN node traffic, and the effect of these techniques on the lifetime of the WSN, is evaluated. The evaluation results show that Water Cycle (WC) feature selection delivered the best average performance, with accuracy 2%, 5%, 3%, and 3% higher than Particle Swarm Optimization (PSO), Simulated Annealing (SA), Harmony Search (HS), and the Genetic Algorithm (GA), respectively. Moreover, WC with a Decision Tree (DT) classifier reached 100% accuracy with only one feature. In addition, the CH_Rotations algorithm improved network lifetime by 30% compared to the standard LEACH protocol. Network lifetime using the WC + DT technique was reduced by only 5% compared to scenarios without WC + DT.
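To illustrate the flavour of the "one feature is enough for a decision tree" result, the sketch below performs a simple wrapper-style search for the single most informative traffic feature; it does not implement the Water Cycle algorithm, and the WSN-style feature names and synthetic data are assumptions:

```python
# Hedged sketch: score a decision tree on each candidate feature alone and keep the best.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

feature_names = ["adv_sent", "join_recv", "sched_sent", "data_sent", "energy_consumed"]
X, y = make_classification(n_samples=2000, n_features=len(feature_names),
                           n_informative=3, random_state=0)  # 1 = DoS traffic, 0 = normal

scores = {name: cross_val_score(DecisionTreeClassifier(random_state=0),
                                X[:, [i]], y, cv=5).mean()
          for i, name in enumerate(feature_names)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```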

