SARM: Salah Activities Recognition Model Based on Smartphone

Electronics ◽  
2019 ◽  
Vol 8 (8) ◽  
pp. 881
Author(s):  
Nafees Ahmad ◽  
Lansheng Han ◽  
Khalid Iqbal ◽  
Rashid Ahmad ◽  
Muhammad Adil Abid ◽  
...  

Alzheimer’s is a chronic neurodegenerative disease that affects many people today and has a major effect on their routine activities. Recent advances in smartphone sensor technology enable us to help people suffering from Alzheimer’s. For people in the Muslim community, who are required to offer prayers five times a day, Alzheimer’s or a lack of concentration can make the daily prayers a struggle. To deal with this problem, automated mobile sensor-based activity recognition applications can support the design of accurate and precise solutions that guide the Namazi (worshipper). In this paper, a Salah activities recognition model (SARM) using a mobile sensor is proposed with the aim of recognizing specific activities, such as Al-Qayam (standing), Ruku (standing to bowing), and Sujud (standing to prostration). The model entails data collection, sensor selection and placement, data preprocessing, segmentation, feature extraction, and classification. It provides a stepping stone toward an application for observing prayer. For recognition of these activities, data sets were collected from ten subjects, and six different feature sets were used to improve the results. Extensive experiments were performed to test and validate the model features and to train random forest (RF), K-nearest neighbor (KNN), naive Bayes (NB), and decision tree (DT) classifiers. The average predicted accuracy of RF, KNN, NB, and DT was 97%, 94%, 71.6%, and 95%, respectively.
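The segmentation and feature-extraction steps of such a pipeline can be sketched as follows; the window length, step size, and the particular time-domain features are illustrative assumptions, not the paper's actual configuration:

```python
import math

def window_features(signal, window=50, step=25):
    """Segment a 1-D sensor stream into overlapping windows and
    extract simple time-domain features (mean, std, min, max, range)."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        feats.append({
            "mean": mean,
            "std": math.sqrt(var),
            "min": min(w),
            "max": max(w),
            "range": max(w) - min(w),
        })
    return feats

# Example: a flat stream of 100 accelerometer readings yields
# 3 overlapping windows, all with zero-valued features.
f = window_features([0.0] * 100)
print(len(f), f[0]["mean"], f[0]["range"])
```

Feature vectors of this kind would then be fed to the RF, KNN, NB, and DT classifiers for training and evaluation.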

2014 ◽  
Vol 571-572 ◽  
pp. 1019-1029
Author(s):  
De Feng Guo ◽  
Bin Liu ◽  
Xiao Tian Jin ◽  
Hong Jian Liu

Activity recognition is a challenging problem for context-aware systems and applications. Many studies in this field have adopted supervised or semi-supervised learning algorithms to recognize activities from movement patterns gathered through sensors, but these existing systems struggle with the feature representation of sensor data and with multi-sensor integration. In this paper, we propose a novel entropy-based feature learning method for activity recognition and construct an activity recognition model with the multi-class AdaBoost algorithm. Experiments on sensor data from a real dataset demonstrate the significant potential of our method to extract features for activity recognition. The experimental results also show that the recognition model based on multi-class AdaBoost is effective. The average precision and recall over six activities are both 95.9%, higher than the results obtained with other methods such as the Support Vector Machine (SVM) or K-Nearest Neighbor (KNN).
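An entropy feature over a sensor window can be illustrated with a minimal sketch; the histogram discretisation and bin count below are assumptions, since the abstract does not specify how entropy is computed:

```python
import math
from collections import Counter

def shannon_entropy(window, bins=8):
    """Shannon entropy (in bits) of a sensor window, estimated by
    discretising the readings into equal-width histogram bins."""
    lo, hi = min(window), max(window)
    if hi == lo:
        return 0.0  # a constant signal carries no information
    width = (hi - lo) / bins
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Intuitively, a still subject produces near-zero entropy while vigorous movement spreads readings over many bins, so the value discriminates between activity levels.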


Polymers ◽  
2021 ◽  
Vol 13 (21) ◽  
pp. 3811
Author(s):  
Iosif Sorin Fazakas-Anca ◽  
Arina Modrea ◽  
Sorin Vlase

This paper proposes a new method for calculating the monomer reactivity ratios for binary copolymerization based on the terminal model. The original optimization method involves a numerical integration algorithm and an optimization algorithm based on k-nearest-neighbour non-parametric regression. The calculation method has been tested on simulated and experimental data sets at low (<10%), medium (10–35%) and high conversions (>40%), yielding reactivity ratios in good agreement with the usual methods such as intersection, Fineman–Ross, reverse Fineman–Ross, Kelen–Tüdös, extended Kelen–Tüdös and the error-in-variables method. The experimental data sets used in this comparative analysis are the copolymerization of 2-(N-phthalimido)ethyl acrylate with 1-vinyl-2-pyrrolidone for low conversion, of isoprene with glycidyl methacrylate for medium conversion, and of N-isopropylacrylamide with N,N-dimethylacrylamide for high conversion. The possibility of estimating experimental errors from a single experimental data set of n points is also shown.
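Of the comparison methods listed, the Fineman–Ross linearisation is simple enough to sketch; the function below is a generic least-squares implementation of the classic method, not the authors' code:

```python
def fineman_ross(feed_ratios, copolymer_ratios):
    """Fineman-Ross linearisation for terminal-model reactivity ratios.

    With x = [M1]/[M2] in the monomer feed and y = dM1/dM2 in the
    copolymer, the quantities G = x(y-1)/y and F = x^2/y satisfy
    G = r1*F - r2, so a least-squares line through the (F, G) points
    gives r1 as the slope and r2 as minus the intercept."""
    G = [x * (y - 1) / y for x, y in zip(feed_ratios, copolymer_ratios)]
    F = [x * x / y for x, y in zip(feed_ratios, copolymer_ratios)]
    n = len(F)
    mF, mG = sum(F) / n, sum(G) / n
    slope = (sum((f - mF) * (g - mG) for f, g in zip(F, G))
             / sum((f - mF) ** 2 for f in F))
    intercept = mG - slope * mF
    return slope, -intercept  # (r1, r2)
```

On data generated exactly from the terminal-model copolymer equation, this recovers the known ratios; on noisy experimental data its known bias at high conversion is what motivates alternatives like the paper's kNN-regression method.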


2021 ◽  
Vol 87 (6) ◽  
pp. 445-455
Author(s):  
Yi Ma ◽  
Zezhong Zheng ◽  
Yutang Ma ◽  
Mingcang Zhu ◽  
Ran Huang ◽  
...  

Many manifold learning algorithms conduct an eigenvector analysis on a data-similarity matrix with a size of N×N, where N is the number of data points. Thus, the memory complexity of the analysis is no less than O(N²). We present in this article an incremental manifold learning approach to handle large hyperspectral data sets for land use identification. In our method, the number of dimensions for the high-dimensional hyperspectral-image data set is obtained with the training data set. A local curvature variation algorithm is utilized to sample a subset of data points as landmarks. Then a manifold skeleton is identified based on the landmarks. Our method is validated on three AVIRIS hyperspectral data sets, outperforming the comparison algorithms with a k-nearest-neighbor classifier and achieving the second-best performance with a support vector machine.


Author(s):  
Wei Yan

In cloud computing environments, parallel kNN queries over big data are an important issue. The k-nearest-neighbor query (kNN query), which finds the k nearest neighbors in a dataset S for every object in another dataset R, is a primitive operator widely adopted by many applications, including knowledge discovery, data mining, and spatial databases. This chapter proposes a parallel method for kNN queries over big data using the MapReduce programming model. Firstly, it proposes an approximate algorithm based on mapping multi-dimensional data sets into two-dimensional ones, transforming kNN queries into a sequence of two-dimensional point searches. Then, in two-dimensional space, it proposes a partitioning method based on the Voronoi diagram, which incorporates the Voronoi diagram into an R-tree. Furthermore, it proposes an efficient algorithm for processing kNN queries on the R-tree using the MapReduce programming model. Finally, the chapter presents extensive experimental evaluations that indicate the efficiency of the proposed approach.
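The map/reduce decomposition of a kNN join can be sketched in a few lines; this toy version uses 1-D points and round-robin partitions in place of the chapter's Voronoi/R-tree partitioning, so it only illustrates the dataflow, not the actual algorithm:

```python
import heapq

def knn_mapreduce(R, S, k=2, parts=4):
    """MapReduce-style kNN join sketch: S is split into `parts`
    partitions; each map task emits the local k nearest neighbours
    of every query point in R, and the reduce step merges the
    candidate lists per query, keeping the global top k."""
    chunks = [S[i::parts] for i in range(parts)]  # round-robin partition

    def mapper(chunk):
        # emit (query index -> local k nearest (distance, point) pairs)
        return {qi: sorted((abs(q - s), s) for s in chunk)[:k]
                for qi, q in enumerate(R)}

    merged = {qi: [] for qi in range(len(R))}
    for chunk in chunks:          # reduce: merge local candidates
        if not chunk:
            continue
        for qi, cands in mapper(chunk).items():
            merged[qi] = heapq.nsmallest(k, merged[qi] + cands)
    return {qi: [s for _, s in cands] for qi, cands in merged.items()}
```

Because each map task only sees its own partition of S, the merge in the reduce step is what guarantees the final answer equals a sequential kNN over the whole of S.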


Web Mining ◽  
2011 ◽  
pp. 253-275
Author(s):  
Xiaodi Huang ◽  
Wei Lai

This chapter presents a new approach to clustering graphs, and applies it to Web graph display and navigation. The proposed approach takes advantage of the linkage patterns of graphs, and utilizes an affinity function in conjunction with the k-nearest neighbor. This chapter uses Web graph clustering as an illustrative example, and offers a potentially more applicable method to mine structural information from data sets, with the hope of informing readers of another aspect of data mining and its applications.


Author(s):  
Amit Saxena ◽  
John Wang

This paper presents a two-phase scheme that selects a reduced number of features from a dataset using a Genetic Algorithm (GA) and tests the classification accuracy (CA) of the dataset with the reduced feature set. In the first phase, an unsupervised approach is applied to select a subset of features: the GA stochastically selects reduced feature sets with Sammon error as the fitness function, yielding different subsets of features. In the second phase, each reduced feature set is used to test the CA of the dataset, validated with the supervised k-nearest-neighbor (k-nn) algorithm. The novelty of the proposed scheme is that each reduced feature set obtained in the first phase is investigated for CA using k-nn classification with different Minkowski metrics, i.e., non-Euclidean norms, instead of the conventional Euclidean norm (L2). Final results are presented with extensive simulations on seven real and one synthetic data set. The investigation reveals that using different norms produces better CA and hence offers scope for better feature subset selection.
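The role of the Minkowski norm L_p in the second phase can be illustrated with a minimal k-nn classifier; the dataset and parameter choices below are hypothetical:

```python
def minkowski(a, b, p):
    """Minkowski distance L_p between two feature vectors;
    p=2 is the Euclidean norm, p=1 the Manhattan norm."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

def knn_predict(X_train, y_train, x, k=3, p=2):
    """Majority-vote k-nn prediction under a configurable L_p norm,
    so the effect of non-Euclidean norms (p != 2) can be compared."""
    nearest = sorted(zip(X_train, y_train),
                     key=lambda t: minkowski(t[0], x, p))[:k]
    labels = [y for _, y in nearest]
    return max(set(labels), key=labels.count)
```

Running the same reduced feature set through `knn_predict` with several values of `p` and comparing the resulting CA is exactly the kind of sweep the scheme performs.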


2019 ◽  
Vol 9 (11) ◽  
pp. 2337 ◽  
Author(s):  
Imran Ashraf ◽  
Soojung Hur ◽  
Yongwan Park

Indoor localization systems are susceptible to high errors and do not meet current standards of indoor localization. Moreover, the performance of such approaches is limited by device dependence, and the use of Wi-Fi makes the localization process vulnerable to dynamic factors and energy hungry. A multi-sensor-fusion-based indoor localization approach is proposed to overcome these issues. It predicts a pedestrian’s current location from smartphone sensor data alone, aiming to mitigate the impact of device dependence on localization accuracy and to lower the localization error of magnetic-field-based localization systems. We trained a deep-learning-based convolutional neural network to recognize the indoor scene, which helps to lower the localization error: the recognized scene identifies a specific floor and narrows the search space. A database of magnetic field patterns helps to lower the device dependence. A modified K-nearest neighbor (mKNN) is presented to calculate the pedestrian’s current location, which is further refined with data from pedestrian dead reckoning via an extended Kalman filter. The performance of the proposed approach is tested in experiments on Galaxy S8 and LG G6 smartphones. The experimental results demonstrate that the approach achieves an accuracy of 1.04 m at the 50th percentile, regardless of the smartphone used for localization. The proposed mKNN outperforms the K-nearest-neighbor approach, with lower mean, variance, and maximum errors than KNN. Moreover, the approach does not use Wi-Fi for localization and is more energy efficient than Wi-Fi-based approaches. Experiments reveal that localization without scene recognition leads to higher errors.
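The abstract does not detail the proposed mKNN, but the baseline it modifies, KNN over a magnetic fingerprint database, can be sketched as an inverse-distance-weighted KNN (a common modification); the fingerprint format and weighting here are assumptions:

```python
def weighted_knn_locate(fingerprints, reading, k=3):
    """Estimate a position from a magnetic fingerprint database.

    `fingerprints` is a list of (magnetometer_vector, (x, y)) pairs;
    `reading` is the live magnetometer vector. The k closest
    fingerprints vote for the position, weighted by inverse distance
    in signal space so closer matches dominate."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    nearest = sorted(fingerprints, key=lambda fp: dist(fp[0], reading))[:k]
    weights = [1.0 / (dist(vec, reading) + 1e-9) for vec, _ in nearest]
    total = sum(weights)
    x = sum(w * pos[0] for w, (_, pos) in zip(weights, nearest)) / total
    y = sum(w * pos[1] for w, (_, pos) in zip(weights, nearest)) / total
    return x, y
```

Scene recognition would shrink `fingerprints` to one floor before this lookup, which is how it reduces both the search space and the error.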


2019 ◽  
Vol 12 (2) ◽  
pp. 140
Author(s):  
Retsi Firda Maulina ◽  
Anik Djuraidah ◽  
Anang Kurnia

Poverty is a complex and multidimensional problem, which makes it a development priority. Applications of poverty modeling to discrete data are still few, as are applications of the Bayesian paradigm. The Bayes method is a parameter-estimation method that combines initial (prior) information with sample information, so it can provide predictions of higher accuracy than classical methods. Bayesian inference using the INLA approach provides faster computation than MCMC and makes large data sets feasible. This study models poverty in Java using the Bayesian spatial probit with the INLA approach and three weighting matrices, namely K-nearest neighbor (KNN), inverse distance, and exponential distance. The results show that the best model, the Bayesian SAR probit with INLA and the KNN weighting matrix, produces the highest classification accuracy, with a specificity of 85.45%, a sensitivity of 93.75%, and an accuracy of 89.92%.
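A KNN spatial weighting matrix of the kind compared here can be built as follows; the row-standardised 1/k weights are a common convention in spatial econometrics, assumed rather than taken from the paper:

```python
def knn_weight_matrix(coords, k=2):
    """Row-standardised k-nearest-neighbour spatial weight matrix W:
    W[i][j] = 1/k if unit j is among the k nearest neighbours of
    unit i (by Euclidean distance between coordinates), else 0,
    so every row sums to 1."""
    n = len(coords)

    def dist(i, j):
        return sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])) ** 0.5

    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        neighbours = sorted((j for j in range(n) if j != i),
                            key=lambda j: dist(i, j))[:k]
        for j in neighbours:
            W[i][j] = 1.0 / k
    return W
```

Note that KNN weights are generally asymmetric (i can be a neighbour of j without the reverse holding), unlike distance-band weights; this is one practical difference among the three matrices the study compares.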


IEEE Access ◽  
2015 ◽  
Vol 3 ◽  
pp. 942-954 ◽  
Author(s):  
Yuka Komai ◽  
Yuya Sasaki ◽  
Takahiro Hara ◽  
Shojiro Nishio

Author(s):  
Igor Loboda

Diagnostics is an important aspect of a condition-based maintenance program. To develop an effective gas turbine monitoring system in a short time, recommendations on how to optimally design every system algorithm are required. This paper deals with choosing a proper fault classification technique for gas turbine monitoring systems. To classify gas path faults, different artificial neural networks are typically employed; among them, the Multilayer Perceptron (MLP) is the most widely used. Some comparative studies referred to in the introduction show that the MLP and some other techniques yield practically the same classification accuracy on average over all faults. That is why, in addition to average accuracy, more criteria are required to choose the best technique. Since techniques like the Probabilistic Neural Network (PNN), Parzen Window (PW) and k-Nearest Neighbor (K-NN) provide a confidence probability for every diagnostic decision, the presence of this important property can be such a criterion. The confidence probability in these techniques is computed by estimating a probability density for the patterns of each fault class concerned. The present study compares all the mentioned techniques and their variations using both average accuracy and availability of the confidence probability as criteria. To compute them for each technique, a special testing procedure simulates numerous diagnosis cycles corresponding to different fault classes and fault severities. In addition to the criteria themselves, the criteria's imprecision due to the finite number of diagnosis cycles is computed and involved in selecting the best technique.
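The confidence probability discussed above can be illustrated with a minimal one-dimensional Parzen-window classifier; the Gaussian kernel, bandwidth, and equal class priors are illustrative assumptions:

```python
import math

def parzen_posteriors(train, x, h=1.0):
    """Parzen-window (kernel density) classification: estimate the
    class-conditional density p(x | class) with a Gaussian kernel of
    bandwidth h, then normalise across classes (equal priors assumed)
    to get a posterior, which serves as the confidence probability.

    `train` maps each fault-class label to a list of 1-D patterns."""
    def kernel(u):
        return math.exp(-u * u / (2 * h * h))

    dens = {label: sum(kernel(x - p) for p in pts) / len(pts)
            for label, pts in train.items()}
    total = sum(dens.values())
    return {label: d / total for label, d in dens.items()}
```

A diagnosis far from every stored pattern yields a flat posterior, flagging an unreliable decision; this is exactly the extra information that the MLP, by itself, does not provide.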

