The Performance of Post-Fall Detection Using the Cross-Dataset: Feature Vectors, Classifiers and Processing Conditions

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4638
Author(s):  
Bummo Koo ◽  
Jongman Kim ◽  
Yejin Nam ◽  
Youngho Kim

In this study, post-fall detection algorithms were evaluated in a cross-dataset setting with respect to feature vectors (time-series and discrete data), classifiers (ANN and SVM), and four processing conditions (normalization, equalization, an increased number of training data, and additional training with external data). Three-axis acceleration and angular velocity data were obtained from 30 healthy male subjects with an IMU attached at the midpoint between the left and right anterior superior iliac spines (ASIS). Internal and external tests were performed using our lab dataset and the SisFall public dataset, respectively. The results showed that the ANN and SVM were best suited to the time-series and discrete data, respectively. Classification performance generally decreased when untrained motions from the public dataset were tested, so specific feature vectors derived from the raw data were necessary. Normalization made the SVM more effective but the ANN less so. Equalization increased sensitivity, although it did not improve overall performance. Increasing the number of training data also improved classification performance. Machine learning proved vulnerable to untrained motions, and data covering various movements are needed for training.
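The distinction between time-series and discrete feature vectors can be illustrated with a minimal sketch. The window contents and the feature set below (mean, standard deviation, peak of the signal magnitude vector) are illustrative assumptions, not the study's exact pipeline; discrete features like these summarize an acceleration window into a fixed-length vector suitable for an SVM.

```python
import math
import statistics

def discrete_features(ax, ay, az):
    """Summarize a tri-axial acceleration window into a fixed-length
    feature vector: mean, standard deviation, and peak of the signal
    magnitude vector (SMV). Feature choice here is illustrative only."""
    smv = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
    return [statistics.mean(smv), statistics.stdev(smv), max(smv)]

# A short synthetic window (units of g): rest, impact spike, rest.
ax = [0.0, 0.1, 2.5, 0.2, 0.0]
ay = [0.0, 0.0, 1.5, 0.1, 0.0]
az = [1.0, 1.0, 1.8, 1.0, 1.0]
features = discrete_features(ax, ay, az)
print(features)  # [mean, std, peak] of the magnitude signal
```

A time-series classifier such as the ANN would instead consume the raw sample sequence directly rather than this summary vector.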

Author(s):  
M. Voelsen ◽  
D. Lobo Torres ◽  
R. Q. Feitosa ◽  
F. Rottensteiner ◽  
C. Heipke

Abstract. Fully convolutional neural networks (FCN) are successfully used for pixel-wise land cover classification, the task of identifying the physical material of the Earth’s surface for every pixel in an image. The acquisition of large training datasets is challenging, especially in remote sensing, but necessary for an FCN to perform well. One way to circumvent manual labelling is the use of existing databases, which usually contain a certain amount of label noise when combined with another data source. In the first part of this work, we investigate the impact of training data on an FCN. We experiment with different amounts of training data, varying w.r.t. the covered area, the available acquisition dates, and the amount of label noise. We conclude that the more data is used for training, the better the generalization performance of the model, and that the FCN is able to mitigate the effect of label noise to a high degree. Another challenge is the imbalanced class distribution in most real-world datasets, which can cause the classifier to focus on the majority classes, leading to poor classification performance for the minority classes. To tackle this problem, we use the cosine similarity loss to force feature vectors of the same class to be close to each other in feature space. Our experiments show that the cosine loss helps to obtain more similar feature vectors, but the similarity of the cluster centers also increases.
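The cosine similarity loss can be sketched as follows (a minimal illustration under assumed names; the paper's loss is applied per pixel inside the FCN training loop, which is not reproduced here). Minimizing 1 minus the cosine similarity between a feature vector and its class center pulls same-class vectors toward the same direction in feature space.

```python
import math

def cosine_loss(features, center):
    """1 - cosine similarity between a feature vector and its class
    center; minimizing this pulls same-class vectors together."""
    dot = sum(f * c for f, c in zip(features, center))
    nf = math.sqrt(sum(f * f for f in features))
    nc = math.sqrt(sum(c * c for c in center))
    return 1.0 - dot / (nf * nc)

# Identical direction -> loss 0; orthogonal directions -> loss 1.
print(cosine_loss([2.0, 0.0], [1.0, 0.0]))  # 0.0
print(cosine_loss([0.0, 1.0], [1.0, 0.0]))  # 1.0
```

Because the loss depends only on direction, not magnitude, it shapes the angular layout of the feature space, which is consistent with the observation that cluster centers themselves can also drift closer together.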


Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 721 ◽  
Author(s):  
YuGuang Long ◽  
LiMin Wang ◽  
MingHui Sun

Due to the simplicity and competitive classification performance of naive Bayes (NB), researchers have proposed many approaches to improve NB by weakening its attribute independence assumption. Theoretical analysis based on Kullback–Leibler divergence shows that the difference between NB and its variations lies in the different orders of conditional mutual information represented by the augmenting edges in the tree-shaped network structure. In this paper, we propose to relax the independence assumption by further generalizing tree-augmented naive Bayes (TAN) from 1-dependence Bayesian network classifiers (BNC) to arbitrary k-dependence. Sub-models of TAN built to represent specific conditional dependence relationships may “best match” the conditional probability distribution over the training data. Extensive experimental results reveal that the proposed algorithm achieves a bias-variance trade-off and substantially better generalization performance than state-of-the-art classifiers such as logistic regression.
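The conditional mutual information that weights these augmenting edges, I(X_i; X_j | C), can be estimated from empirical counts. The sketch below uses a tiny hand-made dataset and illustrates only the quantity itself, not the proposed k-dependence algorithm.

```python
import math
from collections import Counter

def cond_mutual_info(samples):
    """Estimate I(X; Y | C) from (x, y, c) samples using empirical
    frequencies: sum over (x,y,c) of
    p(x,y,c) * log( p(x,y|c) / (p(x|c) * p(y|c)) )."""
    n = len(samples)
    pxyc = Counter(samples)
    pxc = Counter((x, c) for x, _, c in samples)
    pyc = Counter((y, c) for _, y, c in samples)
    pc = Counter(c for _, _, c in samples)
    cmi = 0.0
    for (x, y, c), nxyc in pxyc.items():
        cmi += (nxyc / n) * math.log(
            (nxyc / pc[c]) / ((pxc[(x, c)] / pc[c]) * (pyc[(y, c)] / pc[c]))
        )
    return cmi

# X deterministically equals Y within each class -> strong dependence,
# so the edge X-Y would be a good candidate for augmentation.
data = [(0, 0, 0), (1, 1, 0), (0, 0, 1), (1, 1, 1)] * 5
print(cond_mutual_info(data))  # log 2, about 0.693 nats
```

In TAN-style structure learning, edges with high conditional mutual information are the ones retained when building the augmenting tree.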


Author(s):  
Weida Zhong ◽  
Qiuling Suo ◽  
Abhishek Gupta ◽  
Xiaowei Jia ◽  
Chunming Qiao ◽  
...  

With the popularity of smartphones, large-scale road sensing data are being collected to perform traffic prediction, an important task in modern society. Due to the nature of roving sensors on smartphones, the collected traffic data, which take the form of multivariate time series, are often temporally sparse and unevenly distributed across regions. Moreover, different regions can have different traffic patterns, which makes it challenging to adapt models learned from regions with sufficient training data to target regions. Given that many regions may have very sparse data, it is also infeasible to build individual models for each region separately. In this paper, we propose a meta-learning based framework named MetaTP to overcome these challenges. MetaTP has two key parts: a basic traffic prediction network (base model) and meta-knowledge transfer. In the base model, a two-layer interpolation network is employed to map original time series onto uniformly spaced reference time points, so that temporal prediction can be performed effectively in the reference space. The meta-learning framework transfers knowledge from source regions with large amounts of data to target regions with few data examples via fast adaptation, in order to improve model generalizability on target regions. Moreover, we use two memory networks to capture global patterns of spatial and temporal information across regions. We evaluate the proposed framework on two real-world datasets, and the experimental results show the effectiveness of the proposed framework.
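The mapping onto uniformly spaced reference time points can be illustrated with a plain linear-interpolation sketch. The interpolation network in MetaTP is learned, so this is only the underlying idea, with made-up sample times and speed values.

```python
import bisect

def to_reference_grid(times, values, ref_times):
    """Linearly interpolate an unevenly sampled series onto
    uniformly spaced reference time points; clamp at the ends."""
    out = []
    for t in ref_times:
        i = bisect.bisect_left(times, t)
        if i == 0:
            out.append(values[0])
        elif i == len(times):
            out.append(values[-1])
        else:
            t0, t1 = times[i - 1], times[i]
            v0, v1 = values[i - 1], values[i]
            out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out

# Sparse, uneven speed readings mapped onto a uniform grid.
times = [0.0, 1.0, 4.0]
speeds = [30.0, 40.0, 10.0]
grid = to_reference_grid(times, speeds, [0.0, 2.0, 4.0])
print(grid)  # [30.0, 30.0, 10.0]
```

Once every region's series lives on the same reference grid, predictions and cross-region knowledge transfer can operate on fixed-length inputs.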


2013 ◽  
Vol 427-429 ◽  
pp. 2309-2312
Author(s):  
Hai Bin Mei ◽  
Ming Hua Zhang

Alert classifiers built with supervised classification techniques require large amounts of labeled training alerts. Preparing such training data is difficult and expensive, which greatly restricts the accuracy and feasibility of current classifiers. This paper employs semi-supervised learning to build an alert classification model that reduces the number of labeled training alerts needed. Alert context properties are also introduced to improve classification performance. Experiments have demonstrated the accuracy and feasibility of our approach.
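A minimal self-training loop illustrates the general semi-supervised idea: a few labeled alerts seed a classifier, and confident predictions on unlabeled alerts are folded back into the training set. The nearest-centroid base learner and the toy 2-D alert features below are placeholders; the paper's actual semi-supervised method and alert context properties are not reproduced here.

```python
import math

def centroid(points):
    """Component-wise mean of a list of equal-length points."""
    return [sum(c) / len(points) for c in zip(*points)]

def self_train(labeled, unlabeled, rounds=3):
    """Each round, label the single unlabeled alert closest to any
    class centroid and add it to the labeled set."""
    labeled = dict(labeled)  # {point: label}
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        cents = {
            lab: centroid([p for p, l in labeled.items() if l == lab])
            for lab in set(labeled.values())
        }
        # Most confident = closest to some class centroid.
        p = min(pool, key=lambda q: min(math.dist(q, c) for c in cents.values()))
        labeled[p] = min(cents, key=lambda l: math.dist(p, cents[l]))
        pool.remove(p)
    return labeled

seed = {(0.0, 0.0): "false", (10.0, 10.0): "true"}
result = self_train(seed, [(1.0, 1.0), (9.0, 9.0), (5.0, 5.5)])
print(result)
```

The benefit over plain supervised training is that only the two seed alerts needed manual labels; the rest were labeled by the model itself.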


Author(s):  
Jamil Baz ◽  
Nicolas M Granger ◽  
Campbell R. Harvey ◽  
Nicolas Le Roux ◽  
Sandy Rattray

Author(s):  
Сергій Миколайович Лисенко

The dynamic expansion of cyber threats creates an urgent need for new methods and systems for their detection. The subject of the study is the process of ensuring the resilience of computer systems in the presence of cyber threats. The goal is to develop a self-adaptive method for computer system resilience under cyberattacks. Results. The article presents a self-adaptive system that ensures the resilience of corporate networks in the presence of botnet cyberattacks. Resilience is provided by adaptive network reconfiguration, carried out using security scenarios selected through cluster analysis of collected network features characteristic of cyberattacks. To select the necessary security scenarios, the proposed method uses semi-supervised fuzzy c-means clustering. To detect host-type cyberattacks, information about the hosts’ network activity and reports from host antiviruses are collected. To detect network-type attacks, network activity that may indicate the appearance of a cyberattack is monitored. Based on the information gathered in the network about possible botnet attacks, measures for the resilient functioning of the network are taken. To choose the appropriate network reconfiguration scenario, clustering is performed on labeled training data. The objects of clustering are feature vectors obtained from the payload of inbound and outbound traffic and from antivirus reports about possible host infections. The result of clustering is the degree of membership of each feature vector in one of the clusters; this membership determines which network reconfiguration scenario is to be applied in the event of a botnet attack. The selected scenario carries the list of requirements for reconfiguring network parameters that will assure the network’s resilience under botnet attacks, and the system also contains clusters that indicate normal network behavior. The purpose of the method is to select security scenarios that respond to botnet cyberattacks, mitigating their consequences and ensuring resilient network functioning. Conclusions. A self-adaptive method for computer system resilience in the presence of cyberattacks has been developed. Based on the proposed method, a self-adaptive attack detection and mitigation system has been built; it ensures the resilient functioning of the network in the presence of botnet cyberattacks in 70% of cases.
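The membership degree at the heart of the scenario choice can be sketched with the standard fuzzy c-means membership formula. The two cluster centers below are fixed and made up (e.g. a "normal" and an "attack" scenario cluster); the full method also incorporates labeled data, which this sketch omits.

```python
import math

def fcm_membership(x, centers, m=2.0):
    """Fuzzy c-means membership of point x in each cluster:
    u_k = 1 / sum_j (d_k / d_j)^(2/(m-1)), with d_k the distance
    from x to center k and m the fuzzifier."""
    d = [math.dist(x, c) for c in centers]
    u = []
    for k in range(len(centers)):
        if d[k] == 0.0:  # x coincides with a center: crisp membership
            return [1.0 if j == k else 0.0 for j in range(len(centers))]
        u.append(1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d))
    return u

# Two scenario clusters; the feature vector sits nearer the first.
centers = [(0.0, 0.0), (10.0, 10.0)]
u = fcm_membership((2.0, 2.0), centers)
print(u)  # memberships sum to 1; higher for the nearer cluster
```

The scenario associated with the highest-membership cluster would then drive the network reconfiguration.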


Author(s):  
H. Miyazaki ◽  
M. Nagai ◽  
R. Shibasaki

A methodology for automated human settlement mapping is greatly needed to exploit historical satellite data archives for urgent issues of global urban growth, such as disaster risk management, public health, food security, and urban management. Global datasets with spatial resolutions of 10-100 m have been developed by initiatives using ASTER, Landsat, and TerraSAR-X; the next goal is time-series data that can support studies of urban development against the background of socioeconomy, disaster risk management, public health, transport, and other development issues. We developed an automated algorithm that detects human settlement by classifying built-up and non-built-up areas in time-series Landsat images. A machine learning algorithm, Learning with Local and Global Consistency (LLGC), was applied with improvements for remote sensing data. The algorithm can use MCD12Q1, a MODIS-based global land cover map with 500-m resolution, as training data, so that no manual process is required to prepare training data. In addition, we designed a method to composite multiple LLGC results into a single output to reduce uncertainty. Each LLGC result has a confidence value ranging from 0.0 to 1.0 representing the probability of built-up or non-built-up. The median confidence over a period around a target time was expected to be a robust indicator of built-up or non-built-up areas against uncertainties in satellite data quality, such as cloud and haze contamination. Four scenes of Landsat data for each target year (1990, 2000, 2005, and 2010) were chosen from the Landsat archive with cloud contamination of less than 20%. We implemented the algorithms on the Data Integration and Analysis System (DIAS) at the University of Tokyo and processed 5200 Landsat scenes for cities with more than one million people worldwide.
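The LLGC label-propagation update, F ← αSF + (1 − α)Y, can be sketched on a toy graph. The normalized affinity matrix S and seed labels Y below are made up; the study applies this to Landsat pixels with MODIS-derived training labels rather than a three-node chain.

```python
def llgc(S, Y, alpha=0.9, iters=100):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y so that seed-label
    information spreads over the graph (S: affinity matrix as row
    lists, Y: one-hot seed labels, alpha: propagation weight)."""
    F = [row[:] for row in Y]
    for _ in range(iters):
        F = [
            [
                alpha * sum(S[i][k] * F[k][j] for k in range(len(S)))
                + (1 - alpha) * Y[i][j]
                for j in range(len(Y[0]))
            ]
            for i in range(len(S))
        ]
    return F

# 3-node chain: node 0 seeded "built-up", node 2 "non-built-up",
# node 1 unlabeled; S is a symmetrically normalized affinity matrix.
S = [[0.0, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.0]]
Y = [[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]]
F = llgc(S, Y)
print(F[1])  # node 1 ends up equally confident in both classes
```

Normalizing each node's scores then yields the 0.0-1.0 confidence values that the median compositing step aggregates across acquisition dates.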


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7417
Author(s):  
Alex J. Hope ◽  
Utkarsh Vashisth ◽  
Matthew J. Parker ◽  
Andreas B. Ralston ◽  
Joshua M. Roper ◽  
...  

Concussion injuries remain a significant public health challenge. A substantial unmet clinical need remains for tools that allow related physiological impairments and longer-term health risks to be identified earlier, better quantified, and more easily monitored over time. We address this challenge by combining a head-mounted wearable inertial motion unit (IMU)-based physiological vibration acceleration (“phybrata”) sensor with several candidate machine learning (ML) models. The performance of this solution is assessed for both binary classification of concussion patients and multiclass prediction of specific concussion-related neurophysiological impairments. Results are compared with previously reported approaches to ML-based concussion diagnostics. Using phybrata data from a previously reported concussion study population, four machine learning models (Support Vector Machine, Random Forest Classifier, Extreme Gradient Boost, and Convolutional Neural Network) are first investigated for binary classification of the test population as healthy vs. concussion (Use Case 1). Results are compared for two data preprocessing pipelines, Time-Series Averaging (TSA) and Non-Time-Series Feature Extraction (NTS). Next, the three best-performing NTS models are compared in terms of their multiclass prediction performance for specific concussion-related impairments: vestibular, neurological, or both (Use Case 2). For Use Case 1, the NTS approach outperformed the TSA approach, with the two best algorithms achieving an F1 score of 0.94. For Use Case 2, the NTS Random Forest model achieved the best performance on the testing set, with an F1 score of 0.90, and identified a wider range of relevant phybrata signal features contributing to impairment classification than manual feature inspection and statistical data analysis. The overall classification performance achieved in the present work exceeds previously reported approaches to ML-based concussion diagnostics using other data sources and ML models. This study also demonstrates the first combination of a wearable IMU-based sensor and an ML model that enables both binary classification of concussion patients and multiclass prediction of specific concussion-related neurophysiological impairments.
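The F1 scores reported above combine precision and recall; a quick sketch of the metric on synthetic labels (the label vectors below are made up for illustration, not study data):

```python
def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 1 = concussion, 0 = healthy (synthetic labels).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(f1_score(y_true, y_pred))  # 0.75
```

Because F1 ignores true negatives, it is a common choice when the healthy and concussion classes are imbalanced.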




Region Direct ◽  
2014 ◽  
Vol 7 (1) ◽  
pp. 77-104
Author(s):  
Martin Alexy ◽  
Marek Káčer

Abstract In this paper we study the creative capacity of the economies of the Visegrad Four countries in the period 2000-2011. The Creativity Index is constructed based on the 3Ts concept of talent, technology, and tolerance as the key components of creativity. The index is measured and calculated along both cross-section and time-series dimensions. The paper provides the index as an open source together with a description of the variables and their respective weights. The comparison of the creative capacity of the economies is based on the empirical results of the Creativity Index and its components. The Czech Republic ranks first and Hungary second continuously throughout the examined period. The talent and technology components are the main sources of the differences between the two leading countries and the rest.

