Improving the Authentication with Built-In Camera Protocol Using Built-In Motion Sensors: A Deep Learning Solution

Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1786
Author(s):  
Cezara Benegui ◽  
Radu Tudor Ionescu

In this paper, we propose an enhanced version of the Authentication with Built-in Camera (ABC) protocol by employing a deep learning solution based on built-in motion sensors. The standard ABC protocol identifies mobile devices based on the photo-response non-uniformity (PRNU) of the camera sensor, while also considering QR-code-based meta-information. During registration, users are required to capture photos using their smartphone camera. The photos are sent to a server that computes the camera fingerprint, storing it as an authentication trait. During authentication, the user is required to take two photos that contain two QR codes presented on a screen. The presented QR code images also contain a unique probe signal, similar to a camera fingerprint, generated by the protocol. During verification, the server computes the fingerprint of the received photos and authenticates the user if (i) the probe signal is present, (ii) the metadata embedded in the QR codes is correct and (iii) the camera fingerprint is identified correctly. However, the protocol is vulnerable to forgery attacks when the attacker can compute the camera fingerprint from external photos, as shown in our preliminary work. Hence, attackers can easily remove their PRNU from the authentication photos without completely altering the probe signal, resulting in attacks that bypass the defense systems of the ABC protocol. In this context, we propose an enhancement to the ABC protocol, using motion sensor data as an additional and passive authentication layer. Smartphones can be identified through their motion sensor data, which, unlike photos, is never posted by users on social media platforms, thus being more secure than using photographs alone. To this end, we transform motion signals into embedding vectors produced by deep neural networks, applying Support Vector Machines for the smartphone identification task. 
Our change to the ABC protocol results in a multi-modal protocol that lowers the false acceptance rate for the attack proposed in our previous work to a percentage as low as 0.07%. In this paper, we present the attack that makes ABC vulnerable, as well as our multi-modal ABC protocol along with relevant experiments and results.
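The device-identification step described above can be sketched in a few lines; synthetic clustered vectors stand in here for the deep-network embeddings, and all sizes and parameters are illustrative, not the paper's:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for deep-network embeddings: each device's motion-sensor
# windows are summarised as fixed-length vectors clustered per device.
n_devices, samples_per_device, dim = 5, 40, 64
centers = rng.normal(size=(n_devices, dim))
X = np.vstack([c + 0.1 * rng.normal(size=(samples_per_device, dim)) for c in centers])
y = np.repeat(np.arange(n_devices), samples_per_device)

# An SVM partitions the embedding space into per-device regions,
# mirroring the smartphone-identification step of the enhanced protocol.
clf = SVC(kernel="rbf").fit(X, y)
probe = centers[2] + 0.1 * rng.normal(size=dim)
print(clf.predict([probe])[0])  # recovers device 2
```

In the actual protocol this decision would be one authentication factor among several, combined with the PRNU and probe-signal checks.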

MIS Quarterly ◽  
2021 ◽  
Vol 45 (2) ◽  
pp. 859-896
Author(s):  
Hongyi Zhu ◽  
Sagar Samtani ◽  
Randall Brown ◽  
Hsinchun Chen

Ensuring the health and safety of senior citizens who live alone is a growing societal concern. The Activity of Daily Living (ADL) approach is a common means to monitor disease progression and the ability of these individuals to care for themselves. However, the prevailing sensor-based ADL monitoring systems primarily rely on wearable motion sensors, capture insufficient information for accurate ADL recognition, and do not provide a comprehensive understanding of ADLs at different granularities. Current healthcare IS and mobile analytics research focuses on studying the system, device, and provided services, and is in need of an end-to-end solution to comprehensively recognize ADLs based on mobile sensor data. This study adopts the design science paradigm and employs advanced deep learning algorithms to develop a novel hierarchical, multiphase ADL recognition framework to model ADLs at different granularities. We propose a novel 2D interaction kernel for convolutional neural networks to leverage interactions between human and object motion sensors. We rigorously evaluate each proposed module and the entire framework against state-of-the-art benchmarks (e.g., support vector machines, DeepConvLSTM, hidden Markov models, and topic-modeling-based ADLR) on two real-life motion sensor datasets that consist of ADLs at varying granularities: Opportunity and INTER. Results and a case study demonstrate that our framework can recognize ADLs at different levels more accurately. We discuss how stakeholders can further benefit from our proposed framework. Beyond demonstrating practical utility, we discuss contributions to the IS knowledge base for future design science-based cybersecurity, healthcare, and mobile analytics applications.
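The idea of a 2D interaction kernel can be illustrated with a toy numpy sketch (not the paper's exact kernel design): stacking a human-worn and an object-attached sensor stream as rows of a 2D array and sliding a kernel tall enough to span both rows, so every response mixes the two signals.

```python
import numpy as np

# Toy signals: a wrist sensor and a door-handle sensor during one ADL.
t = np.linspace(0, 4 * np.pi, 100)
human = np.sin(t)                   # wrist motion while opening a door
obj = np.roll(np.sin(t), 5)        # the handle moves with a short lag
stacked = np.vstack([human, obj])  # shape (2, 100): sensors x time

# A kernel spanning both sensor rows and 5 time steps: each output
# value depends jointly on human and object motion, which is the
# human-object interaction a plain 1D per-sensor kernel cannot capture.
kernel = np.ones((2, 5)) / 10.0
width = kernel.shape[1]
response = np.array([
    np.sum(stacked[:, i:i + width] * kernel)   # valid cross-correlation
    for i in range(stacked.shape[1] - width + 1)
])
print(response.shape)  # (96,): one interaction channel over time
```

In the paper's framework such interaction channels feed a CNN; here the point is only the joint human-object receptive field.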


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

Abstract. The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. A lack of transparency of “black-box” deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions as relevance heatmaps computed with Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that distinguish healthy participants from people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished MS disability from healthy participants. 
Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, to inform better disease management techniques, and enable the development of better therapeutic interventions.
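The core LRP redistribution rule can be illustrated on a single dense layer (a toy epsilon-rule sketch; the paper applies LRP through a full DCNN, layer by layer):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)            # input features (e.g. gait descriptors)
W = rng.normal(size=(4, 3))       # dense-layer weights
z = x @ W                         # pre-activations of 3 output units

# Epsilon-rule: each input receives relevance in proportion to its
# contribution x_i * w_ij to each output, with a small stabiliser eps.
eps = 1e-6
R_out = z                         # take the outputs themselves as relevance
R_in = x * (W @ (R_out / (z + eps * np.sign(z))))

# Relevance is (approximately) conserved from outputs to inputs,
# which is what lets heatmaps be read as attributions of the decision.
print(np.allclose(R_in.sum(), R_out.sum(), rtol=1e-3))
```

Chaining this rule backwards through every layer of a DCNN yields the per-input relevance heatmaps described in the abstract.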


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 546 ◽  
Author(s):  
Haibin Yu ◽  
Guoxiong Pan ◽  
Mian Pan ◽  
Chong Li ◽  
Wenyan Jia ◽  
...  

Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for the recognition of egocentric activities of daily living (ADLs) in a wearable hybrid sensor system comprising motion sensors and cameras. Long short-term memory (LSTM) and a convolutional neural network are used to perform egocentric ADL recognition based on motion sensor data and photo streams in different layers, respectively. The motion sensor data are used solely for activity classification according to motion state, while the photo stream is used for further specific activity recognition in the motion state groups. Thus, both motion sensor data and photo stream work in their most suitable classification mode to significantly reduce the negative influence of sensor differences on the fusion results. Experimental results show that the proposed method not only is more accurate than the existing direct fusion method (by up to 6%) but also avoids the time-consuming computation of optical flow in the existing method, which makes the proposed algorithm less complex and more suitable for practical application.
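The two-layer decision flow can be sketched with stand-in classifiers (the paper uses an LSTM on motion data and a CNN on photos; the labels, energy threshold, and scoring function below are illustrative):

```python
# Layer 1 stand-in: coarse motion state from an inertial-data window.
def motion_state(motion_window):
    energy = sum(v * v for v in motion_window) / len(motion_window)
    return "ambulating" if energy > 0.5 else "stationary"

# Each coarse state constrains which fine-grained ADLs are plausible.
FINE_LABELS = {
    "stationary": ["eating", "reading", "watching TV"],
    "ambulating": ["walking", "housework"],
}

def recognise(motion_window, photo_score_fn):
    # Layer 2 stand-in: the photo-based classifier only chooses among
    # activities plausible for the detected motion state, so sensor
    # disagreements cannot produce incoherent fused predictions.
    state = motion_state(motion_window)
    return max(FINE_LABELS[state], key=photo_score_fn)

# Toy usage: a photo model that favours "eating" scenes.
scores = {"eating": 0.9, "reading": 0.3, "watching TV": 0.2}
print(recognise([0.1, 0.2, 0.1], scores.get))  # low energy -> stationary -> eating
```

Restricting the second layer to the first layer's state group is the "most suitable classification mode" idea from the abstract.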


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3876 ◽  
Author(s):  
Tiantian Zhu ◽  
Zhengqiu Weng ◽  
Guolang Chen ◽  
Lei Fu

With the popularity of smartphones and the development of hardware, mobile devices are widely used by people. To ensure availability and security, how to protect private data in mobile devices without disturbing users has become a key issue. Mobile user authentication methods based on motion sensors have been proposed in many works, but existing methods suffer from a series of problems, such as poor de-noising ability, insufficient usability, and low coverage of feature extraction. To address these shortcomings, this paper proposes a hybrid deep learning system for complex real-world mobile authentication. The system includes: (1) a variational mode decomposition (VMD) based de-noising method that enhances singular features of the sensor signal, such as discontinuities and abrupt changes, and widens the range of feature extraction; (2) a semi-supervised collaborative training (Tri-Training) method to effectively deal with mislabeling problems in complex real-world situations; and (3) a combined convolutional neural network (CNN) and support vector machine (SVM) model for effective hybrid feature extraction and training. Results on large-scale, real-world data show that the proposed system achieves 95.01% authentication accuracy, outperforming existing state-of-the-art methods.
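The final feature-extraction-plus-SVM stage can be sketched as follows; the VMD de-noising and the real CNN are omitted, with simple window statistics standing in for learned features, and the synthetic "legitimate vs. other" data is purely illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def features(window):
    # Stand-in for the CNN feature extractor in the hybrid model.
    return [window.mean(), window.std(), np.abs(np.diff(window)).mean()]

# Toy motion windows: the legitimate user's motion is smooth and
# periodic, while other users' windows are jerky.
legit = [np.sin(np.linspace(0, 6, 50)) + 0.05 * rng.normal(size=50) for _ in range(30)]
other = [rng.normal(size=50) for _ in range(30)]
X = np.array([features(w) for w in legit + other])
y = np.array([1] * 30 + [0] * 30)

# The SVM makes the final accept/reject authentication decision.
clf = SVC().fit(X, y)
test_window = np.sin(np.linspace(0, 6, 50)) + 0.05 * rng.normal(size=50)
print(clf.predict([features(test_window)])[0])  # 1 = accept
```

In the full system, the features fed to the SVM would come from the CNN trained on VMD-denoised signals rather than hand-picked statistics.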


2020 ◽  
Author(s):  
Cezara Benegui ◽  
Radu Tudor Ionescu

[Paper accepted at ACNS 2020]

In this paper, we propose a simple and effective attack on the recently introduced Smartphone Authentication with Built-in Camera Protocol, called ABC. The ABC protocol uses the photo-response non-uniformity (PRNU) as the main authentication factor in combination with anti-forgery detection systems. The ABC protocol interprets the PRNU as a fingerprint of the camera sensor built into a smartphone device. The protocol works as follows: during the authentication process, the user is challenged with two QR codes (sent by the server) that need to be photographed with a pre-registered device. In each QR code, the server embeds a unique pattern noise (not visible to the naked eye), called probe signal, that is used to identify potential forgeries. The inserted probe signal is very similar to a genuine fingerprint. The photos of QR codes taken by the user are then sent to the server for verification. The server checks (i) if the photos contain the user's camera fingerprint (used to authenticate the pre-registered device) and (ii) if the photos contain the embedded probe signal. If an adversary tries to remove (subtract) his own camera fingerprint and replace it with the victim's camera fingerprint (computed from photos shared on social media), then he will implicitly remove the embedded probe signal and the attack will fail. The ABC protocol is able to detect these attacks with a false acceptance rate (FAR) of 0.5%. However, the ABC protocol wrongly assumes that the attacker can only determine his own camera fingerprint from the photos of the presented QR codes. The attack proposed in our work is able to get past the anti-forgery detection system with a FAR of 54.1%, simply by estimating the attacker's camera fingerprint from a different set of photos (e.g. five photos) owned by the attacker.
This set of photos can be trivially obtained before the attack, allowing the adversary to compute his camera fingerprint independently of the attack. The key to the success of our attack is that the independently computed adversary's camera fingerprint does not contain the probe signal embedded in the QR codes. Therefore, when we subtract the adversary's camera fingerprint and add the victim's camera fingerprint, the embedded probe signal will remain in place. For this reason, the proposed attack can successfully pass through the anti-forgery detection system of the ABC protocol. In this paper, we also propose a potential fix based on analyzing signals from built-in motion sensors, which are not typically shared on social media.
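The fingerprint-substitution arithmetic at the heart of the attack can be checked numerically; the signals below are synthetic, and a simple correlation stands in for the server's PRNU detector:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000  # flattened pixel count

probe = rng.normal(size=n)        # server's embedded probe signal
f_attacker = rng.normal(size=n)   # attacker's camera fingerprint
f_victim = rng.normal(size=n)     # victim's fingerprint (from social media)

# Photo the attacker takes of the QR code: it contains the probe and
# the attacker's own fingerprint (image content omitted for clarity).
photo = probe + f_attacker

# Forgery: subtract the *independently estimated* attacker fingerprint
# and add the victim's. Because the independent estimate contains no
# probe, the probe survives the substitution.
forged = photo - f_attacker + f_victim

def corr(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(round(corr(forged, probe), 2))     # probe still detectable (high)
print(round(corr(forged, f_victim), 2))  # victim fingerprint detectable (high)
```

Both checks the server performs therefore pass, which is exactly why the forgery slips past the anti-forgery system.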


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3363 ◽  
Author(s):  
Taylor Mauldin ◽  
Marc Canby ◽  
Vangelis Metsis ◽  
Anne Ngu ◽  
Coralys Rivera

This paper presents SmartFall, an Android app that uses accelerometer data collected from a commodity-based smartwatch Internet of Things (IoT) device to detect falls. The smartwatch is paired with a smartphone that runs the SmartFall application, which performs the computation necessary for the prediction of falls in real time without incurring latency in communicating with a cloud server, while also preserving data privacy. We experimented with both traditional (Support Vector Machine and Naive Bayes) and non-traditional (Deep Learning) machine learning algorithms for the creation of fall detection models using three different fall datasets (Smartwatch, Notch, Farseeing). Our results show that a Deep Learning model for fall detection generally outperforms more traditional models across the three datasets. This is attributed to the Deep Learning model’s ability to automatically learn subtle features from the raw accelerometer data that are not available to Naive Bayes and Support Vector Machine, which are restricted to learning from a small set of manually specified features. Furthermore, the Deep Learning model exhibits a better ability to generalize to new users when predicting falls, an important quality of any model that is to be successful in the real world. We also present a three-layer open IoT system architecture used in SmartFall, which can be easily adapted for the collection and analysis of other sensor data modalities (e.g., heart rate, skin temperature, walking patterns), enabling remote monitoring of a subject’s wellbeing.
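The kind of handcrafted signature the traditional models rely on can be illustrated with a toy rule (thresholds are illustrative, not the paper's trained models): a fall appears in wrist accelerometer data as a free-fall dip followed by an impact spike in the acceleration magnitude.

```python
import numpy as np

def magnitude(ax, ay, az):
    # Orientation-independent acceleration magnitude, a classic
    # handcrafted feature for wearable fall detection.
    return np.sqrt(ax**2 + ay**2 + az**2)

def looks_like_fall(mag, free_fall_g=0.4, impact_g=2.5):
    # Free-fall dip followed (within the window) by an impact peak.
    dip = np.flatnonzero(mag < free_fall_g)
    return dip.size > 0 and mag[dip[0]:].max() > impact_g

t = np.arange(100)
mag = np.ones(100)                 # ~1 g while walking
mag[40:45] = 0.1                   # free-fall phase
mag[46] = 3.2                      # impact
print(looks_like_fall(magnitude(0 * t, 0 * t, mag)))  # True
```

A deep model learns subtler versions of such patterns directly from the raw stream, which is the advantage the abstract attributes to it.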


Symmetry ◽  
2020 ◽  
Vol 12 (9) ◽  
pp. 1570
Author(s):  
Sakorn Mekruksavanich ◽  
Anuchit Jitpattanakul ◽  
Phichai Youplao ◽  
Preecha Yupapin

The creation of the Internet of Things (IoT), along with the latest developments in wearable technology, has provided new opportunities in human activity recognition (HAR). The modern smartwatch offers the potential for data from sensors to be relayed to novel IoT platforms, which allow the constant tracking and monitoring of human movement and behavior. Activity recognition research has traditionally relied on machine learning methods such as artificial neural networks, decision trees, support vector machines, and naive Bayes. Nonetheless, these conventional machine learning techniques depend inevitably on heuristically handcrafted feature extraction, where human domain knowledge is normally limited. This work proposes a hybrid deep learning model called CNN-LSTM that combines a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) networks for activity recognition. The study uses smartwatch-based HAR to categorize hand movements. The recognition abilities of the deep learning model are assessed on the Wireless Sensor Data Mining (WISDM) public benchmark dataset. Accuracy, precision, recall, and F-measure are employed as evaluation metrics to assess the recognition abilities of the proposed models. The findings indicate that this hybrid deep learning model outperforms its rivals, achieving 96.2% accuracy and an F-measure of 96.3%. The results show that the proposed CNN-LSTM can improve the performance of activity recognition.
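The windowing step that prepares raw accelerometer streams for such a CNN-LSTM can be sketched as follows (the CNN sees each short window, the LSTM sees the window sequence; window length and overlap are illustrative choices, not the paper's):

```python
import numpy as np

def sliding_windows(signal, width=128, step=64):
    # signal: (timesteps, channels) raw sensor stream.
    # Returns (n_windows, width, channels), ready for a per-window CNN
    # whose outputs are then fed in order to an LSTM.
    starts = range(0, signal.shape[0] - width + 1, step)
    return np.stack([signal[s:s + width] for s in starts])

# Toy 3-axis accelerometer stream, WISDM-style.
stream = np.random.default_rng(4).normal(size=(1000, 3))
windows = sliding_windows(stream)
print(windows.shape)  # (14, 128, 3)
```

The 50% overlap (step half the width) is a common HAR choice that keeps activity transitions from falling between windows.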


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4669
Author(s):  
Muhammad Awais ◽  
Lorenzo Chiari ◽  
Espen A. F. Ihlen ◽  
Jorunn L. Helbostad ◽  
Luca Palmerini

Physical activity has a strong influence on mental and physical health and is essential in healthy ageing and wellbeing for the ever-growing elderly population. Wearable sensors can provide a reliable and economical measure of activities of daily living (ADLs) by capturing movements through, e.g., accelerometers and gyroscopes. This study explores the potential of using classical machine learning and deep learning approaches to classify the most common ADLs: walking, sitting, standing, and lying. We validate the results on the ADAPT dataset, the most detailed dataset to date of inertial sensor data, synchronised with high frame-rate video labelled data recorded in a free-living environment from older adults living independently. The findings suggest that both approaches can accurately classify ADLs, showing high potential in profiling ADL patterns of the elderly population in free-living conditions. In particular, both long short-term memory (LSTM) networks and Support Vector Machines combined with ReliefF feature selection performed equally well, achieving around 97% F-score in profiling ADLs.
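The feature-selection-plus-SVM branch of the comparison can be sketched as follows; ReliefF is not available in scikit-learn, so mutual information stands in as the ranking criterion, and the features and ADL labels are synthetic:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# 3 informative features (e.g. posture angle, movement energy, tilt)
# buried among 17 uninformative ones.
n = 200
informative = rng.normal(size=(n, 3))
noise = rng.normal(size=(n, 17))
X = np.hstack([informative, noise])
y = (informative[:, 0] + informative[:, 1] > 0).astype(int)  # toy sit-vs-stand label

# Rank features, keep the top 3, then classify with an SVM.
model = make_pipeline(SelectKBest(mutual_info_classif, k=3), SVC())
model.fit(X[:150], y[:150])
acc = model.score(X[150:], y[150:])
print(round(acc, 2))  # well above chance: selection keeps the informative features
```

The LSTM branch would instead consume the raw windowed signals directly, skipping both the handcrafted features and the selection step.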


Author(s):  
M. Brandmeier ◽  
Y. Chen

Abstract. Deep learning has been used successfully in computer vision problems, e.g. image classification, target detection and many more. We use deep learning in conjunction with ArcGIS to implement a model with advanced convolutional neural networks (CNN) for lithological mapping in the Mount Isa region (Australia). The area is ideal for spectral remote sensing as there is only sparse vegetation and, besides freely available Sentinel-2 and ASTER data, several geophysical datasets are available from exploration campaigns. By fusing the data, and thus covering a wide spectral range as well as capturing geophysical properties of rocks, we aim at improving classification accuracies and supporting geological mapping. We also evaluate the performance of the sensors on their own compared to their joint use, as the Sentinel-2 satellites are relatively new and only few studies for geological applications exist as of now. We developed an end-to-end deep learning model using Keras and TensorFlow that consists of several convolutional, pooling and deconvolutional layers. Our model was inspired by the family of U-Net architectures, where low-level feature maps (encoders) are concatenated with high-level ones (decoders), which enables precise localization. This type of network architecture is especially designed to solve pixel-wise classification problems effectively, which is appropriate for lithological classification. We spatially resampled and fused the multi-sensor remote sensing data with different bands and geophysical data into image cubes as input for our model. Pre-processing was done in ArcGIS, and the final, fine-tuned model was imported into a toolbox to be used on further scenes directly in the GIS environment. The tool classifies each pixel of the multiband imagery into different types of rocks according to a defined probability threshold.
Results highlight the power of using Sentinel-2 in conjunction with ASTER data, with accuracies of 75% in comparison to only 70% and 73% for ASTER or Sentinel-2 data alone. Overall results are similar, but examining the individual classes shows significant improvements for classes such as dolerite or carbonate sediments that are not widely distributed in the area. Adding geophysical datasets reduced accuracies to 60%, probably due to an order-of-magnitude difference in spatial resolution. In comparison, Random Forest (RF) and Support Vector Machines (SVMs) trained on the same data only achieve accuracies of 46% and 36%, respectively. Most uncertainty is due to labelling errors and labels with mixed lithologies. However, the results show that the U-Net model is a powerful alternative to other classifiers for medium-resolution multispectral data.
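The image-cube fusion step can be sketched with synthetic rasters (the band counts, resolutions, and nearest-neighbour resampling below are illustrative, not the paper's exact pre-processing):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy rasters over the same tile: Sentinel-2-like bands at 10 m and
# ASTER-like bands at 30 m (height, width, bands).
s2 = rng.normal(size=(90, 90, 4))     # 4 bands on a 90x90 10 m grid
aster = rng.normal(size=(30, 30, 6))  # 6 bands on a 30x30 30 m grid

# Nearest-neighbour upsampling of ASTER to the 10 m grid (3x repeat
# along both spatial axes), then stacking along the band axis.
aster_up = aster.repeat(3, axis=0).repeat(3, axis=1)
cube = np.concatenate([s2, aster_up], axis=2)
print(cube.shape)  # (90, 90, 10): one fused multi-sensor cube per tile
```

Each such cube is what the U-Net consumes, producing a per-pixel lithology label map of the same spatial extent.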



