Wearable Devices
Recently Published Documents

2021, Vol 2021, pp. 1-9
Semi Park, Riha Kim, Hyunsik Yoon, Kyungho Lee

With the development of IoT devices, wearable devices are increasingly used to record many types of information. Worn on the body, they can collect and transmit user data continuously, typically in tandem with a smartphone. Sensitive information such as location has particular value in terms of privacy, so some IoT devices protect it with techniques such as masking. However, masking protects privacy only to a limited extent in logs containing large amounts of recorded data, and its effectiveness decreases further when the masked values can be linked with other information collected within the device. Herein, we describe a scenario-based case study on deanonymizing anonymized location information using the logs stored in wearable devices. By combining contextual and direct evidence from the collected information, we were able to effectively identify the user's actual location. This study shows not only that a deanonymized user location can be identified, but also that cross-validation is possible even when dealing with modified GPS coordinates.
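A minimal sketch of the idea behind this kind of attack (a hypothetical illustration, not the authors' method): a GPS coordinate masked by truncation to two decimal places still confines the user to a cell roughly 1.1 km on a side, so intersecting that cell with auxiliary context recovered from other on-device logs (here, a made-up list of candidate venues) can re-identify the actual location.

```python
def mask(coord, digits=2):
    """Truncate a coordinate, as some devices do for 'anonymization'."""
    factor = 10 ** digits
    return int(coord * factor) / factor

def in_masked_cell(venue, masked_lat, masked_lon, digits=2):
    """Check whether a venue falls inside the cell implied by the masking."""
    step = 10 ** -digits
    lat, lon = venue
    return (masked_lat <= lat < masked_lat + step and
            masked_lon <= lon < masked_lon + step)

# True (secret) position and its masked form as stored in the log.
true_lat, true_lon = 37.56612, 126.97794
m_lat, m_lon = mask(true_lat), mask(true_lon)

# Candidate venues recovered from other logs on the device
# (hypothetical names and coordinates).
venues = {
    "cafe_a": (37.5702, 126.9831),
    "office_b": (37.5668, 126.9785),
    "gym_c": (37.5520, 126.9882),
}

# Only venues inside the ~1.1 km masked cell survive the intersection.
matches = [name for name, pos in venues.items()
           if in_masked_cell(pos, m_lat, m_lon)]
print(matches)
```

With enough such records over time, the intersection of cells typically shrinks to a single plausible location, which is the cross-validation effect the abstract describes.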

2021, Vol 2021, pp. 1-9
Sihua Sun

Audio scene recognition is a task that enables devices to understand their environment through digital audio analysis; it belongs to the field of computational auditory scene analysis. The technology is already widely used in intelligent wearable devices, robot sensing services, and other application scenarios. To explore the applicability of machine learning to digital audio scene recognition, an audio scene recognition method based on optimized audio processing and a convolutional neural network is proposed. First, unlike traditional feature extraction based on mel-frequency cepstral coefficients, the proposed method uses a binaural representation and harmonic-percussive source separation to optimize the original audio and extract the corresponding features, so that the system can exploit the spatial features of the scene and thereby improve recognition accuracy. Then, an audio scene recognition system with a two-layer convolution module is designed and implemented. In terms of network structure, we draw on the VGGNet architecture from image recognition to increase network depth and improve system flexibility. Experimental analysis shows that, compared with traditional machine learning methods, the proposed method greatly improves recognition accuracy for each scene and generalizes better to different data.
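The harmonic-percussive separation step mentioned above can be sketched with only NumPy (a toy illustration of the standard HPSS technique, not the paper's implementation): harmonic energy is smooth across time while percussive energy is smooth across frequency, so median-filtering a magnitude spectrogram in each direction and soft-masking separates the two components before they reach the network.

```python
import numpy as np

def median_filter_1d(x, size, axis):
    """Running median along one axis (edge-padded)."""
    pad = size // 2
    padded = np.pad(x, [(pad, pad) if a == axis else (0, 0)
                        for a in range(x.ndim)], mode="edge")
    windows = [np.take(padded, range(i, i + x.shape[axis]), axis=axis)
               for i in range(size)]
    return np.median(np.stack(windows), axis=0)

def hpss(spec, kernel=5):
    """Split a magnitude spectrogram (freq x time) into harmonic and
    percussive parts via directional median filtering and soft masks."""
    harm = median_filter_1d(spec, kernel, axis=1)   # smooth over time
    perc = median_filter_1d(spec, kernel, axis=0)   # smooth over frequency
    total = harm + perc + 1e-10
    return spec * (harm / total), spec * (perc / total)

# Toy spectrogram: one sustained tone (horizontal line) plus one
# broadband click (vertical line).
spec = np.zeros((32, 32))
spec[10, :] = 1.0   # harmonic component
spec[:, 20] = 1.0   # percussive component
h, p = hpss(spec)
# The tone's energy ends up mostly in h, the click's mostly in p.
print(h[10, 5] > p[10, 5], p[15, 20] > h[15, 20])
```

Libraries such as librosa provide the same decomposition ready-made; the point here is only that the "optimized audio processing" stage is a simple, deterministic transform applied ahead of the CNN.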

2021, Vol 27 (1)
Vinícius Ferreira Galvão, Cristiano Maciel, Roberto Pereira, Isabela Gasparini, José Viterbo

Abstract Intense social media interaction, wearable devices, mobile applications, and the pervasive use of sensors have created a personal information ecosystem that gathers traces of individual behavior. These traces are the digital legacy individuals build throughout their lives. Advances in artificial intelligence have fed the dream of building artificial agents, trained on these digital traces, that behave like a deceased person, and individuals now face the possibility of immortalizing their ideas, reasoning, and behavior. Are people prepared for that? Are people willing to do that? How do people perceive the possibility of letting digital avatars take care of their digital legacy? This paper sheds light on these questions by discussing users' perceptions of digital immortality through a focus group analysis with 8 participants. Our findings suggest that some key human values must be addressed. They can serve as preliminary input for system design, from the earliest stages of development, that preserves the digital legacy while respecting human needs and values around the delicate emotional moment that death brings. Based on a qualitative analysis of the data and the insights learned, we propose important considerations for future developments in this area.

10.2196/23359, 2021, Vol 10 (11), pp. e23359
Thomas Carlin, Julie Soulard, Timothée Aubourg, Johannes Knitza, Nicolas Vuillerme

Background: Axial spondyloarthritis (axSpA) is a subgroup of the inflammatory rheumatic diseases. Practicing regular exercise is critical to manage pain and stiffness, reduce disease activity, and improve physical functioning, spinal mobility, and cardiorespiratory function. Accordingly, monitoring physical activity and sedentary behavior in patients with axSpA is relevant for clinical outcomes and disease management.
Objective: This review aims to determine which wearable devices, assessment methods, and associated metrics are commonly used to quantify physical activity or sedentary behavior in patients with axSpA.
Methods: The PubMed, Physiotherapy Evidence Database (PEDro), and Cochrane electronic databases will be searched, with no limit on publication date, to identify all studies matching the inclusion criteria. Only original English-language articles published in peer-reviewed journals will be included. The search strategy will combine keywords related to the study population, wearable devices, physical activity, and sedentary behavior, joined with the Boolean operators "AND" and "OR", as well as Medical Subject Headings (MeSH) terms.
Results: The search strategy was completed in June 2020 and yielded 23 records. Data extraction and synthesis are currently ongoing. Dissemination of the study results in peer-reviewed journals is expected at the end of 2021.
Conclusions: This review will provide a comprehensive and detailed synthesis of published studies that examine the use of wearable devices for the objective assessment of physical activity and sedentary behavior in patients with axSpA.
Trial Registration: PROSPERO CRD42020182398; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=182398
International Registered Report Identifier (IRRID): PRR1-10.2196/23359
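The keyword-and-Boolean strategy described in the Methods can be illustrated with a hypothetical PubMed-style query; this is a sketch of the approach only, not the registered search string:

```
("axial spondyloarthritis" OR "ankylosing spondylitis" OR "Spondylarthritis"[MeSH])
AND (wearable* OR accelerometer* OR actigraphy OR "activity tracker")
AND ("physical activity" OR "sedentary behavior" OR "sedentary behaviour")
```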

2021
Liangliang Liu, Xin Yan

Abstract In recent years, capacitive flexible pressure sensors have been widely studied for electronic skin and wearable devices. Traditional capacitive pressure sensors have high production costs due to micro/nano machining techniques such as lithography. This paper presents a simple, transparent, flexible, array-compatible capacitive pressure sensor based on a PDMS/CNT composite electrode, fabricated without lithography. The sensitivity of the device was measured to be 0.0018 kPa^-1 over a detection range of 0-30 kPa. The sensor rapidly detects different pressures and remains stable after 100 load-unload tests.
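A short sketch of how a sensitivity figure of this kind is obtained (the readings below are made up, chosen only to reproduce the reported order of magnitude; they are not the paper's measurements). Sensitivity is the slope of relative capacitance change versus applied pressure, S = (ΔC / C0) / ΔP:

```python
def sensitivity(c0, c, pressure_kpa):
    """Relative capacitance change per unit pressure, in kPa^-1."""
    return ((c - c0) / c0) / pressure_kpa

# Hypothetical readings: 100 pF baseline, 101.8 pF under 10 kPa.
s = sensitivity(100.0, 101.8, 10.0)
print(round(s, 4))  # 0.0018 kPa^-1
```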

2021
Vikas Hasija, Erik G. Takhounts

Abstract Head kinematics information is very valuable as it is used to measure brain injury risk. Currently, head kinematics are measured using wearable devices or instrumentation mounted on the head. Such instrumentation and wearable devices can have errors due to faulty sensors and due to relative motion between the device and the respective body region. This paper proposes a novel method to predict head kinematics directly from videos, without any instrumentation, using a deep learning approach. To prove the concept, a deep learning model was developed to predict the time history of head angular velocities and their respective peaks using Finite Element (FE) based crash simulation data. The FE dataset was split into training, validation, and test sets. A combined Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) deep learning model was developed using the training and validation sets. The test (unseen) dataset was used to evaluate the predictive capability of the model. On the test dataset, the correlation coefficients between the actual and predicted peak angular velocities were 0.73, 0.85, and 0.92 for the X, Y, and Z components, respectively.
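The evaluation metric quoted above is the Pearson correlation coefficient between actual and predicted peak values. A small sketch with made-up numbers (illustrative only, not the study's data) shows how such a score is computed with NumPy:

```python
import numpy as np

# Hypothetical peak angular velocities (rad/s) for five test cases.
actual = np.array([12.1, 18.4, 25.0, 31.2, 40.5])
predicted = np.array([11.0, 20.1, 23.5, 33.0, 38.9])

# Pearson correlation: off-diagonal entry of the 2x2 correlation matrix.
r = np.corrcoef(actual, predicted)[0, 1]
print(round(r, 2))  # close to 1.0 when predictions track the actual peaks
```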

Siqi Jiang, Oliver Stange, Fynn Ole Bätcke, Sabina Sultanova, Lilia Sabantina

Smart clothing is the next evolutionary step in wearable devices. It integrates electronics and textiles to create functional, stylish, and comfortable solutions for people's daily needs. The concept covers not only clothing as a covering for the body but also garments that track body indicators in particular situations. This review introduces the classification and concept of smart clothing and its application areas, such as sports, workwear, healthcare, the military, and fashion. It also outlines the current state of smart clothing and the latest developments in the field, and discusses future developments and challenges.

2021, Vol 3
Julio Vega, Meng Li, Kwesi Aguillera, Nikunj Goel, Echhit Joshi

Smartphones and wearable devices are widely used in behavioral and clinical research to collect longitudinal data that, along with ground truth data, are used to create models of human behavior. Mobile sensing researchers often program data processing and analysis code from scratch, even though many research teams collect data from similar mobile sensors, platforms, and devices. This leads to significant inefficiency, as researchers cannot replicate and build on others' work, to inconsistent quality of code and results, and to a lack of transparency when code is not shared alongside publications. We provide an overview of the Reproducible Analysis Pipeline for Data Streams (RAPIDS), a reproducible pipeline to standardize the preprocessing, feature extraction, analysis, visualization, and reporting of data streams coming from mobile sensors. RAPIDS consists of R and Python scripts that are executed on top of reproducible virtual environments, orchestrated by a workflow management system, and organized following a consistent file structure for data science projects. We share open-source, documented, extensible, and tested code to preprocess, extract, and visualize behavioral features from data collected with any Android or iOS smartphone sensing app as well as Fitbit and Empatica wearable devices. RAPIDS allows researchers to process mobile sensor data in a rigorous and reproducible way. This saves time and effort during the data analysis phase of a project and facilitates sharing analysis workflows alongside publications.
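To make "behavioral feature extraction" concrete, here is a toy version of the kind of feature such a pipeline standardizes: total screen-on minutes per day, derived from raw timestamped screen events. The event format and function name are hypothetical illustrations, not RAPIDS' actual API.

```python
from datetime import datetime

# (timestamp, event) pairs as a sensing app might log them (hypothetical).
events = [
    ("2021-05-01 08:00:00", "on"),
    ("2021-05-01 08:12:00", "off"),
    ("2021-05-01 21:30:00", "on"),
    ("2021-05-01 21:48:00", "off"),
]

def screen_minutes_per_day(rows):
    """Pair consecutive on/off events and sum durations per calendar day."""
    totals, last_on = {}, None
    for stamp, event in rows:
        t = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
        if event == "on":
            last_on = t
        elif event == "off" and last_on is not None:
            day = last_on.date().isoformat()
            totals[day] = totals.get(day, 0) + (t - last_on).seconds // 60
            last_on = None
    return totals

print(screen_minutes_per_day(events))  # {'2021-05-01': 30}
```

Writing such features once, with tests and a fixed file layout, is precisely the duplication the pipeline is meant to eliminate across research teams.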
