Cigarette Smoking Detection with an Inertial Sensor and a Smart Lighter

Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 570 ◽  
Author(s):  
Volkan Senyurek ◽  
Masudul Imtiaz ◽  
Prajakta Belsare ◽  
Stephen Tiffany ◽  
Edward Sazonov

In recent years, a number of wearable approaches have been introduced for objective monitoring of cigarette smoking based on monitoring of hand gestures, breathing, or cigarette lighting events. However, non-reactive, objective, and accurate measurement of everyday cigarette consumption in the wild remains a challenge. This study utilizes a wearable sensor system (Personal Automatic Cigarette Tracker 2.0, PACT2.0) and proposes a method that integrates information from an instrumented lighter and a 6-axis inertial measurement unit (IMU) on the wrist for accurate detection of smoking events. The PACT2.0 was utilized in a study of 35 moderate-to-heavy smokers in both controlled (1.5–2 h) and unconstrained free-living conditions (~24 h). The collected dataset contained approximately 871 h of IMU data, 463 lighting events, and 443 cigarettes. The proposed method identified smoking events from the cigarette lighter data and estimated puff counts by detecting hand-to-mouth gestures (HMGs) in the IMU data with a Support Vector Machine (SVM) classifier. Leave-one-subject-out (LOSO) cross-validation on data from the controlled portion of the study achieved high accuracy and F1-scores for smoking event detection (97%/98%) and puff count estimation (93%/86%). Validation in free living showed 84.9% agreement with self-reported cigarettes. These results suggest that an IMU and an instrumented lighter may potentially be used in studies of smoking behavior under natural conditions.
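The evaluation scheme above can be sketched in code. This is a minimal illustration of LOSO cross-validation with an SVM, not the study's pipeline: the synthetic arrays stand in for the wrist-IMU gesture features, labels, and subject IDs.

```python
# Sketch of leave-one-subject-out (LOSO) cross-validation with an SVM.
# X, y, and groups are synthetic stand-ins: in the study, X would hold
# features of hand-to-mouth gesture windows from the wrist IMU, y the
# gesture labels, and groups the subject IDs.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, per_subject = 5, 40
X = rng.normal(size=(n_subjects * per_subject, 6))      # e.g. 6 IMU features
y = rng.integers(0, 2, size=n_subjects * per_subject)   # gesture labels
groups = np.repeat(np.arange(n_subjects), per_subject)  # subject IDs

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    # Each fold holds out every sample from one subject.
    clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(f"LOSO folds: {len(scores)}, mean accuracy: {np.mean(scores):.2f}")
```

Holding out whole subjects, rather than random samples, gives a fairer estimate of how the classifier generalizes to a person it has never seen.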

2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Luciano Brinck Peres ◽  
Bruno Coelho Calil ◽  
Ana Paula Sousa Paixão Barroso da Silva ◽  
Valdeci Carlos Dionísio ◽  
Marcus Fraga Vieira ◽  
...  

Abstract Background Parkinson’s disease (PD) is a neurological disease that affects the motor system. The associated motor symptoms are muscle rigidity or stiffness, bradykinesia, tremors, and gait disturbances. A correct diagnosis, especially in the initial stages, is fundamental to the quality of life of the individual with PD. However, the methods used for diagnosis of PD are still based on subjective criteria. The objective of this study is therefore to propose a method for discriminating individuals with PD (in the initial stages of the disease) from healthy individuals, based on inertial sensor recordings. Methods A total of 27 participants were selected: 15 individuals previously diagnosed with PD and 12 healthy individuals. Data collection was performed using inertial sensors positioned on the back of the hand and on the back of the forearm. Different numbers of features were used to compare the sensitivity, specificity, precision, and accuracy of the classifiers. For group classification, four classifiers were compared: Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Naive Bayes (NB). Results When all individuals with PD were analyzed, the best performance for sensitivity and accuracy (0.875 and 0.800, respectively) was found for the SVM classifier, fed with 20% and 10% of the features, respectively, while the best performance for specificity and precision (0.933 and 0.917, respectively) was associated with the RF classifier fed with 20% of all features. When only individuals with PD and score 1 on the Hoehn and Yahr scale (HY) were analyzed, the best performances for sensitivity, precision, and accuracy (0.933, 0.778, and 0.848, respectively) were from the SVM classifier, fed with 40% of all features, and the best result for specificity (0.800) came from the NB classifier, fed with 20% of all features.
Conclusion Across all individuals in this study with PD, the best classifier for the detection of PD (sensitivity) was the SVM fed with 20% of the features, and the best classifier for ruling out PD (specificity) was the RF classifier fed with 20% of the features. When analyzing individuals with PD and score HY = 1, the SVM classifier was superior in sensitivity, precision, and accuracy, and the NB classifier was superior in specificity. These results indicate that objective methods can be applied to help in the evaluation of PD.
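A comparison like the one above can be sketched as follows. This is an illustrative benchmark of the study's four classifier families on synthetic two-class data standing in for the PD vs. healthy feature vectors; the sensitivity and specificity values it prints say nothing about the paper's results.

```python
# Illustrative comparison of RF, SVM, KNN, and Naive Bayes classifiers,
# reporting sensitivity and specificity from the confusion matrix.
# The data are synthetic stand-ins for PD vs. healthy feature vectors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"RF": RandomForestClassifier(random_state=0), "SVM": SVC(),
          "KNN": KNeighborsClassifier(), "NB": GaussianNB()}
for name, model in models.items():
    yp = model.fit(Xtr, ytr).predict(Xte)
    tn, fp, fn, tp = confusion_matrix(yte, yp).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    print(f"{name}: sensitivity={sens:.3f}, specificity={spec:.3f}")
```

Reporting both metrics side by side, as the study does, makes the detection/rule-out trade-off between classifiers explicit.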


2020 ◽  
Vol 10 (12) ◽  
pp. 4213
Author(s):  
Anna Borowska-Terka ◽  
Pawel Strumillo

Numerous applications of human–machine interfaces, e.g., those dedicated to persons with disabilities, require contactless handling of devices or systems. The purpose of this research is to develop a hands-free, head-gesture-controlled interface that can support persons with disabilities in communicating with other people and devices, e.g., the paralyzed to signal messages or the visually impaired to handle travel aids. The hardware of the interface consists of a small stereovision rig with a built-in inertial measurement unit (IMU). The device is positioned on a user’s forehead. Two approaches to recognizing head movements were considered. In the first approach, for various time-window sizes of the signals recorded from a three-axis accelerometer and a three-axis gyroscope, statistical parameters were calculated, such as average, minimum and maximum amplitude, standard deviation, kurtosis, correlation coefficient, and signal energy. The second approach focused on direct analysis of the signal samples recorded from the IMU. In both approaches, the accuracies of 16 different data classifiers for distinguishing the head movements pitch, roll, yaw, and immobility were evaluated. Recordings of head gestures were collected from 65 individuals. The best results on the testing data were obtained with the non-parametric approach, i.e., direct classification of unprocessed samples of the IMU signals, using the Support Vector Machine (SVM) classifier (95% correct recognitions). Slightly worse results in this approach were obtained with the random forest classifier (93%). The achieved high recognition rates of the head gestures suggest that a person with a physical or sensory disability can efficiently communicate with other people or manage applications using simple head-gesture sequences.
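The first (feature-based) approach can be sketched as a per-window statistics extractor. The window size and the synthetic signal below are assumptions for illustration, not the study's parameters.

```python
# Sketch of windowed statistical feature extraction from a 6-channel IMU
# stream (3-axis accelerometer + 3-axis gyroscope). Window size and the
# synthetic signal are illustrative assumptions.
import numpy as np
from scipy.stats import kurtosis

def window_features(signal, win=50):
    """Return per-window stats (mean, min, max, std, kurtosis, energy)
    for each axis of an (n_samples, n_axes) IMU signal."""
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        feats.append(np.concatenate([
            w.mean(axis=0), w.min(axis=0), w.max(axis=0),
            w.std(axis=0), kurtosis(w, axis=0), (w ** 2).sum(axis=0)]))
    return np.array(feats)

rng = np.random.default_rng(1)
imu = rng.normal(size=(500, 6))   # 500 samples, 6 IMU channels
F = window_features(imu)
print(F.shape)                    # (10, 36): 10 windows x (6 stats x 6 axes)
```

Each row of `F` would then be one training example for the classifiers; the second approach in the paper skips this step and feeds raw samples directly.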


2019 ◽  
Vol 22 (10) ◽  
pp. 1883-1890 ◽  
Author(s):  
Masudul H Imtiaz ◽  
Delwar Hossain ◽  
Volkan Y Senyurek ◽  
Prajakta Belsare ◽  
Stephen Tiffany ◽  
...  

Abstract Introduction Wearable sensors may be used for the assessment of behavioral manifestations of cigarette smoking under natural conditions. This paper introduces a new camera-based sensor system to monitor smoking behavior. The goals of this study were (1) identification of the best position of sensor placement on the body and (2) feasibility evaluation of the sensor as a free-living smoking-monitoring tool. Methods A sensor system was developed with a 5 MP camera that captured images every second, continuously, for up to 26 hours. Five on-body locations were tested for the selection of sensor placement. A feasibility study was then performed on 10 smokers to monitor full-day smoking under free-living conditions. Captured images were manually annotated to obtain behavioral metrics of smoking, including smoking frequency, smoking environment, and puffs per cigarette. The smoking environment and puff counts captured by the camera were compared with self-reported smoking. Results A camera located on the eyeglass temple produced the maximum number of images of smoking and the fewest blurry or overexposed images (53.9%, 4.19%, and 0.93% of the total captured, respectively). Under free-living conditions, 286,245 images were captured, with a mean (±standard deviation) duration of sensor wear of 647 (±74) minutes/participant. Image annotation identified consumption of 5 (±2.3) cigarettes/participant, 3.1 (±1.1) cigarettes/participant indoors, 1.9 (±0.9) cigarettes/participant outdoors, and 9.02 (±2.5) puffs/cigarette. Statistical tests found significant differences between manual annotations and self-reported smoking environment or puff counts. Conclusions A wearable camera-based sensor may facilitate objective monitoring of cigarette smoking, categorization of smoking environments, and identification of behavioral metrics of smoking in free-living conditions.
Implications The proposed camera-based sensor system can be employed to examine cigarette smoking under free-living conditions. Smokers may accept this unobtrusive sensor for extended wear, as the sensor would not restrict the natural pattern of smoking or daily activities, nor would it require any active participation from a person except wearing it. Critical metrics of smoking behavior, such as the smoking environment and puff counts obtained from this sensor, may generate important information for smoking interventions.


2020 ◽  
Vol 143 (3) ◽  
Author(s):  
Michael J. Rose ◽  
Katherine A. McCollum ◽  
Michael T. Freehill ◽  
Stephen M. Cain

Abstract Overuse injuries in youth baseball players due to throwing are at an all-time high. Traditional methods of tracking player throwing load only count in-game pitches and therefore leave many throws unaccounted for. Miniature wearable inertial sensors can be used to capture motion data outside of the lab in a field setting. The objective of this study was to develop a protocol and algorithms to detect throws and classify throw intensity in youth baseball athletes using a single, upper arm-mounted inertial sensor. Eleven participants from a youth baseball team were recruited to participate in the study. Each participant was given an inertial measurement unit (IMU) and was instructed to wear the sensor during any baseball activity for the duration of a summer season of baseball. A throw identification algorithm was developed using data from a controlled data collection trial. In this report, we present the throw identification algorithm used to identify over 17,000 throws during the 2-month duration of the study. Data from a second controlled experiment were used to build a support vector machine model to classify throw intensity. Using this classification algorithm, throws from all participants were classified as being “low,” “medium,” or “high” intensity. The results demonstrate that there is value in using sensors to count every throw an athlete makes when assessing throwing load, not just in-game pitches.
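A throw-identification step like the one described can be sketched as peak detection on acceleration magnitude. The threshold, minimum peak spacing, sample rate, and synthetic signal below are assumptions for illustration, not the paper's algorithm parameters.

```python
# Minimal sketch of throw detection from an upper-arm IMU: candidate
# throws appear as large peaks in acceleration magnitude. The threshold,
# spacing, and synthetic signal are assumptions, not the study's values.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                  # assumed sample rate (Hz)
t = np.arange(0, 30, 1 / fs)
accel = np.random.default_rng(2).normal(0, 0.3, size=(len(t), 3)) + [0, 0, 1]
for t0 in (5, 12, 21):                    # inject three throw-like bursts
    accel[int(t0 * fs), :] += [8, 6, 4]

magnitude = np.linalg.norm(accel, axis=1)
# Require a high peak and at least 2 s between candidate throws.
peaks, _ = find_peaks(magnitude, height=5.0, distance=2 * fs)
print(f"throws detected: {len(peaks)}")
```

The windows of data around each detected peak could then be fed to an intensity classifier such as the support vector machine mentioned in the abstract.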


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4669
Author(s):  
Muhammad Awais ◽  
Lorenzo Chiari ◽  
Espen A. F. Ihlen ◽  
Jorunn L. Helbostad ◽  
Luca Palmerini

Physical activity has a strong influence on mental and physical health and is essential in healthy ageing and wellbeing for the ever-growing elderly population. Wearable sensors can provide a reliable and economical measure of activities of daily living (ADLs) by capturing movements through, e.g., accelerometers and gyroscopes. This study explores the potential of using classical machine learning and deep learning approaches to classify the most common ADLs: walking, sitting, standing, and lying. We validate the results on the ADAPT dataset, the most detailed dataset to date of inertial sensor data, synchronised with high frame-rate video labelled data recorded in a free-living environment from older adults living independently. The findings suggest that both approaches can accurately classify ADLs, showing high potential in profiling ADL patterns of the elderly population in free-living conditions. In particular, both long short-term memory (LSTM) networks and Support Vector Machines combined with ReliefF feature selection performed equally well, achieving around 97% F-score in profiling ADLs.
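The classical pipeline above, feature selection followed by an SVM, can be sketched as follows. ReliefF is not available in scikit-learn, so mutual-information ranking is used here as a stand-in selector; the data are synthetic placeholders for four-class ADL feature vectors, and the printed F-score is unrelated to the paper's 97%.

```python
# Sketch of a feature-selection + SVM pipeline for 4-class ADL
# classification (walking, sitting, standing, lying). Mutual information
# stands in for ReliefF; data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           n_classes=4, random_state=0)
# Keep the 10 most informative features, then classify with an SVM.
pipe = make_pipeline(SelectKBest(mutual_info_classif, k=10), SVC())
scores = cross_val_score(pipe, X, y, cv=5, scoring="f1_macro")
print(f"mean macro F-score: {scores.mean():.2f}")
```

Wrapping selection and classification in one pipeline ensures the selector is refitted inside each cross-validation fold, avoiding leakage from the test folds.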


10.2196/30135 ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. e30135
Author(s):  
Yu-Cheng Hsu ◽  
Hailiang Wang ◽  
Yang Zhao ◽  
Frank Chen ◽  
Kwok-Leung Tsui

Background Clinical mobility and balance assessments identify older adults who have a high risk of falls in clinics. In the past two decades, sensors have become a popular supplement to mobility and balance assessment, providing quantitative information and a cost-effective solution in the community environment. Nonetheless, current sensor-based balance assessment relies on manual observation or motion-specific features to identify the motions of research interest. Objective The objective of this study was to develop an automatic motion data analytics framework, using signal data collected from an inertial sensor, for balance activity analysis in community-dwelling older adults. Methods In total, 59 community-dwelling older adults (19 males and 40 females; mean age 81.86 years, SD 6.95 years) were recruited for this study. Data were collected using a body-worn inertial measurement unit (including an accelerometer and a gyroscope) at the L4 vertebra of each individual. After data preprocessing and motion detection via a convolutional long short-term memory (LSTM) neural network, a one-class support vector machine (SVM), linear discriminant analysis (LDA), and k-nearest neighbors (k-NN) were adopted to classify high-risk individuals. Results The framework developed in this study yielded mean accuracies of 87%, 86%, and 89% in detecting sit-to-stand, turning 360°, and stand-to-sit motions, respectively. The balance assessment classification showed accuracies of 90%, 92%, and 86% in classifying abnormal sit-to-stand, turning 360°, and stand-to-sit motions, respectively, using Tinetti Performance Oriented Mobility Assessment-Balance (POMA-B) criteria with the one-class SVM and k-NN. Conclusions The sensor-based approach presented in this study identifies and preprocesses the inertial signals in a time-efficient manner with little human effort, and thus enables an efficient balance assessment tool for medical professionals.
In the long run, the approach may offer a flexible solution to relieve the community’s burden of continuous health monitoring.
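The one-class SVM step described above can be sketched as follows: the model is fitted only on examples of one class (e.g. normal sit-to-stand motions) and flags deviating motions as abnormal. The feature vectors here are synthetic stand-ins, not the study's data.

```python
# Sketch of one-class SVM anomaly classification: fit on "normal" motion
# features only, then flag outliers. Features are synthetic stand-ins.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, size=(100, 4))    # features of normal motions
abnormal = rng.normal(6, 1, size=(10, 4))   # clearly shifted outliers

# nu bounds the fraction of training points treated as outliers.
clf = OneClassSVM(nu=0.1, gamma="scale").fit(normal)
pred = clf.predict(abnormal)                # +1 = inlier, -1 = outlier
print(f"flagged abnormal: {(pred == -1).sum()} of {len(pred)}")
```

One-class models suit this setting because abnormal balance motions are rare and heterogeneous, so collecting a representative labelled set of them is difficult.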


Electronics ◽  
2017 ◽  
Vol 6 (4) ◽  
pp. 104 ◽  
Author(s):  
Masudul Imtiaz ◽  
Raul Ramos-Garcia ◽  
Volkan Senyurek ◽  
Stephen Tiffany ◽  
Edward Sazonov

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 412
Author(s):  
Luigi Borzì ◽  
Ivan Mazzetta ◽  
Alessandro Zampogna ◽  
Antonio Suppa ◽  
Fernanda Irrera ◽  
...  

Background: Current telemedicine approaches lack standardised procedures for the remote assessment of axial impairment in Parkinson’s disease (PD). Unobtrusive wearable sensors may be a feasible tool to provide clinicians with practical medical indices reflecting axial dysfunction in PD. This study aims to predict the postural instability/gait difficulty (PIGD) score in PD patients by monitoring gait through a single inertial measurement unit (IMU) and machine-learning algorithms. Methods: Thirty-one PD patients underwent a 7-m timed-up-and-go test while monitored through an IMU placed on the thigh, both under (ON) and not under (OFF) dopaminergic therapy. After pre-processing procedures and feature selection, a support vector regression model was implemented to predict PIGD scores and to investigate the impact of L-Dopa and freezing of gait (FOG) on the regression models. Results: Specific time- and frequency-domain features correlated with PIGD scores. After optimizing the dimensionality-reduction methods and the model parameters, the regression algorithms performed differently in predicting PIGD in patients OFF and ON therapy (r = 0.79 and 0.75, RMSE = 0.19 and 0.20, respectively). Similarly, the regression models performed differently in patients with FOG, ON and OFF therapy (r = 0.71 and RMSE = 0.27; r = 0.83 and RMSE = 0.22, respectively) and in those without FOG, ON and OFF therapy (r = 0.85 and RMSE = 0.19; r = 0.79 and RMSE = 0.21, respectively). Conclusions: Optimized support vector regression models are feasible for predicting PIGD scores in PD. L-Dopa and FOG affect regression model performance. Overall, a single inertial sensor may help to remotely assess axial motor impairment in PD patients.
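The regression step can be sketched as follows: a support vector regression model maps gait features to a clinical score, evaluated with Pearson's r and RMSE as in the study. The features and target below are synthetic stand-ins, and the printed values are unrelated to the paper's results.

```python
# Sketch of support vector regression from gait features to a clinical
# score, evaluated with cross-validated r and RMSE. Data are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 8))                    # stand-in gait features
score = X[:, 0] * 0.5 + rng.normal(0, 0.2, 60)  # stand-in "PIGD" target

pred = cross_val_predict(SVR(kernel="rbf", C=1.0), X, score, cv=5)
r = np.corrcoef(score, pred)[0, 1]              # Pearson correlation
rmse = np.sqrt(np.mean((score - pred) ** 2))    # root-mean-square error
print(f"r={r:.2f}, RMSE={rmse:.2f}")
```

Reporting r and RMSE together, as the abstract does, separates how well the predictions track the score's ordering from how far off they are in absolute terms.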


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 4008 ◽  
Author(s):  
Henry Griffith ◽  
Yan Shi ◽  
Subir Biswas

Various sensors have been proposed to address the negative health ramifications of inadequate fluid consumption. Amongst these solutions, motion-based sensors estimate fluid intake using the characteristics of drinking kinematics. This sensing approach is complicated due to the mutual influence of both the drink volume and the current fill level on the resulting motion pattern, along with differences in biomechanics across individuals. While motion-based strategies are a promising approach due to the proliferation of inertial sensors, previous studies have been characterized by limited accuracy and substantial variability in performance across subjects. This research seeks to address these limitations for a container-attachable triaxial accelerometer sensor. Drink volume is computed using support vector machine regression models with hand-engineered features describing the container’s estimated inclination. Results are presented for a large-scale data collection consisting of 1908 drinks consumed from a refillable bottle by 84 individuals. Per-drink mean absolute percentage error is reduced by 11.05% versus previous state-of-the-art results for a single wrist-wearable inertial measurement unit (IMU) sensor assessed using a similar experimental protocol. Estimates of aggregate consumption are also improved versus previously reported results for an attachable sensor architecture. An alternative tracking approach using the fill level from which a drink is consumed is also explored herein. Fill level regression models are shown to exhibit improved accuracy and reduced inter-subject variability versus volume estimators. A technique for segmenting the entire drink motion sequence into transport and sip phases is also assessed, along with a multi-target framework for addressing the known interdependence of volume and fill level on the resulting drink motion signature.
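The per-drink error metric used above can be made concrete. Mean absolute percentage error (MAPE) averages the relative error of each volume estimate; the volumes below are invented for illustration only.

```python
# Sketch of the per-drink MAPE metric for volume estimation.
# The true/estimated volumes are illustrative, not the study's data.
import numpy as np

true_ml = np.array([50.0, 120.0, 80.0, 200.0])  # ground-truth volumes
est_ml = np.array([55.0, 110.0, 88.0, 190.0])   # model estimates

# Average relative error across drinks, expressed as a percentage.
mape = np.mean(np.abs(est_ml - true_ml) / true_ml) * 100
print(f"per-drink MAPE: {mape:.1f}%")
```

Because MAPE normalises each error by the true volume, small drinks and large drinks contribute on an equal footing, which matters when container fill levels vary widely.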


2019 ◽  
Vol 9 (20) ◽  
pp. 4397 ◽  
Author(s):  
Soad Almabdy ◽  
Lamiaa Elrefaei

Face recognition (FR) is the process through which people are identified using facial images. This technology is applied broadly in biometrics, information security, access control, law enforcement, smart cards, and surveillance. A facial recognition system is built in two steps: first, facial features are extracted; second, the resulting patterns are classified. Deep learning, specifically the convolutional neural network (CNN), has recently made commendable progress in FR technology. This paper investigates the performance of a pre-trained CNN with a multi-class support vector machine (SVM) classifier, and the performance of transfer learning using the AlexNet model, for classification. The study considers CNN architectures that have recorded the best outcomes in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in recent years, specifically AlexNet and ResNet-50. Recognition accuracy was used as the criterion for assessing performance. Improved classification rates were seen in comprehensive experiments on the ORL, GTAV face, Georgia Tech face, labelled faces in the wild (LFW), frontalized labeled faces in the wild (F_LFW), YouTube face, and FEI face datasets. The results showed that our model achieved higher accuracy than most state-of-the-art models, ranging from 94% to 100% across all databases, with improvements in recognition accuracy of up to 39%.
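The two-step pipeline described above can be sketched in code. To keep the example self-contained, a frozen random projection stands in for the pre-trained AlexNet/ResNet-50 feature extractor, and synthetic per-identity "face" vectors stand in for images; only the structure (fixed feature extraction, then multi-class SVM) mirrors the paper.

```python
# Sketch of the pre-trained-CNN + multi-class SVM pipeline. A frozen
# random projection stands in for the CNN feature extractor, and noisy
# per-identity templates stand in for face images.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_people, imgs_each, pix, emb = 10, 20, 256, 64
W = rng.normal(size=(pix, emb))       # frozen "CNN" projection weights

# Synthetic face data: one noisy template per identity.
templates = rng.normal(size=(n_people, pix))
images = (np.repeat(templates, imgs_each, axis=0)
          + rng.normal(0, 0.3, (n_people * imgs_each, pix)))
labels = np.repeat(np.arange(n_people), imgs_each)

features = np.tanh(images @ W)        # step 1: fixed feature extraction
Xtr, Xte, ytr, yte = train_test_split(features, labels, test_size=0.25,
                                      stratify=labels, random_state=0)
yp = SVC(kernel="linear").fit(Xtr, ytr).predict(Xte)  # step 2: SVM
print(f"accuracy: {(yp == yte).mean():.2f}")
```

In practice the projection would be replaced by penultimate-layer activations of a pre-trained network; the SVM stage is unchanged.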

