Fatigue Monitoring Through Wearables: A State-of-the-Art Review

2021 ◽  
Vol 12 ◽  
Author(s):  
Neusa R. Adão Martins ◽  
Simon Annaheim ◽  
Christina M. Spengler ◽  
René M. Rossi

The objective measurement of fatigue is of critical relevance in areas such as occupational health and safety, as fatigue impairs cognitive and motor performance, thus reducing productivity and increasing the risk of injury. Wearable systems represent highly promising solutions for fatigue monitoring as they enable continuous, long-term monitoring of biomedical signals in unattended settings, with the required comfort and non-intrusiveness. This is a prerequisite for the development of accurate models for fatigue monitoring in real time. However, monitoring fatigue through wearable devices imposes unique challenges. To provide an overview of the current state of the art in monitoring variables associated with fatigue via wearables and to detect potential gaps and pitfalls in current knowledge, a systematic review was performed. The Scopus and PubMed databases were searched for articles published in English since 2015, having the terms “fatigue,” “drowsiness,” “vigilance,” or “alertness” in the title, and proposing wearable device-based systems for non-invasive fatigue quantification. Of the 612 retrieved articles, 60 satisfied the inclusion criteria. Included studies were mainly of short duration and conducted in laboratory settings. In general, researchers developed fatigue models based on motion (MOT), electroencephalogram (EEG), photoplethysmogram (PPG), electrocardiogram (ECG), galvanic skin response (GSR), electromyogram (EMG), skin temperature (Tsk), eye movement (EYE), and respiratory (RES) data acquired by wearable devices available in the market. Supervised machine learning models, and more specifically binary classification models, are predominant among the proposed fatigue quantification approaches. These models were considered to perform very well in detecting fatigue; however, little effort was made to ensure the use of high-quality data during model development. Together, the findings of this review reveal that methodological limitations have hindered the generalizability and real-world applicability of most of the proposed fatigue models. Considerably more work is needed to fully explore the potential of wearables for fatigue quantification as well as to better understand the relationship between fatigue and changes in physiological variables.
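To make the predominant approach concrete, the following minimal sketch shows a binary fatigue classifier trained on windowed wearable-signal features. It is an illustration only, not code from any reviewed study; the feature set, labels, and data are placeholder assumptions.

```python
# Minimal sketch (assumptions, not a reviewed study's code): binary fatigue
# classification from windowed wearable features such as mean heart rate,
# heart-rate variability, or activity counts extracted from ECG/PPG/MOT signals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))              # 500 signal windows x 6 placeholder features
y = rng.integers(0, 2, size=500)           # 0 = alert, 1 = fatigued (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```

With random placeholder labels the AUC hovers around 0.5; the point is only to show the binary-classification framing that dominates the reviewed literature.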

Atmosphere ◽  
2020 ◽  
Vol 11 (7) ◽  
pp. 701
Author(s):  
Bong-Chul Seo

This study describes a framework that provides qualitative weather information on winter precipitation types using a data-driven approach. The framework incorporates the data retrieved from weather radars and the numerical weather prediction (NWP) model to account for relevant precipitation microphysics. To enable multimodel-based ensemble classification, we selected six supervised machine learning models: k-nearest neighbors, logistic regression, support vector machine, decision tree, random forest, and multi-layer perceptron. Our model training and cross-validation results based on Monte Carlo Simulation (MCS) showed that all the models performed better than our baseline method, which applies two thresholds (surface temperature and atmospheric layer thickness) for binary classification (i.e., rain/snow). Among all six models, random forest presented the best classification results for the basic classes (rain, freezing rain, and snow) and the further refinement of the snow classes (light, moderate, and heavy). Our model evaluation, which uses an independent dataset not associated with model development and learning, led to classification performance consistent with that from the MCS analysis. Based on the visual inspection of the classification maps generated for an individual radar domain, we confirmed the improved classification capability of the developed models (e.g., random forest) compared to the baseline one in representing both spatial variability and continuity.
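For illustration, the sketch below contrasts a two-threshold rain/snow baseline of the kind described above with a random-forest classifier trained on radar and NWP predictors. The threshold values, feature set, and data are assumed placeholders, not the study's calibrated configuration.

```python
# Illustrative sketch only: a two-threshold rain/snow baseline versus a
# random-forest precipitation-type classifier. Thresholds and features are
# assumptions, not the paper's settings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def baseline_rain_snow(surface_temp_c, thickness_m, t_thresh=0.0, thick_thresh=5400.0):
    """Snow if both surface temperature and layer thickness fall below the
    (assumed) thresholds; otherwise rain."""
    snow = (surface_temp_c < t_thresh) & (thickness_m < thick_thresh)
    return np.where(snow, "snow", "rain")

print(baseline_rain_snow(np.array([-2.0, 1.5]), np.array([5300.0, 5500.0])))

# Hypothetical training data: radar/NWP predictors and multi-class labels
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))             # e.g., reflectivity, temperature, thickness, humidity, ...
y = rng.choice(["rain", "freezing_rain", "snow"], size=1000)
rf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X, y)
```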


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Matvey Ezhov ◽  
Maxim Gusarev ◽  
Maria Golitsyna ◽  
Julian M. Yates ◽  
Evgeny Kushnerev ◽  
...  

In this study, a novel AI system based on deep learning methods was evaluated to determine its real-time performance in CBCT imaging diagnosis of anatomical landmarks and pathologies, as well as its clinical effectiveness and safety when used by dentists in a clinical setting. The system consists of 5 modules: ROI-localization-module (segmentation of teeth and jaws), tooth-localization and numeration-module, periodontitis-module, caries-localization-module, and periapical-lesion-localization-module. These modules use CNNs based on state-of-the-art architectures. In total, 1346 CBCT scans were used to train the modules. After annotation and model development, the diagnostic capabilities of the Diagnocat AI system were tested in a clinical evaluation in which 24 dentists participated. Thirty CBCT scans were examined by two groups of dentists, where one group was aided by Diagnocat and the other was unaided. The overall sensitivity and specificity for the aided and unaided groups were calculated as an aggregate of all conditions. The sensitivity values for the aided and unaided groups were 0.8537 and 0.7672, while the specificity values were 0.9672 and 0.9616, respectively. There was a statistically significant difference between the groups (p = 0.032). This study showed that the proposed AI system significantly improved the diagnostic capabilities of dentists.
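As a point of reference, aggregate sensitivity and specificity of the kind reported above can be computed from pooled true/false positive and negative counts; the sketch below uses invented counts purely to show the arithmetic, not the study's data.

```python
# Sketch of aggregate sensitivity/specificity from pooled counts across all
# conditions; the counts are placeholders, not the study's results.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fn, tn, fp = 420, 72, 1180, 47        # hypothetical pooled counts
print(f"sensitivity = {sensitivity(tp, fn):.4f}, specificity = {specificity(tn, fp):.4f}")
```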


2021 ◽  
Vol 13 (9) ◽  
pp. 1623
Author(s):  
João E. Batista ◽  
Ana I. R. Cabral ◽  
Maria J. P. Vasconcelos ◽  
Leonardo Vanneschi ◽  
Sara Silva

Genetic programming (GP) is a powerful machine learning (ML) algorithm that can produce readable white-box models. Although successfully used for solving an array of problems in different scientific areas, GP is still not well known in the field of remote sensing. The M3GP algorithm, a variant of the standard GP algorithm, performs feature construction by evolving hyperfeatures from the original ones. In this work, we use the M3GP algorithm on several sets of satellite images over different countries to create hyperfeatures from satellite bands to improve the classification of land cover types. We add the evolved hyperfeatures to the reference datasets and observe a significant improvement in the performance of three state-of-the-art ML algorithms (decision trees, random forests, and XGBoost) on multiclass classifications and no significant effect on the binary classifications. We show that adding the M3GP hyperfeatures to the reference datasets brings better results than adding the well-known spectral indices NDVI, NDWI, and NBR. We also compare, on the binary classification problems, the performance of the M3GP hyperfeatures with that of hyperfeatures created by other feature construction methods, such as FFX and EFS.
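For context, the spectral indices used here as a comparison baseline are simple band ratios; the sketch below shows how NDVI, NDWI, and NBR could be computed from reflectance bands and appended as extra features. The band arrays are synthetic placeholders, not the study's imagery.

```python
# Illustrative sketch: computing the baseline spectral indices (NDVI, NDWI, NBR)
# from reflectance bands and appending them as extra features. Band values are
# synthetic placeholders.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    return (green - nir) / (green + nir)

def nbr(nir, swir):
    return (nir - swir) / (nir + swir)

rng = np.random.default_rng(2)
red, green, nir, swir = (rng.uniform(0.01, 0.6, size=1000) for _ in range(4))
X = np.column_stack([red, green, nir, swir])                     # original band features
X_aug = np.column_stack([X, ndvi(nir, red), ndwi(green, nir), nbr(nir, swir)])
```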


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1614
Author(s):  
Jonghun Jeong ◽  
Jong Sung Park ◽  
Hoeseok Yang

Recently, the necessity to run high-performance neural networks (NNs) has been increasing even in resource-constrained embedded systems such as wearable devices. However, due to the high computational and memory requirements of NN applications, it is typically infeasible to execute them on a single device. Instead, it has been proposed to run a single NN application cooperatively on top of multiple devices, a so-called distributed neural network. In a distributed neural network, the workload of a single big NN application is distributed over multiple tiny devices. While the computation overhead can effectively be alleviated by this approach, existing distributed NN techniques, such as MoDNN, still suffer from heavy inter-device traffic and vulnerability to communication failures. To eliminate such large communication overheads, a knowledge distillation based distributed NN, called Network of Neural Networks (NoNN), was proposed, which partitions the filters in the final convolutional layer of the original NN into multiple independent subsets and derives smaller NNs from each subset. However, NoNN also has limitations: the partitioning result may be unbalanced, and it considerably compromises the correlation between filters in the original NN, which may result in an unacceptable accuracy degradation in case of communication failure. In this paper, to overcome these issues, we propose to enhance the partitioning strategy of NoNN in two aspects. First, we enhance the redundancy of the filters that are used to derive multiple smaller NNs by means of averaging, to increase the immunity of the distributed NN to communication failure. Second, we propose a novel partitioning technique, modified from eigenvector-based partitioning, to preserve the correlation between filters as much as possible while keeping a consistent number of filters distributed to each device. Through extensive experiments with the CIFAR-100 (Canadian Institute For Advanced Research-100) dataset, it has been observed that the proposed approach maintains high inference accuracy (over 70%, a 1.53× improvement over the state-of-the-art approach), on average, even when half of the eight devices in a distributed NN fail to deliver their partial inference results.
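To give a flavor of the eigenvector-based idea, the sketch below partitions final-layer filters into equally sized groups by ordering them along the second-smallest eigenvector of a graph Laplacian built from a filter-similarity matrix. It is a simplified illustration under assumed inputs, not the paper's modified partitioning algorithm.

```python
# Rough sketch (assumptions only) of spectral, size-balanced filter partitioning.
import numpy as np

def balanced_spectral_partition(similarity, n_parts):
    """Order filters along the second-smallest eigenvector of the graph
    Laplacian built from a symmetric filter-similarity matrix, then slice the
    ordering into equally sized contiguous groups (one per device)."""
    laplacian = np.diag(similarity.sum(axis=1)) - similarity
    _, eigvecs = np.linalg.eigh(laplacian)       # eigenvectors in ascending eigenvalue order
    order = np.argsort(eigvecs[:, 1])            # Fiedler-vector ordering
    return np.array_split(order, n_parts)        # balanced group sizes

# Hypothetical similarity matrix for 64 final-layer filters split over 8 devices
rng = np.random.default_rng(3)
A = rng.random((64, 64))
A = (A + A.T) / 2.0
np.fill_diagonal(A, 0.0)
groups = balanced_spectral_partition(A, n_parts=8)
print([len(g) for g in groups])                  # 8 groups of 8 filters each
```

Slicing the spectral ordering into contiguous chunks keeps the group sizes balanced, which is the property the proposed method aims to preserve alongside filter correlation.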


2021 ◽  
Vol 13 (2) ◽  
pp. 723
Author(s):  
Antti Kurvinen ◽  
Arto Saari ◽  
Juhani Heljo ◽  
Eero Nippala

It is widely agreed that the dynamics of building stocks are relatively poorly known, even though the topic is recognized as an important one. A better understanding of building stock dynamics and future development is crucial, e.g., for sustainable management of the built environment, as various analyses require long-term projections of building stock development. Recognizing the uncertainty inherent in long-term modeling, we propose a transparent, calculation-based QuantiSTOCK model for modeling building stock development. Our approach not only provides a tangible tool for understanding development when the selected assumptions are valid but also, most importantly, allows for studying the sensitivity of results to alternative developments of the key variables. Therefore, this relatively simple modeling approach provides fruitful grounds for understanding the impact of different key variables, which is needed to facilitate meaningful debate on different housing, land use, and environment-related policies. The QuantiSTOCK model may be extended in numerous ways and lays the groundwork for modeling the future development of building stocks. The presented model may be used in a wide range of analyses, ranging from assessing housing demand at the regional level to providing input for defining sustainable pathways towards climate targets. Due to the availability of high-quality data, the Finnish building stock provided a great test arena for the model development.
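As a toy illustration of the calculation-based, sensitivity-oriented spirit described above, the sketch below projects a building stock forward under constant new-construction and demolition assumptions and compares two demolition-rate scenarios. The numbers are arbitrary assumptions, not QuantiSTOCK parameters.

```python
# Minimal sketch of a transparent stock projection; rates and horizon are
# illustrative assumptions, not QuantiSTOCK inputs.
def project_stock(initial_stock_m2, new_construction_m2, demolition_rate, years):
    """Project floor area forward with constant annual new construction and a
    constant demolition rate; returns the stock for each projected year."""
    stock, series = initial_stock_m2, []
    for _ in range(years):
        stock = stock * (1.0 - demolition_rate) + new_construction_m2
        series.append(stock)
    return series

# Sensitivity to one key variable: compare two assumed demolition rates
low = project_stock(500e6, 3e6, 0.002, years=30)
high = project_stock(500e6, 3e6, 0.006, years=30)
print(f"stock after 30 years: low-demolition {low[-1]:.3e} m2, high-demolition {high[-1]:.3e} m2")
```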


Author(s):  
Sebastian Hoppe Nesgaard Jensen ◽  
Mads Emil Brix Doest ◽  
Henrik Aanæs ◽  
Alessio Del Bue

Non-rigid structure from motion (nrsfm) is a long-standing and central problem in computer vision, and its solution is necessary for obtaining 3D information from multiple images when the scene is dynamic. A main issue regarding the further development of this important computer vision topic is the lack of high-quality data sets. We address this issue by presenting a data set created for this purpose, which is made publicly available and is considerably larger than the previous state of the art. To validate the applicability of this data set, and to provide an investigation into the state of the art of nrsfm, including potential directions forward, we present a benchmark and a scrupulous evaluation using this data set. This benchmark evaluates 18 different methods with available code that reasonably span the state of the art in sparse nrsfm. This new public data set and evaluation protocol will provide benchmark tools for further development in this challenging field.
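A benchmark of this kind typically scores reconstructions with a per-frame 3D error after rigid alignment to the ground truth; the sketch below shows one common way such a metric could be computed (orthogonal Procrustes alignment). This is offered as an assumption about typical practice, not the data set's official protocol.

```python
# Hedged sketch of a typical sparse-nrsfm evaluation step: mean 3D error of a
# reconstructed point set after per-frame rigid alignment to the ground truth.
import numpy as np

def aligned_error(recon, gt):
    """recon, gt: (N, 3) point sets for one frame. Align recon to gt with an
    orthogonal Procrustes rotation (after centering) and return the mean error."""
    rc, gc = recon - recon.mean(0), gt - gt.mean(0)
    u, _, vt = np.linalg.svd(rc.T @ gc)
    if np.linalg.det(u @ vt) < 0:                # avoid improper rotations (reflections)
        u[:, -1] *= -1
    r = u @ vt
    return np.linalg.norm(rc @ r - gc, axis=1).mean()
```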


2020 ◽  
pp. 1-21 ◽  
Author(s):  
Clément Dalloux ◽  
Vincent Claveau ◽  
Natalia Grabar ◽  
Lucas Emanuel Silva Oliveira ◽  
Claudia Maria Cabral Moro ◽  
...  

Automatic detection of negated content is often a prerequisite in information extraction systems in various domains. This task is especially important in the biomedical domain, where negation plays a prominent role. In this work, two main contributions are proposed. First, we work with languages which have been poorly addressed up to now: Brazilian Portuguese and French. We therefore developed new corpora for these two languages which have been manually annotated to mark up the negation cues and their scope. Second, we propose automatic methods based on supervised machine learning approaches for detecting negation cues and their scopes. The methods prove to be robust in both languages (Brazilian Portuguese and French) and in cross-domain (general and biomedical language) contexts. The approach is also validated on English data from the state of the art: it yields very good results and outperforms other existing approaches. In addition, the application is accessible and usable online. We expect that these contributions (new annotated corpora, an application accessible online, and cross-domain robustness) will improve the reproducibility of the results and the robustness of NLP applications.
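One simplified way to frame supervised negation-cue detection is as per-token classification; the toy sketch below illustrates that framing with minimal contextual features. The tokens, labels, and feature set are invented for illustration and are far simpler than the models proposed in the paper.

```python
# Toy sketch (assumptions only): negation-cue detection as per-token
# classification with simple contextual features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens, i):
    return {
        "word": tokens[i].lower(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# Hypothetical annotated sentences: each token labeled as a cue or not
sentences = [(["no", "sign", "of", "fracture"], ["CUE", "O", "O", "O"]),
             (["patient", "denies", "pain"], ["O", "CUE", "O"])]
X = [token_features(toks, i) for toks, _ in sentences for i in range(len(toks))]
y = [lab for _, labs in sentences for lab in labs]
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000)).fit(X, y)
```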


Minerals ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 621
Author(s):  
Elaheh Talebi ◽  
W. Pratt Rogers ◽  
Tyler Morgan ◽  
Frank A. Drews

Mine workers operate heavy equipment while experiencing varying psychological and physiological impacts caused by fatigue. These impacts vary in scope and severity across operators and unique mine operations. Previous studies show the impact of fatigue on individuals, raising substantial concerns about operational safety. Unfortunately, while data exist to illustrate the risks, the mechanisms and complex pattern of contributors to fatigue are not sufficiently understood, illustrating the need for new methods to model and manage the severity of fatigue’s impact on performance and safety. Modern technology and computational intelligence can provide tools to improve practitioners’ understanding of workforce fatigue. Many mines have invested in fatigue monitoring technology (PERCLOS, EEG caps, etc.) as part of their health and safety control systems. Unfortunately, these systems provide “lagging indicators” of fatigue and, in many instances, deliver fatigue alerts too late in the worker fatigue cycle. Thus, the following question arises: can other operational technology systems provide leading indicators that managers and front-line supervisors can use to help their operators cope with fatigue? This paper explores common data sets available at most modern mines and how these operational data sets can be used to model fatigue. The available data sets include operational, health and safety, equipment health, fatigue monitoring, and weather data. A machine learning (ML) algorithm is presented as a tool to process and model complex issues such as fatigue, and ML is used in this study to identify potential leading indicators that can help management make better decisions. Initial findings confirm existing knowledge tying fatigue to time of day and hours worked. These are first-generation models, and future models will be forthcoming.
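As a sketch of the general workflow rather than of the models built in this study, the example below trains a classifier on synthetic operational features and ranks feature importances as candidate leading indicators (e.g., hour of day and hours worked). All feature names, labels, and data are placeholder assumptions.

```python
# Hedged sketch: classifier on synthetic operational features, with feature
# importances inspected as candidate leading indicators of fatigue.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
feature_names = ["hour_of_day", "hours_worked", "payload_variance", "brake_events", "ambient_temp"]
X = rng.normal(size=(2000, len(feature_names)))
# Toy fatigue label loosely driven by hour of day and hours worked
y = (0.6 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=2000) > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
for name, imp in sorted(zip(feature_names, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```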

