Counting Bites With Bits: Expert Workshop Addressing Calorie and Macronutrient Intake Monitoring

10.2196/14904 ◽  
2019 ◽  
Vol 21 (12) ◽  
pp. e14904 ◽  
Author(s):  
Nabil Alshurafa ◽  
Annie Wen Lin ◽  
Fengqing Zhu ◽  
Roozbeh Ghaffari ◽  
Josiah Hester ◽  
...  

Background Conventional diet assessment approaches such as the 24-hour self-reported recall are burdensome, suffer from recall bias, and are inaccurate in estimating energy intake. Wearable sensor technology, coupled with advanced algorithms, is increasingly showing promise in its ability to capture behaviors that provide useful information for estimating calorie and macronutrient intake. Objective This paper aimed to summarize current technological approaches to monitoring energy intake on the basis of expert opinion from a workshop panel and to make recommendations to advance technology and algorithms to improve estimation of energy intake. Methods A 1-day invitational workshop sponsored by the National Science Foundation was held at Northwestern University. A total of 30 participants, including population health researchers, engineers, and intervention developers, from 6 universities and the National Institutes of Health participated in a panel discussing the state of evidence with regard to monitoring calorie intake and eating behaviors. Results Calorie monitoring using technological approaches can be characterized into 3 domains: (1) image-based sensing (eg, wearable and smartphone-based cameras combined with machine learning algorithms); (2) eating action unit (EAU) sensors (eg, to measure feeding gestures and chewing rate); and (3) biochemical measures (eg, serum and plasma metabolite concentrations). We discussed how each domain functions, provided examples of promising solutions, and highlighted potential challenges and opportunities in each domain. Image-based sensing research requires improved ground truth (context and known information about the foods), accurate food image segmentation and recognition algorithms, and reliable methods of estimating portion size. Research in the EAU-based domain is limited by an incomplete understanding of when these systems (device and inference algorithm) succeed and fail, the need for privacy-protecting methods of capturing ground truth, and uncertainty in food categorization. Although biochemical sensing is an exciting novel technology, its challenges range from limited adaptability to environmental effects (eg, temperature change) and mechanical impact to instability of wearable sensor performance over time and single-use designs. Conclusions Conventional approaches to calorie monitoring rely predominantly on self-reports. These approaches can gain contextual information from the image-based and EAU-based domains, which can map automatically captured food images to a food database and detect proxies that correlate with food volume and caloric intake. Although the continued development of advanced machine learning techniques will improve the accuracy of such wearables, biochemical sensing offers electrochemical analysis of sweat using soft bioelectronics on human skin, enabling noninvasive measurement of chemical compounds that provide insight into the digestive and endocrine systems. Future computing-based research should focus on reducing the burden of wearable sensors, aligning data across multiple devices, automating methods of data annotation, increasing rigor in studying system acceptability, increasing battery lifetime, and rigorously testing the validity of the measures. Such research requires moving promising technological solutions from the controlled laboratory setting to the field.
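
To make the EAU idea concrete, the sketch below shows one hypothetical way a wrist-worn accelerometer stream could be windowed into simple statistical features and classified into feeding versus non-feeding gestures. The window size, feature set, classifier, and synthetic data are illustrative assumptions, not the workshop's recommended pipeline.

```python
# Hypothetical sketch of the EAU domain: classifying windows of wrist-worn
# accelerometer data as feeding vs. non-feeding gestures. Window length,
# features, classifier, and data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def window_features(signal, window=100, step=50):
    """Slice a tri-axial accelerometer stream into overlapping windows and
    compute simple per-axis statistics (mean, std, range) as features."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]                  # shape: (window, 3)
        feats.append(np.concatenate([w.mean(axis=0),
                                     w.std(axis=0),
                                     np.ptp(w, axis=0)]))
    return np.array(feats)

# Synthetic stand-in data: ~10 minutes of 3-axis acceleration at 10 Hz.
acc = rng.normal(size=(6000, 3))
X = window_features(acc)
# Real labels would come from annotated ground-truth video; random
# placeholders keep the sketch runnable end to end.
y = rng.integers(0, 2, size=len(X))                       # 1 = feeding gesture

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())            # chance-level here
```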


2016 ◽  
Author(s):  
Sohrab Saeb ◽  
Luca Lonini ◽  
Arun Jayaraman ◽  
David C. Mohr ◽  
Konrad P. Kording

Abstract The availability of smartphone and wearable sensor technology is leading to a rapid accumulation of human subject data, and machine learning is emerging as a technique to map that data into clinical predictions. As machine learning algorithms are increasingly used to support clinical decision making, it is important to reliably quantify their prediction accuracy. Cross-validation is the standard approach for evaluating the accuracy of such algorithms; however, several cross-validation methods exist and only some of them are statistically meaningful. Here we compared two popular cross-validation methods: record-wise and subject-wise. Using both a publicly available dataset and a simulation, we found that record-wise cross-validation often massively overestimates the prediction accuracy of the algorithms. We also found that this erroneous method is used by almost half of the retrieved studies that used accelerometers, wearable sensors, or smartphones to predict clinical outcomes. As we move towards an era of machine learning-based diagnosis and treatment, using proper methods to evaluate their accuracy is crucial, as erroneous results can mislead both clinicians and data scientists.
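
The record-wise versus subject-wise distinction is easy to reproduce. The minimal sketch below (not the paper's code) builds a synthetic dataset in which labels are determined entirely by subject identity, so a statistically meaningful evaluation should score near chance; record-wise cross-validation nevertheless reports near-perfect accuracy because records from the same subject leak into both training and test folds. scikit-learn's KFold and GroupKFold are assumed here.

```python
# Minimal sketch contrasting record-wise and subject-wise cross-validation
# on synthetic data with strong per-subject structure.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(42)
n_subjects, n_records = 20, 50

# Each subject gets a private feature offset, and labels are assigned per
# subject at random, so there is no true subject-independent signal.
subjects = np.repeat(np.arange(n_subjects), n_records)
offsets = rng.normal(scale=3.0, size=(n_subjects, 5))
labels_per_subject = rng.integers(0, 2, size=n_subjects)
X = rng.normal(size=(n_subjects * n_records, 5)) + offsets[subjects]
y = labels_per_subject[subjects]

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Record-wise CV: records from the same subject appear in both train and
# test folds, so the classifier can "memorize" subjects -> inflated score.
record_wise = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))

# Subject-wise CV: all records of a subject stay in the same fold.
subject_wise = cross_val_score(clf, X, y, cv=GroupKFold(5), groups=subjects)

print(f"record-wise:  {record_wise.mean():.2f}")   # close to 1.0
print(f"subject-wise: {subject_wise.mean():.2f}")  # close to 0.5 (chance)
```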


2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110135
Author(s):  
Florian Jaton

This theoretical paper considers the morality of machine learning algorithms and systems in the light of the biases that ground their correctness. It begins by presenting biases not as a priori negative entities but as contingent external referents—often gathered in benchmarked repositories called ground-truth datasets—that define what needs to be learned and allow for performance measures. I then argue that ground-truth datasets and their concomitant practices—that fundamentally involve establishing biases to enable learning procedures—can be described by their respective morality, here defined as the more or less accounted experience of hesitation when faced with what pragmatist philosopher William James called “genuine options”—that is, choices to be made in the heat of the moment that engage different possible futures. I then stress three constitutive dimensions of this pragmatist morality, as far as ground-truthing practices are concerned: (I) the definition of the problem to be solved (problematization), (II) the identification of the data to be collected and set up (databasing), and (III) the qualification of the targets to be learned (labeling). I finally suggest that this three-dimensional conceptual space can be used to map machine learning algorithmic projects in terms of the morality of their respective and constitutive ground-truthing practices. Such techno-moral graphs may, in turn, serve as equipment for greater governance of machine learning algorithms and systems.


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1029 ◽  
Author(s):  
Thomas Kundinger ◽  
Nikoletta Sofra ◽  
Andreas Riener

Drowsy driving imposes a high safety risk. Current systems often use driving behavior parameters to detect driver drowsiness. Increasing driving automation reduces the availability of these parameters and therefore the scope of such methods. Techniques that include physiological measurements seem to be an especially promising alternative. However, in a dynamic environment such as driving, only non- or minimally intrusive methods are accepted, and vibrations from the roadbed can degrade sensor readings. This work contributes to driver drowsiness detection with a machine learning approach applied solely to physiological data collected from a non-intrusive, retrofittable system in the form of a wrist-worn wearable sensor. To check accuracy and feasibility, results are compared with reference data from a medical-grade ECG device. A user study with 30 participants in a high-fidelity driving simulator was conducted. Several machine learning algorithms for binary classification were applied in user-dependent and user-independent tests. Results provide evidence that the non-intrusive setting achieves accuracy similar to that of the medical-grade device, with high accuracies (>92%) achieved especially in the user-dependent scenario. The proposed approach offers new possibilities for human–machine interaction in a car and especially for driver state monitoring in the field of automated driving.
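
As a rough illustration of the two evaluation schemes mentioned above, the sketch below (synthetic placeholder data, not the study's pipeline or feature set) scores a binary classifier both user-dependently (one model per participant, internal cross-validation) and user-independently (leave-one-subject-out).

```python
# Illustrative sketch: user-dependent vs. user-independent evaluation of a
# drowsiness classifier on placeholder per-participant physiological features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(1)
n_participants, n_windows = 30, 40

# Placeholder features per time window: e.g. mean HR, HR std, an HRV-like value.
participants = np.repeat(np.arange(n_participants), n_windows)
X = rng.normal(size=(n_participants * n_windows, 3))
y = rng.integers(0, 2, size=len(X))        # 1 = drowsy (synthetic labels)

clf = SVC(kernel="rbf")

# User-dependent: a separate model per participant, scored with internal CV.
dep_scores = [
    cross_val_score(clf, X[participants == p], y[participants == p], cv=4).mean()
    for p in range(n_participants)
]

# User-independent: one model, tested on participants never seen in training.
indep_scores = cross_val_score(
    clf, X, y, cv=LeaveOneGroupOut(), groups=participants
)

print(f"user-dependent mean accuracy:   {np.mean(dep_scores):.2f}")
print(f"user-independent mean accuracy: {indep_scores.mean():.2f}")
```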


2016 ◽  
Vol 15 (1) ◽  
pp. 59-63
Author(s):  
Morgan Stuart

Abstract Sports informatics and computer science in sport are perhaps the most exciting and fast-moving disciplines across all of sports science. The tremendous parallel growth in digital technology, non-invasive sensor devices, computer vision, and machine learning has empowered sports analytics in ways perhaps never seen before. This growth presents great challenges for new entrants and seasoned veterans of sports analytics alike. Keeping pace with new technological innovations requires a thorough and systematic understanding of many diverse topics, from computer programming and database design to machine learning algorithms and sensor technology. Nevertheless, however quickly the state-of-the-art technology changes, the foundational skills and knowledge of computer science in sport are lasting. Furthermore, resources for students and practitioners across this range of areas are scarce, and the newly released textbook Computer Science in Sport: Research and Practice, edited by Professor Arnold Baca, provides much of the foundation knowledge required for working in sports informatics. This is certainly a comprehensive text that will be a valuable resource for many readers.


2018 ◽  
Author(s):  
Christian Damgaard

Abstract In order to fit population ecological models, e.g. plant competition models, to new drone-aided image data, we need to develop statistical models that take into account the new type of measurement uncertainty introduced by machine-learning algorithms and quantify its importance for statistical inferences and ecological predictions. Here, it is proposed to quantify the uncertainty and bias of image-predicted plant taxonomy and abundance in a hierarchical statistical model that is linked to ground-truth data obtained by the pin-point method. It is critical that the error rate in the species identification process is minimized when the image data are fitted to the population ecological models, and several avenues for reaching this objective are discussed. The outlined method for statistically modeling known sources of uncertainty when applying machine-learning algorithms may be relevant for other applied scientific disciplines.
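
The paper's hierarchical model is not reproduced here, but the following toy sketch illustrates the underlying idea of propagating known classification error into an abundance estimate: an image-derived cover proportion is simulated with fixed sensitivity and specificity and then corrected by inverting the misclassification equation (a Rogan-Gladen-style adjustment). All numbers are invented for illustration.

```python
# Deliberately simple illustration (not the paper's hierarchical model):
# correcting an image-derived cover estimate for known classifier error rates.
import numpy as np

rng = np.random.default_rng(7)

true_cover = 0.30        # true pin-point presence probability of a species
sensitivity = 0.90       # P(classifier says present | truly present)
specificity = 0.85       # P(classifier says absent  | truly absent)
n_pins = 2000

# Simulate pin-point ground truth and the classifier's noisy observations.
truth = rng.random(n_pins) < true_cover
detect = np.where(truth, rng.random(n_pins) < sensitivity,
                         rng.random(n_pins) < (1 - specificity))

p_obs = detect.mean()
# Invert E[p_obs] = p*sens + (1 - p)*(1 - spec) to recover the true proportion.
p_corrected = (p_obs + specificity - 1) / (sensitivity + specificity - 1)

print(f"observed cover:  {p_obs:.3f}")       # biased by false positives
print(f"corrected cover: {p_corrected:.3f}") # close to 0.30
```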


SPE Journal ◽  
2020 ◽  
Vol 25 (05) ◽  
pp. 2778-2800 ◽  
Author(s):  
Harpreet Singh ◽  
Yongkoo Seol ◽  
Evgeniy M. Myshakin

Summary The application of specialized machine learning (ML) in petroleum engineering and geoscience is increasingly gaining attention in the development of rapid and efficient methods as a substitute for existing methods. Existing ML-based studies that use well logs have two inherent limitations. The first is that they start with one predefined combination of well logs, implicitly assuming that the chosen combination is poised to give the best predictive outcome, although the variation in accuracy obtained through different combinations of well logs can be substantial. The second is that most studies apply unsupervised learning (UL) to classification problems, where it underperforms nearly all supervised learning (SL) algorithms by a substantial margin. In this context, this study investigates a variety of UL and SL algorithms applied to multiple well-log combinations (WLCs) to automate the traditional workflow of well-log processing and classification, including an optimization step to achieve the best output. The workflow begins by processing the measured well logs, which includes developing different combinations of measured well logs and their physics-motivated augmentations, followed by removal of potential outliers from the input WLCs. Reservoir lithology with four different rock types is investigated using eight UL and seven SL algorithms in two case studies. The results from the two case studies are used to identify the optimal set of well logs and the ML algorithm that best matches the reservoir lithology to its ground truth. The workflow is demonstrated using two wells from two different reservoirs on the Alaska North Slope to distinguish four rock types along the well (brine-dominated sand, hydrate-dominated sand, shale, and others/mixed compositions). The results show that the automated workflow investigated in this study can discover the ground-truth lithology with up to 80% accuracy using UL and up to 90% accuracy using SL, using six routine well logs [Vp, Vs, ρb, ϕneut, Rt, gamma ray (GR)], a significant improvement over the less than 70% accuracy reported in the current state of the art.
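
A hedged sketch of the UL-versus-SL comparison is given below, using synthetic stand-ins for six routine well-log features and four rock classes rather than the Alaska North Slope data; clusters from KMeans are mapped to their majority class before scoring, and a random forest serves as the supervised baseline. None of this reproduces the paper's workflow or accuracies.

```python
# Hedged sketch (synthetic data): comparing an unsupervised and a supervised
# classifier on a 4-class lithology-style problem with six log-like features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from scipy.stats import mode

# Six features standing in for Vp, Vs, bulk density, neutron porosity,
# resistivity, and gamma ray; four classes standing in for rock types.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=5,
                           n_redundant=1, n_classes=4, n_clusters_per_class=1,
                           class_sep=1.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Unsupervised: cluster, then map each cluster to its majority true class
# (a common way to score clustering against labeled ground truth).
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_tr)
cluster_to_class = {c: mode(y_tr[km.labels_ == c], keepdims=False).mode
                    for c in range(4)}
ul_pred = np.array([cluster_to_class[c] for c in km.predict(X_te)])

# Supervised: a random forest trained on the labeled logs.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"unsupervised accuracy: {accuracy_score(y_te, ul_pred):.2f}")
print(f"supervised accuracy:   {accuracy_score(y_te, rf.predict(X_te)):.2f}")
```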


2021 ◽  
Author(s):  
Oliver Lindhiem ◽  
Mayank Goel ◽  
Sam Shaaban ◽  
Kristie Mak ◽  
Prerna Chikersal ◽  
...  

Although hyperactivity is a core symptom of ADHD, there are no objective measures that are widely used in clinical settings. We describe the development of a smartwatch application to measure hyperactivity in school-age children. The LemurDx prototype is a software system for smartwatches that uses wearable sensor technology and machine learning (ML) to measure hyperactivity, with the goal of differentiating children with ADHD combined presentation or predominantly hyperactive/impulsive presentation from children with typical levels of activity. In this pilot study, we recruited 30 children (ages 6-11) to wear the smartwatch with the LemurDx app for two days. Parents also provided activity labels for 30-minute intervals to help train the algorithm. Half the sample had ADHD combined presentation or predominantly hyperactive/impulsive presentation (n = 15) and half were healthy controls (n = 15). Results indicated high usability scores and an overall diagnostic accuracy of .89 (sensitivity = .93; specificity = .86) when the motion sensor output was paired with the activity labels, suggesting that state-of-the-art sensors and ML may provide a promising avenue for the objective measurement of hyperactivity.
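
For readers unfamiliar with the reported metrics, the short sketch below shows how accuracy, sensitivity, and specificity fall out of a binary confusion matrix; the per-child predictions are hypothetical and are not the study's actual results.

```python
# Generic illustration (not LemurDx code): computing diagnostic accuracy,
# sensitivity, and specificity from hypothetical per-child predictions.
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical outcomes: 1 = classified as ADHD combined or predominantly
# hyperactive/impulsive presentation, 0 = typical activity level.
y_true = np.array([1] * 15 + [0] * 15)
y_pred = np.array([1] * 14 + [0] * 1 + [0] * 13 + [1] * 2)  # invented errors

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f}")
```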

