Enhanced PEMS Performance and Regulatory Compliance through Machine Learning

2018 ◽  
Vol 3 (4) ◽  
pp. 329
Author(s):  
Gregorio Ciarlo ◽  
Daniele Angelosante ◽  
Marco Guerriero ◽  
Giorgio Saldarini ◽  
Nunzio Bonavita

Modeling technologies can provide strong support to existing emission management systems through what is known as a Predictive Emission Monitoring System (PEMS). These systems do not measure emissions with any hardware device; instead, they use computer models to predict emission concentrations on the basis of process data (e.g., fuel flow, load) and ambient parameters (e.g., air temperature, relative humidity). They represent a relevant application arena for so-called Inferential Sensor technology, which has quickly proved invaluable in modern process automation and optimization strategies (Qin et al., 1997; Kadlec et al., 2009). While many applications demonstrate that software systems provide accuracy comparable to that of hardware-based Continuous Emission Monitoring Systems (CEMS), virtual analyzers offer additional features and capabilities that are often not properly considered by end users. Depending on local regulations and constraints, PEMS can be exploited either as the primary source of emission monitoring or as a back-up of hardware-based CEMS, able to validate analyzers' readings and extend their service factor. PEMS consistency (and therefore its acceptance by environmental authorities) is directly linked to the accuracy and reliability of each parameter used as a model input. While environmental authorities are steadily opening up to PEMS, it is easy to foresee that wider recognition and acceptance will be driven by extending PEMS robustness in the face of possible sensor failures. Providing reliable instrument fail-over procedures is the main objective of Sensor Validation (SV) strategies. In this work, the capabilities of a class of machine learning algorithms are presented, with results based on tests performed on actual field data gathered at a fluid catalytic cracking unit.
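A minimal sketch of the inferential-sensor idea behind a PEMS, assuming hypothetical process inputs (fuel flow, load, ambient temperature, relative humidity) and a generic regression model trained against a synthetic emission signal; the feature names, units, and model choice are illustrative assumptions, not the system described in the paper.

```python
# Hypothetical PEMS sketch: predict an emission concentration from process and
# ambient parameters instead of measuring it with a hardware analyzer.
# Feature names, units, and the model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(5, 15, n),    # fuel flow [t/h]
    rng.uniform(40, 100, n),  # unit load [%]
    rng.uniform(-5, 35, n),   # ambient temperature [degC]
    rng.uniform(20, 90, n),   # relative humidity [%]
])
# Synthetic emission signal with noise, standing in for CEMS reference data.
y = 30 + 4.0 * X[:, 0] + 0.8 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pems = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out data:", round(pems.score(X_test, y_test), 3))
```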

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1029 ◽  
Author(s):  
Thomas Kundinger ◽  
Nikoletta Sofra ◽  
Andreas Riener

Drowsy driving imposes a high safety risk. Current systems often use driving behavior parameters for driver drowsiness detection. Increasing driving automation reduces the availability of these parameters and therefore the scope of such methods. Techniques that include physiological measurements seem to be a promising alternative. However, in a dynamic environment such as driving, only non- or minimally intrusive methods are accepted, and vibrations from the roadbed could degrade sensor readings. This work contributes to driver drowsiness detection with a machine learning approach applied solely to physiological data collected from a non-intrusive, retrofittable system in the form of a wrist-worn wearable sensor. To check accuracy and feasibility, results are compared with reference data from a medical-grade ECG device. A user study with 30 participants in a high-fidelity driving simulator was conducted. Several machine learning algorithms for binary classification were applied in user-dependent and user-independent tests. Results provide evidence that the non-intrusive setting achieves an accuracy similar to that of the medical-grade device, and high accuracies (>92%) could be achieved, especially in the user-dependent scenario. The proposed approach offers new possibilities for human–machine interaction in the car and especially for driver state monitoring in the field of automated driving.
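A hedged sketch of the two evaluation settings mentioned above (user-dependent and user-independent), using synthetic heart-rate-derived features and a generic classifier; the features, labels, and model are assumptions for illustration only.

```python
# Illustrative sketch of user-dependent vs. user-independent evaluation on
# synthetic physiological features (not the study's data or code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_windows = 30, 60
subjects = np.repeat(np.arange(n_subjects), n_windows)
# Two hypothetical features per time window: mean heart rate and an HRV measure.
X = rng.normal(size=(n_subjects * n_windows, 2))
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.5, len(subjects)) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# User-independent: every test fold contains only unseen subjects.
indep = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print("user-independent accuracy:", round(indep.mean(), 3))

# User-dependent: training and test windows come from the same subject.
dep = []
for s in range(n_subjects):
    Xs, ys = X[subjects == s], y[subjects == s]
    dep.append(cross_val_score(clf, Xs, ys, cv=5).mean())
print("user-dependent accuracy:", round(np.mean(dep), 3))
```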


2016 ◽  
Vol 15 (1) ◽  
pp. 59-63
Author(s):  
Morgan Stuart

Abstract Sports informatics and computer science in sport are perhaps the most exciting and fast-moving disciplines across all of sports science. The tremendous parallel growth in digital technology, non-invasive sensor devices, computer vision and machine learning has empowered sports analytics in ways perhaps never seen before. This growth presents great challenges for new entrants and seasoned veterans of sports analytics alike. Keeping pace with new technological innovations requires a thorough and systematic understanding of many diverse topics, from computer programming and database design to machine learning algorithms and sensor technology. Nevertheless, as quickly as state-of-the-art technology changes, the foundational skills and knowledge of computer science in sport are lasting. Furthermore, resources for students and practitioners across this range of areas are scarce, and the newly released textbook Computer Science in Sport: Research and Practice, edited by Professor Arnold Baca, provides much of the foundation knowledge required for working in sports informatics. This is certainly a comprehensive text that will be a valuable resource for many readers.


2020 ◽  
Vol 12 (21) ◽  
pp. 3511
Author(s):  
Roghieh Eskandari ◽  
Masoud Mahdianpari ◽  
Fariba Mohammadimanesh ◽  
Bahram Salehi ◽  
Brian Brisco ◽  
...  

Unmanned Aerial Vehicle (UAV) imaging systems have recently gained significant attention from researchers and practitioners as a cost-effective means for agro-environmental applications. In particular, machine learning algorithms have been applied to UAV-based remote sensing data to enhance UAV capabilities across various applications. This systematic review presents a statistical meta-analysis of studies applying UAV imagery together with machine learning algorithms in agro-environmental monitoring. For this purpose, a total of 163 peer-reviewed articles published in 13 high-impact remote sensing journals over the past 20 years were reviewed, focusing on several features, including study area, application, sensor type, platform type, and spatial resolution. The meta-analysis revealed that 62% and 38% of the studies applied regression and classification models, respectively. Visible-band sensors were the most frequently used sensor type and yielded the highest overall accuracy among classification articles. Regarding regression models, linear regression and random forest were the most frequently applied models in UAV remote sensing imagery processing. Finally, the results of this study confirm that applying machine learning approaches to UAV imagery produces fast and reliable results. Agriculture, forestry, and grassland mapping were found to be the top three UAV applications in this review, accounting for 42%, 22%, and 8% of the studies, respectively.
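As an illustration of the most frequently reported regression setup, the sketch below fits a random forest to hypothetical UAV-derived features; the feature names, target variable, and values are assumptions and do not come from any reviewed study.

```python
# Illustrative sketch only: random forest regression on hypothetical UAV-derived
# features (a vegetation index and photogrammetric canopy height) to estimate a
# crop variable such as biomass. All names and values are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 500
veg_index = rng.uniform(0.2, 0.9, n)       # per-plot vegetation index
canopy_height = rng.uniform(0.1, 1.5, n)   # from UAV photogrammetry [m]
X = np.column_stack([veg_index, canopy_height])
biomass = 2.0 * veg_index + 1.5 * canopy_height + rng.normal(0, 0.2, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, biomass, cv=5, scoring="r2")
print("cross-validated R^2:", round(scores.mean(), 3))
```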


2021 ◽  
Vol 2 (2) ◽  
Author(s):  
Niclas Ståhl ◽  
Gunnar Mathiason ◽  
Dellainey Alcacoas

Abstract Basic oxygen steel making is a complex chemical and physical industrial process that converts a mix of pig iron and recycled scrap into low-carbon steel. Good understanding of the process and the ability to predict how it will evolve require long operator experience, but this can be augmented with process target prediction systems. Such systems may use machine learning to learn a model of the process from a long process history, and they have the advantage of being able to use vastly more process parameters than operators can comprehend. While it has become less of a challenge to build such prediction systems using machine learning algorithms, actual production implementations are rare. The hidden reasoning of complex prediction models and their lack of transparency prevent operator trust, even for models with highly accurate predictions. To express model behaviour and thereby increase transparency, we develop a reinforcement learning (RL) based agent whose task is to generate short polynomials that can explain the model of the process from what it has learnt from process data. The RL agent is rewarded for how well its generated polynomials predict the process from a smaller subset of the process parameters. Agent training is done with the REINFORCE algorithm, which enables the sampling of multiple concurrently plausible polynomials. Having multiple polynomials, process developers can evaluate several alternative and plausible explanations, as observed in the historic process data. The presented approach yields both a trained generative model and a set of polynomials that can explain the process. The performance of the polynomials is as good as or better than that of more complex and less interpretable models. Further, the relative simplicity of the resulting polynomials allows good generalisation to new instances of data. The best of the resulting polynomials in our evaluation achieves a better $R^2$ score on the test set than the other machine learning models evaluated.
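A minimal sketch of the general idea under stated assumptions (a synthetic process and candidate terms chosen for illustration, not the paper's implementation): a REINFORCE-style agent samples which polynomial terms to include, each sampled polynomial is fitted by least squares, and the reward trades prediction quality against polynomial length.

```python
# Hedged sketch only: a REINFORCE agent learns inclusion probabilities over
# candidate polynomial terms; each sampled term set is fitted by least squares
# and rewarded by R^2 minus a length penalty, favouring short explanations.
import numpy as np

rng = np.random.default_rng(3)
n, p = 400, 6
X = rng.normal(size=(n, p))                       # stand-in process parameters
y = 1.5 * X[:, 0] - 2.0 * X[:, 2] * X[:, 3] + rng.normal(0, 0.1, n)

# Candidate terms: all single parameters and all pairwise products.
terms = [(i,) for i in range(p)] + [(i, j) for i in range(p) for j in range(i, p)]
features = np.column_stack([X[:, list(t)].prod(axis=1) for t in terms])

logits = np.zeros(len(terms))                     # agent's inclusion logits
baseline, lr, penalty = 0.0, 0.5, 0.02

def r2(mask):
    cols = features[:, mask.astype(bool)]
    if cols.shape[1] == 0:
        return 0.0
    A = np.column_stack([cols, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

for step in range(300):
    probs = 1.0 / (1.0 + np.exp(-logits))
    mask = (rng.random(len(terms)) < probs).astype(float)
    reward = r2(mask) - penalty * mask.sum()      # accuracy minus length penalty
    grad = (mask - probs) * (reward - baseline)   # REINFORCE for Bernoulli terms
    logits += lr * grad
    baseline = 0.9 * baseline + 0.1 * reward

chosen = [terms[i] for i in np.flatnonzero(logits > 0)]
print("selected polynomial terms (by parameter index):", chosen)
```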


10.2196/14904 ◽  
2019 ◽  
Vol 21 (12) ◽  
pp. e14904 ◽  
Author(s):  
Nabil Alshurafa ◽  
Annie Wen Lin ◽  
Fengqing Zhu ◽  
Roozbeh Ghaffari ◽  
Josiah Hester ◽  
...  

Background Conventional diet assessment approaches such as the 24-hour self-reported recall are burdensome, suffer from recall bias, and are inaccurate in estimating energy intake. Wearable sensor technology, coupled with advanced algorithms, is increasingly showing promise in its ability to capture behaviors that provide useful information for estimating calorie and macronutrient intake. Objective This paper aimed to summarize current technological approaches to monitoring energy intake on the basis of expert opinion from a workshop panel and to make recommendations to advance technology and algorithms to improve estimation of energy expenditure. Methods A 1-day invitational workshop sponsored by the National Science Foundation was held at Northwestern University. A total of 30 participants, including population health researchers, engineers, and intervention developers, from 6 universities and the National Institutes of Health participated in a panel discussing the state of evidence with regard to monitoring calorie intake and eating behaviors. Results Calorie monitoring using technological approaches can be characterized into 3 domains: (1) image-based sensing (eg, wearable and smartphone-based cameras combined with machine learning algorithms); (2) eating action unit (EAU) sensors (eg, to measure feeding gesture and chewing rate); and (3) biochemical measures (eg, serum and plasma metabolite concentrations). We discussed how each domain functions, provided examples of promising solutions, and highlighted potential challenges and opportunities in each domain. Image-based sensor research requires improved ground truth (context and known information about the foods), accurate food image segmentation and recognition algorithms, and reliable methods of estimating portion size. EAU-based research is limited by an incomplete understanding of when these systems (device and inference algorithm) succeed and fail, the need for privacy-protecting methods of capturing ground truth, and uncertainty in food categorization. Although biochemical sensing is an exciting novel technology, its challenges range from a lack of adaptability to environmental effects (eg, temperature change) and mechanical impact, to instability of wearable sensor performance over time and single-use design. Conclusions Conventional approaches to calorie monitoring rely predominantly on self-reports. These approaches can gain contextual information from the image-based and EAU-based domains, which can map automatically captured food images to a food database and detect proxies that correlate with food volume and caloric intake. Although the continued development of advanced machine learning techniques will advance the accuracy of such wearables, biochemical sensing provides an electrochemical analysis of sweat using soft bioelectronics on human skin, enabling noninvasive measures of chemical compounds that provide insight into the digestive and endocrine systems. Future computing-based researchers should focus on reducing the burden of wearable sensors, aligning data across multiple devices, automating methods of data annotation, increasing rigor in studying system acceptability, increasing battery lifetime, and rigorously testing the validity of the measures. Such research requires moving promising technological solutions from the controlled laboratory setting to the field.
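As a narrow illustration of the image-based domain's final step (mapping recognized foods and estimated portions to calories via a database lookup), the sketch below uses made-up food labels and nutrient values; a real system would query an authoritative food-composition database.

```python
# Illustrative sketch of mapping recognized food labels plus estimated portion
# sizes to calories via a food-database lookup. All labels and nutrient values
# below are assumptions for illustration only.
from typing import List, Tuple

# Hypothetical per-100 g energy densities (kcal).
FOOD_DB_KCAL_PER_100G = {
    "apple": 52,
    "white_rice_cooked": 130,
    "grilled_chicken": 165,
}

def estimate_calories(items: List[Tuple[str, float]]) -> float:
    """items: (recognized_label, estimated_portion_in_grams) pairs."""
    total = 0.0
    for label, grams in items:
        kcal_per_100g = FOOD_DB_KCAL_PER_100G.get(label)
        if kcal_per_100g is None:
            continue  # unknown label: a real system would flag this for review
        total += kcal_per_100g * grams / 100.0
    return total

# Example output of a hypothetical segmentation + portion-estimation stage.
meal = [("grilled_chicken", 120.0), ("white_rice_cooked", 180.0), ("apple", 150.0)]
print(round(estimate_calories(meal)), "kcal")
```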


2016 ◽  
Author(s):  
Sohrab Saeb ◽  
Luca Lonini ◽  
Arun Jayaraman ◽  
David C. Mohr ◽  
Konrad P. Kording

Abstract The availability of smartphone and wearable sensor technology is leading to a rapid accumulation of human subject data, and machine learning is emerging as a technique to map that data into clinical predictions. As machine learning algorithms are increasingly used to support clinical decision making, it is important to reliably quantify their prediction accuracy. Cross-validation is the standard approach for evaluating the accuracy of such algorithms; however, several cross-validation methods exist and only some of them are statistically meaningful. Here we compared two popular cross-validation methods: record-wise and subject-wise. Using both a publicly available dataset and a simulation, we found that record-wise cross-validation often massively overestimates the prediction accuracy of the algorithms. We also found that this erroneous method is used by almost half of the retrieved studies that used accelerometers, wearable sensors, or smartphones to predict clinical outcomes. As we move towards an era of machine learning-based diagnosis and treatment, using proper methods to evaluate their accuracy is crucial, as erroneous results can mislead both clinicians and data scientists.
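A minimal sketch of the comparison, assuming synthetic data with strong per-subject idiosyncrasies and one subject-level label: record-wise K-fold mixes each subject's records across training and test folds, whereas subject-wise (grouped) cross-validation keeps every subject's records together; the former is exactly the setting in which accuracy is overestimated.

```python
# Sketch of record-wise vs. subject-wise cross-validation on synthetic data in
# which features carry a per-subject "fingerprint" and labels are fixed per
# subject, so record-wise CV can memorize subjects and look overly accurate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(4)
n_subjects, n_records = 20, 50
subjects = np.repeat(np.arange(n_subjects), n_records)

y = subjects % 2                                   # one label per subject
subject_offset = rng.normal(0, 3, n_subjects)[subjects]
X = np.column_stack([
    subject_offset + rng.normal(0, 1, len(subjects)),  # subject fingerprint
    0.3 * y + rng.normal(0, 1, len(subjects)),          # weak label signal
])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
record_wise = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))
subject_wise = cross_val_score(clf, X, y, groups=subjects, cv=GroupKFold(5))
print("record-wise accuracy:", round(record_wise.mean(), 3))   # optimistic
print("subject-wise accuracy:", round(subject_wise.mean(), 3)) # realistic
```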


2018 ◽  
Vol 18 (4) ◽  
pp. 60-72 ◽  
Author(s):  
Tobias MUELLER ◽  
Jonathan GREIPEL ◽  
Tobias WEBER ◽  
Robert H. SCHMITT

Detecting the root causes of non-conforming parts (parts outside the tolerance limits) in production processes requires a high level of expert knowledge. This results in high costs and low flexibility in the choice of personnel to perform analyses. In modern production, a vast amount of process data is available, and machine learning algorithms exist that model processes empirically. The aim of this paper is to introduce a procedure for automated root cause analysis based on machine learning algorithms, reducing both the costs and the necessary expert knowledge. A decision tree algorithm is chosen, a procedure for its application in automated root cause analysis is presented, and simulations are conducted to prove its applicability. Influences affecting the success of detection are identified and simulated, e.g., the necessary amount of data depending on the number of variables, the ratio between categories of non-conformities and OK parts, and the detectable root causes. The simulations are based on a regression model that determines the roughness of drilled holes. They prove the applicability of machine learning algorithms for automated root cause analysis and indicate which influences have to be considered in real scenarios.
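A hedged sketch of the general approach only (not the paper's simulation): a decision tree is trained on process parameters labelled OK or non-conforming, and its learned splits are read as candidate root causes; the parameter names and the synthetic rule are assumptions.

```python
# Illustrative sketch: train a decision tree on process parameters labelled
# OK / non-conforming and read the learned splits as candidate root causes.
# Parameter names and the synthetic non-conformity rule are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
n = 3000
feed_rate = rng.uniform(0.05, 0.30, n)      # mm/rev
spindle_speed = rng.uniform(1000, 4000, n)  # rpm
tool_wear = rng.uniform(0.0, 0.4, n)        # mm

# Synthetic rule standing in for an out-of-tolerance drilling-roughness outcome:
# high feed rate combined with high tool wear produces non-conforming parts.
non_conforming = ((feed_rate > 0.22) & (tool_wear > 0.25)).astype(int)

X = np.column_stack([feed_rate, spindle_speed, tool_wear])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, non_conforming)

# The printed rules point at the responsible parameters and their thresholds.
print(export_text(tree, feature_names=["feed_rate", "spindle_speed", "tool_wear"]))
```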

