Modeling Mine Workforce Fatigue: Finding Leading Indicators of Fatigue in Operational Data Sets

Minerals ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 621
Author(s):  
Elaheh Talebi ◽  
W. Pratt Rogers ◽  
Tyler Morgan ◽  
Frank A. Drews

Mine workers operate heavy equipment while experiencing varying psychological and physiological impacts caused by fatigue. These impacts vary in scope and severity across operators and unique mine operations. Previous studies show the impact of fatigue on individuals, raising substantial concerns about operational safety. Unfortunately, while data exist to illustrate the risks, the mechanisms and complex pattern of contributors to fatigue are not sufficiently understood, illustrating the need for new methods to model and manage the severity of fatigue’s impact on performance and safety. Modern technology and computational intelligence can provide tools to improve practitioners’ understanding of workforce fatigue. Many mines have invested in fatigue monitoring technology (PERCLOS, EEG caps, etc.) as part of their health and safety control systems. Unfortunately, these systems provide “lagging indicators” of fatigue and, in many instances, raise alerts too late in the worker fatigue cycle. Thus, the following question arises: can other operational technology systems provide leading indicators that managers and front-line supervisors can use to help their operators cope with fatigue? This paper explores common data sets available at most modern mines and how these operational data sets can be used to model fatigue. The available data sets include operational, health and safety, equipment health, fatigue monitoring, and weather data. A machine learning (ML) algorithm is presented as a tool to process and model complex issues such as fatigue. ML is therefore used in this study to identify potential leading indicators that can help management make better decisions. Initial findings confirm existing knowledge tying fatigue to time of day and hours worked. These are first-generation models; future models will be forthcoming.
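The abstract does not specify which ML algorithm was used, so the following is only a minimal sketch of how a tree-ensemble classifier could screen operational features (hours worked, time of day, haul-cycle metrics, weather) as candidate leading indicators of fatigue; the file name, column names, and the binary fatigue-alert label are all hypothetical.

```python
# Hypothetical sketch: rank operational features as candidate leading
# indicators of fatigue events. File and column names are illustrative only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Assumed merged data set: one row per operator-shift, with a binary label
# marking whether the fatigue-monitoring system raised an alert.
df = pd.read_csv("operator_shifts.csv")
features = ["hours_worked", "hour_of_day", "days_on_roster",
            "payload_variance", "haul_cycle_time", "ambient_temp"]
X, y = df[features], df["fatigue_alert"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
# Feature importances serve as a first screen for leading indicators.
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```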

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yajie Zou ◽  
Ting Zhu ◽  
Yifan Xie ◽  
Linbo Li ◽  
Ying Chen

Travel time reliability (TTR) is widely used to evaluate transportation system performance. Adverse weather is an important factor affecting TTR because it can cause traffic congestion and crashes. Given the different traffic characteristics under different traffic conditions, it is necessary to explore the impact of adverse weather on TTR under each of them. This study conducted an empirical travel time analysis using traffic and weather data collected on the Yanan corridor in Shanghai. Travel time distributions were analysed by roadway type, weather, and time of day. Four typical scenarios (i.e., peak hours and off-peak hours on the elevated expressway, and peak hours and off-peak hours on the arterial road) were considered in the TTR analysis. Four measures were calculated to evaluate the impact of adverse weather on TTR. The results indicated that the lognormal distribution is preferred for describing the travel time data. Compared with off-peak hours, the impact of adverse weather is more significant during peak hours. The travel time variability, buffer time index, misery index, and frequency of congestion increased by an average of 29%, 19%, 22%, and 63%, respectively, under adverse weather conditions. The findings of this study are useful for transportation management agencies designing traffic control strategies when adverse weather occurs.
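The paper does not spell out its exact formulas for the four reliability measures, so the sketch below uses common formulations (buffer time index from the 95th percentile, misery index from the worst 20% of trips, frequency of congestion as the share of trips above a threshold) and synthetic lognormal samples purely for illustration.

```python
# Sketch of TTR measures for one scenario; the definitions below are
# common formulations and are assumptions, not the paper's exact ones.
import numpy as np
from scipy import stats

def ttr_measures(tt, threshold):
    """tt: array of travel times (min); threshold: congestion cutoff (min)."""
    mean, p95 = tt.mean(), np.percentile(tt, 95)
    worst20 = np.sort(tt)[int(0.8 * len(tt)):]
    return {
        "variability (std)": tt.std(ddof=1),
        "buffer time index": (p95 - mean) / mean,
        "misery index": (worst20.mean() - mean) / mean,
        "freq. of congestion": np.mean(tt > threshold),
    }

# Synthetic samples standing in for clear vs. adverse weather at peak hours.
rng = np.random.default_rng(0)
clear = rng.lognormal(mean=3.0, sigma=0.25, size=2000)
adverse = rng.lognormal(mean=3.1, sigma=0.35, size=2000)

for label, sample in [("clear", clear), ("adverse", adverse)]:
    shape, loc, scale = stats.lognorm.fit(sample, floc=0)  # lognormal fit
    print(label, ttr_measures(sample, threshold=30.0),
          "lognormal sigma:", round(shape, 3))
```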


2020 ◽  
Vol 12 (17) ◽  
pp. 6788 ◽  
Author(s):  
Eva Lucas Segarra ◽  
Germán Ramos Ruiz ◽  
Vicente Gutiérrez González ◽  
Antonis Peppas ◽  
Carlos Fernández Bandera

The use of building energy models (BEMs) is becoming increasingly widespread for assessing the suitability of energy strategies in building environments. The accuracy of the results depends not only on the fit of the energy model used, but also on the required external files, of which the weather file is one of the most important. One source of meteorological data for a given period is an on-site weather station; however, such a station is not always available due to high costs and maintenance. This paper presents a methodology to analyze the impact on simulation results of using an on-site weather station versus weather data calculated by a third-party provider, in order to study whether the third-party data can be used instead of the measured weather data. The methodology consists of three comparison analyses: weather data, energy demand, and indoor temperature. It is applied to four actual test sites in three different locations. The energy study is analyzed at six different temporal resolutions in order to quantify how the variation in energy demand increases as the time resolution decreases. The results showed differences of up to 38% between annual and hourly time resolutions. A sensitivity analysis is used to study the influence of each weather parameter on the energy demand and to determine which sensors are worth installing in an on-site weather station. In these test sites, wind speed and outdoor temperature were the most influential weather parameters.
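As a rough illustration of the temporal-resolution comparison, the sketch below aggregates two simulated demand series (one driven by on-site weather, one by third-party weather) at several resolutions and reports the mean absolute percentage difference; the file and column names are hypothetical.

```python
# Hypothetical sketch: compare two simulated energy-demand series
# (on-site vs. third-party weather) at several temporal resolutions.
import pandas as pd

demand = pd.read_csv("demand.csv", index_col="timestamp", parse_dates=True)
# Assumed columns: "onsite_kwh" and "provider_kwh", hourly values.

resolutions = {"hourly": "H", "daily": "D", "weekly": "W",
               "monthly": "MS", "annual": "YS"}
for label, rule in resolutions.items():
    onsite = demand["onsite_kwh"].resample(rule).sum()
    provider = demand["provider_kwh"].resample(rule).sum()
    pct_diff = ((provider - onsite).abs() / onsite * 100).mean()
    print(f"{label:>8}: mean |difference| = {pct_diff:.1f}%")
```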


2020 ◽  
Vol 11 (1) ◽  
pp. 150-170 ◽  
Author(s):  
Seyed Sajad Mousavi ◽  
Reza Khani Jazani ◽  
Elizabeth A. Cudney ◽  
Paolo Trucco

Purpose: This study aims to quantify the multifaceted relationship between lean implementation and occupational health and safety (OHS) performance. Hypotheses based on a set of antecedents (mediating factors) are built and quantitatively tested. Design/methodology/approach: Data were collected through an international survey with responses from more than 20 countries. Partial least squares-based structural equation modeling was used to test a theoretical framework derived from the literature. Leading indicators (formative indices) were used to evaluate the four antecedents of OHS performance (mediating factors). Findings: All the identified antecedents show a significant mediating role. Antecedents related to the working environment and organizational factors have the strongest mediating effect. The results support the importance of using OHS leading indicators to appropriately measure the impact of lean implementation on workers’ health and safety. Research limitations/implications: The proposed OHS leading indicators connecting lean practices to OHS performance antecedents are only explored in this study. Further research is therefore needed to establish a comprehensive, validated and practically usable set of leading indicators. Practical implications: As there are both synergistic and trade-off relationships between lean and safety, the findings of this study will enable managers and organizations to leverage the positive effects of lean implementation on workers’ health and safety and mitigate the negative effects. Originality/value: Several prior studies have investigated the multifaceted link between lean and OHS; however, this is the first study to test direct and mediated influence by defining a coherent set of antecedents. The results justify and strongly support the adoption of OHS leading indicators to measure the impact of lean implementation on OHS performance.
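For readers without access to a PLS-SEM package, the sketch below illustrates the mediation logic with plain OLS regressions in statsmodels; this is not the authors' PLS-SEM with formative indices, and all variable names are hypothetical stand-ins for survey-derived scores.

```python
# Illustration only: a simple regression-based mediation check with
# statsmodels, not the PLS-SEM used in the paper. Variable names are
# hypothetical stand-ins for survey-derived scores.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_scores.csv")
# Assumed columns: lean_score, work_env (mediator), ohs_performance.

# Path a: lean implementation -> working-environment antecedent.
path_a = smf.ols("work_env ~ lean_score", data=df).fit()
# Paths b and c': antecedent and lean implementation -> OHS performance.
path_bc = smf.ols("ohs_performance ~ work_env + lean_score", data=df).fit()

print(path_a.summary().tables[1])
print(path_bc.summary().tables[1])

# Rough indirect-effect estimate (a*b); formal inference would use
# bootstrapping or a dedicated SEM package.
indirect = path_a.params["lean_score"] * path_bc.params["work_env"]
print("indirect effect (a*b):", round(indirect, 3))
```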


2021 ◽  
Vol 246 ◽  
pp. 04003
Author(s):  
Hans Smedsrud Kristofersen ◽  
Kai Xue ◽  
Zhirong Yang ◽  
Liv-Inger Stenstad ◽  
Tor Emil Giske ◽  
...  

The objective of this study is to evaluate and predict energy use in different buildings at St. Olavs Hospital in Trondheim during the COVID-19 pandemic. Operational data from St. Olavs Hospital combined with weather data are used with machine learning to predict energy use for the hospital. Analysis of the energy data showed that the case buildings at the hospital did not use energy differently during the pandemic this year compared with the same period last year, except for the lab center. The consumption of electricity, heating and cooling is very similar in both 2019 and 2020 for all buildings, but during the pandemic in 2020 the lab center had a 35% reduction in electricity compared with the previous year. The energy needed for heating and cooling operating room 1 from the end of June to the end of November was also calculated and was estimated at 256 kWh/m2. The machine learning algorithms perform very well in predicting the energy consumption of the case buildings; Random Forest and AdaBoost prove to be the best models, with less than 10% margin of error, and some of the models have only 4% error. An analysis of the effect of humidifying ventilation air on energy consumption in operating room 1 was also carried out. The impact on energy consumption was high in winter and, in the coldest periods, can double the energy needed for ventilation.
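A minimal sketch of the kind of model comparison described above, using scikit-learn's Random Forest and AdaBoost regressors on a merged operational-plus-weather table; the file name, feature set, and error metric (MAPE) are assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: predict building energy use from operational and
# weather features and compare two ensemble regressors.
import pandas as pd
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("hospital_energy.csv")   # assumed merged hourly data
features = ["outdoor_temp", "humidity", "solar_radiation",
            "hour_of_day", "day_of_week", "is_pandemic_period"]
X, y = df[features], df["electricity_kwh"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)   # keep chronological order

for name, model in [("Random Forest", RandomForestRegressor(random_state=0)),
                    ("AdaBoost", AdaBoostRegressor(random_state=0))]:
    model.fit(X_train, y_train)
    mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
    print(f"{name}: MAPE = {mape * 100:.1f}%")
```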


Author(s):  
Osama Alsalous ◽  
Susan Hotle

Air traffic management efficiency in the descent phase of flight is a key area of interest in aviation research for the United States, Europe, and, more recently, other parts of the world. The efficiency of arrival travel times within the terminal airspace, typically defined as within 100 nmi of the arrival airport, is one of nineteen key performance indicators defined by the Federal Aviation Administration (FAA) and the International Civil Aviation Organization. This study models the relationship between travel time within the terminal airspace and contributing factors using a multivariate log-linear model to quantify the impact that these factors have on the total travel time within the last 100 nmi. The results were compared with the baseline set of variables currently used for benchmarking at the FAA. The analyzed data included flight and weather data from January 1, 2018 to March 31, 2018 for five airports in the United States: Chicago O’Hare International Airport, Hartsfield-Jackson Atlanta International Airport, San Francisco International Airport, John F. Kennedy International Airport, and LaGuardia Airport. The modeling results showed a significant improvement in the prediction accuracy of travel times over the baseline methodology when additional factors, such as wind, meteorological conditions, demand and capacity, ground delay programs, market distance, time of day, and day of week, are included. Root mean squared error values from out-of-sample testing were used to measure the accuracy of the estimated models.
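A hedged sketch of a multivariate log-linear specification of this kind, fitted with statsmodels and scored by out-of-sample RMSE; the regressors mirror the factors named in the abstract, but the data layout, column names, and train/test split are assumptions.

```python
# Sketch of a multivariate log-linear travel-time model; column names,
# indicator coding and the chronological train/test split are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import mean_squared_error

df = pd.read_csv("arrivals_last_100nmi.csv")   # one row per arrival
train = df[df["date"] < "2018-03-01"]
test = df[df["date"] >= "2018-03-01"]

formula = ("np.log(travel_time_min) ~ wind_speed + C(imc) + demand "
           "+ capacity + C(gdp_active) + market_distance "
           "+ C(hour_of_day) + C(day_of_week)")
model = smf.ols(formula, data=train).fit()

pred = np.exp(model.predict(test))             # back-transform to minutes
rmse = np.sqrt(mean_squared_error(test["travel_time_min"], pred))
print("out-of-sample RMSE (min):", round(rmse, 2))
```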


TAPPI Journal ◽  
2018 ◽  
Vol 17 (09) ◽  
pp. 519-532 ◽  
Author(s):  
Mark Crisp ◽  
Richard Riehle

Polyaminopolyamide-epichlorohydrin (PAE) resins are the predominant commercial products used to manufacture wet-strengthened paper products for grades requiring wet-strength permanence. Since their development in the late 1950s, the first generation (G1) resins have proven to be one of the most cost-effective technologies available to provide wet strength to paper. Throughout the past three decades, regulatory directives and sustainability initiatives from various organizations have driven the development of cleaner and safer PAE resins and paper products. Early efforts in this area focused on improving worker safety and reducing the impact of PAE resins on the environment. These efforts led to the development of resins containing significantly reduced levels of 1,3-dichloro-2-propanol (1,3-DCP) and 3-monochloropropane-1,2-diol (3-MCPD), potentially carcinogenic byproducts formed during the manufacturing process of PAE resins. As the levels of these byproducts decreased, the environmental, health, and safety (EH&S) profile of PAE resins and paper products improved. Recent initiatives from major retailers are focusing on product ingredient transparency and quality, thus encouraging the development of safer product formulations while maintaining performance. PAE resin research over the past 20 years has been directed toward regulatory requirements to improve consumer safety and minimize exposure to potentially carcinogenic materials found in various paper products. One of the best-known regulatory requirements is the set of recommendations from the German Federal Institute for Risk Assessment (BfR), which defines the levels of 1,3-DCP and 3-MCPD that can be extracted by water from various food contact grades of paper. These criteria led to the development of third generation (G3) products that contain very low levels of 1,3-DCP (typically <10 parts per million in the as-received/delivered resin). This paper outlines the PAE resin chemical contributors to adsorbable organic halogens and 3-MCPD in paper and provides recommendations for the use of each PAE resin product generation (G1, G1.5, G2, G2.5, and G3).


2016 ◽  
Vol 3 (1) ◽  
Author(s):  
LAL SINGH ◽  
PARMEET SINGH ◽  
RAIHANA HABIB KANTH ◽  
PURUSHOTAM SINGH ◽  
SABIA AKHTER ◽  
...  

WOFOST version 7.1.3 is a computer model that simulates the growth and production of annual field crops. All the run options are operational through a graphical user interface named WOFOST Control Center version 1.8 (WCC). WCC facilitates selecting the production level and the input data sets on crop, soil, weather, crop calendar, hydrological field conditions and soil fertility parameters, as well as the output options. The files with crop, soil and weather data are explained, as well as the run files and the output files. A general overview is given of the development and the applications of the model. Its underlying concepts are discussed briefly.
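WCC is a graphical interface, but the same WOFOST model can also be driven programmatically; the sketch below uses the open-source PCSE package (a separate Python implementation of WOFOST, not WCC itself), with placeholder input files and site coordinates. Class and module paths follow recent PCSE documentation and may differ between versions.

```python
# Sketch using the open-source PCSE package, which provides a Python
# implementation of WOFOST (WCC itself is a separate GUI). File names
# and coordinates are placeholders.
from pcse.base import ParameterProvider
from pcse.db import NASAPowerWeatherDataProvider
from pcse.fileinput import CABOFileReader, YAMLAgroManagementReader
from pcse.models import Wofost72_WLP_FD
from pcse.util import WOFOST72SiteDataProvider

crop = CABOFileReader("crop.cab")             # crop parameter file
soil = CABOFileReader("soil.cab")             # soil parameter file
site = WOFOST72SiteDataProvider(WAV=100, CO2=360)
params = ParameterProvider(cropdata=crop, soildata=soil, sitedata=site)

agro = YAMLAgroManagementReader("agro.yaml")  # crop calendar / management
weather = NASAPowerWeatherDataProvider(latitude=34.1, longitude=74.8)

model = Wofost72_WLP_FD(params, weather, agro)  # water-limited production
model.run_till_terminate()
results = model.get_output()                  # daily state variables
print(results[-1])
```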


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yahya Albalawi ◽  
Jim Buckley ◽  
Nikola S. Nikolov

This paper presents a comprehensive evaluation of data pre-processing and word embedding techniques in the context of Arabic document classification in the domain of health-related communication on social media. We evaluate 26 text pre-processing techniques applied to Arabic tweets within the process of training a classifier to identify health-related tweets. For this task we use the (traditional) machine learning classifiers KNN, SVM, Multinomial NB and Logistic Regression. Furthermore, we report experimental results with the deep learning architectures BLSTM and CNN for the same text classification problem. Since word embeddings are more typically used as the input layer in deep networks, in the deep learning experiments we evaluate several state-of-the-art pre-trained word embeddings with the same text pre-processing applied. To achieve these goals, we use two data sets: one for both training and testing, and another for testing the generality of our models only. Our results point to the conclusion that only four out of the 26 pre-processing techniques improve the classification accuracy significantly. For the first data set of Arabic tweets, we found that Mazajak CBOW pre-trained word embeddings as the input to a BLSTM deep network led to the most accurate classifier, with an F1 score of 89.7%. For the second data set, Mazajak Skip-Gram pre-trained word embeddings as the input to BLSTM led to the most accurate model, with an F1 score of 75.2% and accuracy of 90.7%, compared with an F1 score of 90.8% but a lower accuracy of 70.89% achieved by Mazajak CBOW for the same architecture. Our results also show that the best traditional classifier we trained is comparable to the deep learning methods on the first data set, but significantly worse on the second data set.
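A minimal sketch of the BLSTM-over-pretrained-embeddings setup described above, written with the Keras API; the hyperparameters and the precomputed embedding-matrix file are illustrative assumptions rather than the configuration tuned in the paper.

```python
# Minimal sketch of a BLSTM classifier over pre-trained word embeddings
# (e.g., an externally prepared Mazajak matrix). Shapes, hyperparameters
# and the .npy file are illustrative assumptions.
import numpy as np
from tensorflow.keras.initializers import Constant
from tensorflow.keras.layers import Bidirectional, Dense, Embedding, LSTM
from tensorflow.keras.models import Sequential

embedding_matrix = np.load("mazajak_cbow.npy")  # assumed shape (vocab_size, 300)
vocab_size, embed_dim = embedding_matrix.shape

model = Sequential([
    Embedding(vocab_size, embed_dim,
              embeddings_initializer=Constant(embedding_matrix),
              trainable=False),                 # keep pre-trained vectors fixed
    Bidirectional(LSTM(128)),
    Dense(1, activation="sigmoid"),             # health-related vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# X_train: integer-encoded tweets padded to a fixed length; y_train: 0/1 labels.
# model.fit(X_train, y_train, validation_split=0.1, epochs=5, batch_size=64)
```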


2021 ◽  
pp. 135581962110354
Author(s):  
Anthony W Gilbert ◽  
Emmanouil Mentzakis ◽  
Carl R May ◽  
Maria Stokes ◽  
Jeremy Jones

Objective: Virtual Consultations may reduce the need for face-to-face outpatient appointments, thereby potentially reducing the cost and time involved in delivering health care. This study reports a discrete choice experiment (DCE) that identifies factors that influence patient preferences for virtual consultations in an orthopaedic rehabilitation setting. Methods: Previous research from the CONNECT (Care in Orthopaedics, burdeN of treatmeNt and the Effect of Communication Technology) Project and best practice guidance informed the development of our DCE. An efficient fractional factorial design with 16 choice scenarios was created that identified all main effects and partial two-way interactions. The design was divided into two blocks of eight scenarios each to reduce the impact of cognitive fatigue. Data analyses were conducted using binary logit regression models. Results: Sixty-one paired response sets (122 subjects) were available for analysis. DCE factors (whether the therapist is known to the patient, duration of appointment, time of day) and demographic factors (patient qualifications, access to equipment, difficulty with activities, multiple health issues, travel costs) were significant predictors of preference. We estimate that a patient is less than 1% likely to prefer a virtual consultation if the patient has a degree, lacks access to the equipment and software needed for a virtual consultation, does not have difficulties with day-to-day activities, is undergoing rehabilitation for one problem area, has to pay less than £5 to travel, and is having a consultation with a therapist not known to them, in 1 week’s time, lasting 60 minutes, at 2 pm. We have developed a simple conceptual model to explain how these factors interact to inform preference, including patients’ access to resources, the context for the consultation and the requirements of the consultation. Conclusions: This conceptual model provides a framework to focus attention on factors that might influence patient preference for virtual consultations. Our model can inform the development of future technologies, trials, and qualitative work to further explore the mechanisms that influence preference.
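A minimal sketch of the binary logit analysis of the DCE responses using statsmodels; the attribute coding, column names, and data layout are assumptions, with the predicted-probability example loosely mirroring the <1% profile described above.

```python
# Sketch of the binary logit analysis of paired DCE responses; attribute
# names follow the factors listed above, but coding and layout are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

choices = pd.read_csv("dce_responses.csv")  # one row per respondent-scenario
# Assumed outcome column: chose_virtual in {0, 1}.

formula = ("chose_virtual ~ C(therapist_known) + appointment_duration "
           "+ C(time_of_day) + C(has_degree) + C(has_equipment) "
           "+ C(activity_difficulty) + C(multiple_health_issues) "
           "+ travel_cost")
logit = smf.logit(formula, data=choices).fit()
print(logit.summary())

# Predicted probability for one profile (cf. the <1% example above);
# categorical levels must match those present in the data.
profile = pd.DataFrame([{"therapist_known": 0, "appointment_duration": 60,
                         "time_of_day": "afternoon", "has_degree": 1,
                         "has_equipment": 0, "activity_difficulty": 0,
                         "multiple_health_issues": 0, "travel_cost": 4}])
print("P(prefer virtual):", logit.predict(profile).iloc[0])
```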


2021 ◽  
pp. 000276422110216
Author(s):  
Kazimierz M. Slomczynski ◽  
Irina Tomescu-Dubrow ◽  
Ilona Wysmulek

This article proposes a new approach to analyze protest participation measured in surveys of uneven quality. Because single international survey projects cover only a fraction of the world’s nations in specific periods, researchers increasingly turn to ex-post harmonization of different survey data sets not a priori designed as comparable. However, very few scholars systematically examine the impact of survey data quality on substantive results. We argue that the variation in source data, especially deviations from standards of survey documentation, data processing, and computer files—proposed by methodologists of Total Survey Error, Survey Quality Monitoring, and Fitness for Intended Use—is important for analyzing protest behavior. In particular, we apply the Survey Data Recycling framework to investigate the extent to which indicators of attending demonstrations and signing petitions in 1,184 national survey projects are associated with measures of data quality, controlling for variability in the questionnaire items. We demonstrate that the null hypothesis of no impact of measures of survey quality on indicators of protest participation must be rejected. Measures of survey documentation, data processing, and computer records, taken together, explain over 5% of the intersurvey variance in the proportions of the populations attending demonstrations or signing petitions.
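One way to make the "over 5%" figure concrete is an incremental-R² comparison: regress the protest indicator on controls alone and then add the block of data-quality measures. The sketch below illustrates this with statsmodels; it is not the authors' Survey Data Recycling estimation, and all column names are placeholders.

```python
# Sketch: how much intersurvey variance in a protest indicator is
# attributable to data-quality measures, via an incremental R^2 comparison.
import pandas as pd
import statsmodels.formula.api as smf

surveys = pd.read_csv("sdr_survey_level.csv")  # one row per national survey

controls = "C(item_wording) + C(region) + year"
quality = "doc_quality + processing_errors + record_consistency"

base = smf.ols(f"pct_demonstrated ~ {controls}", data=surveys).fit()
full = smf.ols(f"pct_demonstrated ~ {controls} + {quality}", data=surveys).fit()

print("added R^2 from quality measures:",
      round(full.rsquared - base.rsquared, 3))
```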

