An Evaluation of Sleepiness, Performance, and Workload Among Operators During a Real-Time Reactive Telerobotic Lunar Mission Simulation

Author(s):  
Zachary Glaros ◽  
Robert E. Carvalho ◽  
Erin E. Flynn-Evans

Objective We assessed operator performance during a real-time reactive telerobotic lunar mission simulation to understand how daytime versus nighttime operations might affect sleepiness, performance, and workload. Background Control center operations present factors that can influence sleepiness, neurobehavioral performance, and workload. Each spaceflight mission poses unique challenges that make it difficult to predict how long operators can safely and accurately conduct operations. We aimed to evaluate the performance impact of time on task and time of day using a simulated telerobotic lunar rover to better inform staffing and scheduling needs for the upcoming Volatiles Investigating Polar Exploration Rover (VIPER) mission. Methods We studied seven trained operators in a simulated mission control environment. Operators completed two five-hour simulations in a randomized order, beginning at noon and midnight. Performance was evaluated every 25 minutes using the Karolinska Sleepiness Scale, Psychomotor Vigilance Task, and NASA Task Load Index. Results Participants rated themselves as sleepier during the midnight simulation (5.06 ± 2.28) than during the noon simulation (3.12 ± 1.44; p < .001). Reaction time worsened over time during the midnight simulation but did not vary between simulations. Workload was rated higher during the noon (37.93 ± 20.09) than during the midnight simulation (32.09 ± 21.74; p = .007). Conclusion Our findings suggest that work shifts during future operations should be limited in duration to minimize sleepiness. Our findings also suggest that working during the day, when distractions are present, increases perceived workload. Further research is needed to understand how working consecutive shifts and taking breaks within a shift influence performance.

Author(s):  
Adam Caspari ◽  
Brian Levine ◽  
Jeffrey Hanft ◽  
Alla Reddy

Amid significant increases in ridership (9.8% over the past 5 years) on the more than 100-year-old New York City Transit (NYCT) subway system, NYCT has become aware of increased crowding on station platforms. Because of limited platform capacity, platforms become crowded even during minor service disruptions. A real-time model was developed to estimate crowding conditions and to predict crowding 15 min into the future. The algorithm combined historical automated fare collection data on passenger entry, used to forecast station entrances; automated fare collection origin–destination inference information, used to assign incoming passengers to a particular direction and line by time of day; and general transit feed specification–real time data, used to determine predicted train arrival times and assign passengers on the platform to an incoming train. This model was piloted at the Wall Street Station on the No. 2 and No. 3 Lines in New York City’s Financial District, which serves an average of 28,000 weekday riders, and was validated with extensive field checks. A dashboard was developed to display this information graphically in real time. On the basis of predictions of gaps in service and, consequently, high levels of crowding, dispatchers at NYCT’s Rail Control Center can alter service by holding a train or skipping several stops to alleviate crowding conditions and provide safe and reliable service in these situations.
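The three-stage pipeline described above (forecast station entries, split entrants by direction, clear the platform at predicted train arrivals) can be sketched as follows. This is an illustrative simplification, not NYCT's actual model: the function name and the assumption that every arriving train boards all waiting passengers are mine.

```python
def predict_platform_crowding(entry_forecast, direction_split, train_arrivals,
                              horizon_min=15):
    """Estimate passengers waiting on one platform over a prediction horizon.

    entry_forecast: list of (minute_from_now, expected_entries), e.g. from
                    historical automated fare collection data
    direction_split: fraction of entrants assigned to this platform
                     (from origin-destination inference, by time of day)
    train_arrivals: sorted minutes-from-now of predicted train arrivals
                    (e.g. from GTFS-realtime trip updates)
    Returns a list of (minute, estimated_passengers_waiting).
    """
    waiting = 0.0
    crowding = []
    arrivals = list(train_arrivals)
    for minute, entries in entry_forecast:
        if minute > horizon_min:
            break
        waiting += entries * direction_split
        # Simplification: a train arriving this minute clears the platform
        # (assumes all waiting passengers board).
        while arrivals and arrivals[0] <= minute:
            arrivals.pop(0)
            waiting = 0.0
        crowding.append((minute, waiting))
    return crowding
```

A long predicted gap between arrivals shows up directly as a rising `waiting` count, which is the signal dispatchers would act on.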


2021 ◽  
Vol 2 (Supplement_1) ◽  
pp. A50-A50
Author(s):  
I Marando ◽  
R Matthews ◽  
L Grosser ◽  
C Yates ◽  
S Banks

Abstract Sustained operations expose individuals to long work periods, which deteriorates their ability to sustain attention. Biological factors, including sleep deprivation and time of day, have been shown to play a critical role in the ability to sustain attention. However, a gap in the literature exists regarding external factors, such as workload. Therefore, the aim of this study was to investigate the combined effect of sleep deprivation, time of day, and workload on sustained attention. Twenty-one participants (aged 18–34 years, 10 female) were exposed to 62 hours of sleep deprivation within a controlled laboratory environment. Every 8 hours, sustained attention was measured using a 30-minute monotonous driving task, and subjective workload was measured using the NASA-Task Load Index (TLX). Workload, defined as time on task, was assessed by splitting the drive into two 15-minute loops. A mixed model ANOVA revealed significant main effects of day (sleep deprivation) and time of day on lane deviation, number of crashes, speed deviation, and time outside the safe zone (all p<.001). There was a significant main effect of workload (time on task) on lane deviation (p=.042), indicating that a longer time on task resulted in greater lane deviation. NASA-TLX scores significantly increased with sleep deprivation (p<.001), indicating that subjective workload increased with sleep loss even though the task remained constant. Workload, sleep deprivation, and time of day produced a deterioration in sustained attention. Accordingly, countermeasures should consider not only sleep deprivation and time of day but also workload (time on task).


Author(s):  
V. Holovan ◽  
V. Gerasimov ◽  
A. Holovan ◽  
N. Maslich

Fighting in the Donbas, which has been going on for more than five years, shows that a skillful counter-battery fight is an important factor in achieving success in wars of this kind, especially in conditions where, for known reasons, the use of combat aviation is minimized. With the development of technical warfare, counter-battery fire support came to rely on radar stations that reconnoiter artillery positions, known in modern terms as counter-battery radars. The principle of counter-battery radar is based on detecting a target (artillery shell, mortar mine, or rocket) at an early stage of flight and making several measurements of the coordinates of the ammunition's current position. From these data the projectile's flight trajectory is calculated and, by prolonging it and extrapolating the measurements, the probable coordinates of the firing artillery, as well as the points of ammunition impact, are determined. In addition, the technical capabilities of radars of this class make it possible to recognize the type and caliber of artillery systems and to adjust the fire of friendly artillery. The main advantages of these radars are:
- mobility (transportability);
- surveillance of large areas of terrain at long distances;
- the ability to obtain target data in near real time;
- independence from time of day and weather conditions;
- relatively high combat effectiveness.
The purpose of the article is to determine the leading role and place of the counter-battery radar among other artillery instrumental reconnaissance tools and to compare the combat capabilities of modern counter-battery radars in service with Ukrainian troops and with some leading countries (USA, China, Russia), as well as those being developed and tested in Ukraine.
The method of achieving this goal is a comparative analysis of the features of construction and combat capabilities of modern models of counter-battery radar in Ukraine and in other countries. As a result of the conducted analysis, the directions of further improvement of the radar armament, increasing the capabilities of existing and promising counter-battery radar samples were determined.
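The back-extrapolation principle described above can be illustrated with a minimal drag-free ballistic sketch. This is only a demonstration of the idea: fielded counter-battery radars model drag, wind, and terrain, and the function name and drag-free assumption here are mine, not drawn from the article.

```python
import numpy as np

def locate_battery(times, xs, ys, g=9.81):
    """Back-extrapolate a projectile's launch point from in-flight radar fixes.

    Assumes drag-free ballistic motion (constant horizontal velocity,
    constant gravitational acceleration) -- a deliberate simplification.
    times: measurement times (s); xs: ground range (m); ys: altitude (m)
    Returns (x_launch, x_impact), the ranges where altitude is zero.
    """
    t = np.asarray(times, float)
    # Horizontal velocity from a linear fit of range versus time.
    vx, x0 = np.polyfit(t, np.asarray(xs, float), 1)
    # Vertical motion y = y0 + vy*t - g/2*t^2: remove the known gravity
    # term, then fit the remaining linear part.
    vy, y0 = np.polyfit(t, np.asarray(ys, float) + 0.5 * g * t**2, 1)
    # Solve y(t) = 0; the earlier root is launch, the later is impact.
    roots = np.roots([-0.5 * g, vy, y0])
    t_launch, t_impact = sorted(roots.real)
    return x0 + vx * t_launch, x0 + vx * t_impact
```

With only a few mid-flight fixes, the fit pins down both the firing position (for counter-fire) and the predicted impact point, which matches the two outputs the abstract attributes to this radar class.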


Author(s):  
Maryam Maghsoudipour ◽  
Ramin Moradi ◽  
Sara Moghimi ◽  
Sonia Ancoli-Israel ◽  
Pamela N. DeYoung ◽  
...  

2018 ◽  
Vol 10 (1) ◽  
Author(s):  
Alexandra Swirski ◽  
Dr. David Pearl ◽  
Dr. Olaf Berke ◽  
Terri O'Sullivan ◽  
Deborah Stacey

Objective: Our objective was to assess the suitability of the data collected by the Animal Poison Control Center, run by the American Society for the Prevention of Cruelty to Animals, for the surveillance of toxicological exposures in companion animals in the United States. Introduction: There have been a number of non-infectious intoxication outbreaks reported in North American companion animal populations over the last decade1. The most devastating outbreak to date was the 2007 melamine pet food contamination incident, which affected thousands of pet dogs and cats across North America1. Despite these events, there have been limited efforts to conduct real-time surveillance of toxicological exposures in companion animals nationally, and there is no central registry for the reporting of toxicological events in companion animals in the United States. However, there are a number of poison control centers in the US that collect extensive data on toxicological exposures in companion animals, one of which is the Animal Poison Control Center (APCC) operated by the American Society for the Prevention of Cruelty to Animals (ASPCA). Each year the APCC receives thousands of reports of suspected animal poisonings and collects extensive information from each case, including location of caller, exposure history, diagnostic findings, and outcome. The records from each case are subsequently entered and stored in the AnTox database, an electronic medical record database maintained by the APCC. Therefore, the AnTox database represents a novel source of data for real-time surveillance of toxicological events in companion animals, and may be used for surveillance of pet food and environmental contamination events that may negatively impact both veterinary and human health. Methods: Recorded data from calls to the APCC were collected from the AnTox database from January 1, 2005 to December 31, 2014, inclusive. 
Sociodemographic data were extracted from the American 2010 decennial census and the American Community Surveys. Choropleth maps were used for preliminary analyses to examine the distribution of reporting to the hotline at the county level and to identify any “holes” in surveillance. To further identify whether gaps in reporting were randomly distributed or tended to occur in clusters, as well as to look for any predictable spatial clusters of high rates of reporting, spatial scan statistics, based on a Poisson model, were employed. We fitted multilevel logistic regression models, to account for clustering within county and state, to identify factors (e.g., season, human demographic factors) that are related to predictable changes in call volume or reporting, which may bias the results of quantitative methods for aberration/outbreak detection. Results: Throughout the study period, over 40% of counties reported at least one call to the hotline each year, with the majority of calls coming from the Northeast. Conversely, there was a large “hole” in coverage in Midwestern and southeastern states. The locations of the most likely high and low call rate clusters were relatively stable throughout the study period and were associated with socioeconomic status (SES), as the most likely high-risk clusters were identified in areas of high SES. Similar results were identified using multivariable analysis, as indicators of high SES were found to be positively associated with rates of calls to the hotline at the county level. Conclusions: Socioeconomic status is a major factor impacting the reporting of toxicological events to the APCC, and needs to be accounted for when applying cluster detection methods to identify outbreaks of mass poisoning events. 
Large spatial gaps in the network of potential callers to the center also need to be recognized when interpreting the spatiotemporal results of analyses involving these data, particularly when statistical methods that are highly influenced by edge effects are used.
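The Poisson-model spatial scan mentioned in the Methods scores each candidate cluster of counties by a log-likelihood ratio and reports the highest-scoring one. The sketch below shows the standard Kulldorff form of that statistic for high-rate clusters; it is not the authors' implementation, and the candidate-cluster representation is my own simplification.

```python
import math

def poisson_llr(c, e, C):
    """Kulldorff log-likelihood ratio for a candidate high-rate cluster.

    c: observed calls inside the candidate cluster
    e: expected calls inside, scaled so expectations sum to C overall
    C: total observed calls across all counties
    Returns 0 unless the cluster's rate exceeds its expectation.
    """
    if c <= e or e <= 0.0 or c >= C:
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))

def most_likely_cluster(candidates, C):
    """Return the (label, observed, expected) candidate with the highest LLR."""
    return max(candidates, key=lambda z: poisson_llr(z[1], z[2], C))
```

In practice the expected counts `e` would be adjusted for the SES covariates the study found influential, so that wealthy, high-reporting areas are not flagged as "outbreaks" merely for calling more often.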


2020 ◽  
Author(s):  
Francisco D. S. Melo ◽  
Antonio S. Lima ◽  
Karen C. O. Salim ◽  
Fernando R. Lage

This article presents the design, algorithms, and results obtained, in situations not foreseen by the operating procedures, from the use of a real-time assessment and analysis tool for changing the generation of thermoelectric plants in a configuration known as an altered grid, which may affect the performance of Systemic Special Protection Schemes implemented in power system operation planning. Through analysis of the security region, control center operators gain adequate support, in real time, to issue a new and precise generation request and thus avoid instabilities and overloads in the electric system under analysis, and especially prolonged interruptions of electricity supply to consumers.


Author(s):  
Joshua B. Hurwitz

Increased real-time risk-taking under sleep loss could be marked by changes in risk perception or acceptance. Risk-perception processes are those involved in estimating real-time parameters such as the speeds and distances of hazardous objects. Risk-acceptance processes relate to response choices given risk estimates. Risk-taking under fatigue was studied using a simulated intersection-crossing driving task in which subjects decided when it was safe to cross an intersection as an oncoming car approached from the cross street. The subjects performed this task at 3-hour intervals over a 36-hour period without sleep. Results were modeled using a model of real-time risky decision making that has perceptual components that process speed, time and distance information, and a decisional component for accepting risk. Results showed that varying a parameter for the decisional component across sessions best accounted for variations in performance relating to time of day.
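The two-component structure described above, a perceptual stage that estimates speed and distance and a decisional stage that accepts or rejects the risk, can be sketched as a simple gap-acceptance rule. The function name, thresholds, and noise term below are illustrative assumptions, not the paper's fitted model; they only show how a single decisional parameter can shift risk-taking while perception stays fixed.

```python
def cross_decision(distance_m, speed_mps, crossing_time_s,
                   speed_noise=0.0, risk_threshold=1.0):
    """Decide whether to cross ahead of an oncoming car.

    Perceptual component: estimate the car's time-to-arrival from
    (possibly misperceived) speed and distance.
    Decisional component: accept the gap when the estimated safety
    margin exceeds risk_threshold seconds. Lowering the threshold
    models greater risk acceptance -- the kind of decisional-parameter
    shift the modeling attributed to time of day.
    """
    perceived_speed = speed_mps + speed_noise        # perceptual error
    time_to_arrival = distance_m / perceived_speed   # seconds to conflict
    safety_margin = time_to_arrival - crossing_time_s
    return safety_margin > risk_threshold
```

Note that the same physical gap (40 m at 20 m/s) is rejected under a cautious threshold but accepted under a lower, more risk-tolerant one, which is how a purely decisional change produces riskier crossings without any perceptual degradation.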


Stroke ◽  
2017 ◽  
Vol 48 (suppl_1) ◽  
Author(s):  
Benjamin Y Andrew ◽  
Colleen M Stack ◽  
Julian P Yang ◽  
Jodi A Dodds

Introduction: The use of mobile electronic care coordination via smartphone technology is a novel approach aimed at increasing efficiency in acute stroke care. One such platform, StopStroke© (Pulsara Inc., Bozeman, MT), serves to coordinate personnel (EMS, nurses, physicians) during stroke codes with real-time digital alerts. This study was designed to examine post-implementation data from multiple medical centers utilizing the StopStroke© application, and to evaluate the effect of method of arrival to ED and time of presentation on these results. Methods: A retrospective analysis of all acute stroke codes using StopStroke© from 3/2013 – 5/2016 at 12 medical centers was performed. Preliminary unadjusted comparison of clinical metrics (door-to-needle time [DTN], door-to-CT time [DTC], and rate of goal DTN) was performed between subgroups based on both method of arrival (EMS vs. other arrival to ED) and time of day. Effects were then adjusted for confounding variables (age, sex, NIHSS score) in multiple linear and logistic regression models. Results: The final dataset included 2589 unique cases. Patients arriving by EMS were older (median age 67 vs. 64, P < 0.0001), had more severe strokes (median NIHSS score 8 vs. 4, P < 0.0001), and were more likely to receive tPA (20% vs. 12%, P < 0.0001) than those arriving to ED via alternative method. After adjustment for age, sex, NIHSS score and case time, patients arriving via EMS had shorter DTC (6.1 min shorter, 95% CI [2, 10.3]) and DTN (12.8 min shorter, 95% CI [4.6, 21]) and were more likely to meet goal DTN (OR 1.83, 95% CI [1.1, 3]). Adjusted analysis also showed longer DTC (7.7 min longer, 95% CI [2.4, 13]) and DTN (21.1 min longer, 95% CI [9.3, 33]), and reduced rate of goal DTN (OR 0.3, 95% CI [0.15, 0.61]) in cases occurring from 1200-1800 when compared to those occurring from 0000-0600. 
Conclusions: By incorporating real-time pre-hospital data obtained via smartphone technology, this analysis provides unique insight into acute stroke codes. Additionally, mobile electronic stroke care coordination is a promising method for more efficient and efficacious acute stroke care. Furthermore, early activation of a mobile coordination platform in the field appears to promote a more expedited and successful care process.


2021 ◽  
Author(s):  
Jacqueline Louise Mair ◽  
Lawrence Hayes ◽  
Amy Campbell ◽  
Duncan Buchan ◽  
Chris Easton ◽  
...  

BACKGROUND Just-in-time adaptive interventions (JITAIs) provide real-time, ‘in the moment’ behaviour change support to people when they need it most. JITAIs could be a viable way to provide personalised physical activity support to older adults in the community. However, it is unclear how feasible it is to remotely deliver a physical activity intervention via a smartphone to older adults, or how acceptable older adults would find a JITAI targeting physical activity in everyday life. OBJECTIVE (1) to describe the development of “JITABug”, a personalised smartphone- and activity-tracker-delivered JITAI designed to support older adults to increase or maintain their physical activity level; (2) to explore the acceptability of JITABug in a free-living setting; and (3) to assess the feasibility of conducting an effectiveness trial of the JITABug intervention. METHODS The intervention development process was underpinned by the Behaviour Change Wheel. The intervention consisted of a wearable activity tracker (Fitbit) and a companion smartphone app (JITABug) which delivered goal setting, planning, reminders, and just-in-time adaptive messages to encourage achievement of personalised physical activity goals. Message delivery was tailored based on time of day, real-time physical activity tracker data, and weather conditions. We tested the feasibility of remotely delivering the JITAI with older adults in a 6-week trial using a mixed-methods approach. Data collection involved assessment of physical activity by accelerometry and activity tracker, self-reported mood and mental wellbeing via ecological momentary assessment, and contextual information on physical activity via voice memos. Feasibility and acceptability outcomes included: (1) recruitment capability and adherence to the intervention; (2) intervention delivery ‘in the wild’; (3) appropriateness of data collection methodology; (4) adverse events; and (5) participant satisfaction. 
RESULTS Of 46 recruited older adults (aged 56-72 years), 65% completed the intervention. The intervention was successfully delivered as intended: 27 participants completed the intervention independently, 94% of physical activity messages were successfully delivered, and 99% of Fitbit and 100% of weather data calls were successful. Wrist-worn accelerometer data were obtained from 96% at baseline and 96% at follow-up. On average, participants recorded 8/16 (50%) voice memos, 3/8 (38%) mood assessments, and 2/4 (50%) wellbeing assessments via the app. Overall acceptability of the intervention was very good (77% satisfaction). Participant feedback suggested that more diverse and tailored physical activity messages, app usage reminders, technical refinements regarding real-time data syncing, and an improved user interface could improve the intervention and make it more appealing. CONCLUSIONS This study suggests that a smartphone-delivered JITAI utilizing a wearable activity tracker is an acceptable way to support physical activity in older adults in the community. Overall, the intervention is feasible; however, based on user feedback, the JITABug app requires further technical refinements that may enhance usage, engagement, and user satisfaction before moving to effectiveness trials. CLINICALTRIAL Non-Applicable
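Message tailoring of the kind described (driven by time of day, real-time tracker data, and weather) amounts to a set of decision rules evaluated when a prompt might be sent. The sketch below is illustrative only: the actual JITABug rules are not published here, so the thresholds, quiet hours, and message texts are assumptions.

```python
def choose_prompt(step_count, daily_goal, hour, raining):
    """Pick a just-in-time activity prompt, or None to stay silent.

    step_count: steps so far today from the activity tracker
    daily_goal: the participant's personalised step goal
    hour: local hour of day (0-23); raining: current weather flag
    All thresholds and messages are illustrative, not JITABug's.
    """
    progress = step_count / daily_goal
    if hour < 8 or hour >= 21:
        return None                      # respect quiet hours: no prompt
    if progress >= 1.0:
        return "Goal reached - great work today!"
    if raining:
        return "Wet outside? Try a few minutes of indoor movement."
    if hour >= 17 and progress < 0.7:
        return "An evening walk would close today's activity gap."
    return "You're at {:.0%} of your goal - a short walk adds up.".format(progress)
```

The participant feedback about wanting "more diverse and tailored" messages maps directly onto widening this rule set and varying the message pool per rule.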

