Assessing the Effect of Pedestrians’ Use of Cell Phones on Their Walking Behavior: A Study Based on Automated Video Analysis

Author(s):  
Rushdi Alsaleh ◽  
Tarek Sayed ◽  
Mohamed H. Zaki

The objective of this study is to assess the effect of cell phone use on pedestrians' walking behavior at urban crosswalks. The methodology builds on recent findings in health science concerning the relationship between the spatiotemporal characteristics of gait and the cognitive abilities of pedestrians. Gait measures are shown to be affected by the complexity of the task (e.g., talking or texting) performed while walking. The study focuses on the effect of distraction states, distraction types (visual, such as texting/reading, and auditory, such as talking/listening), and pedestrian-vehicle interactions on the gait parameters of pedestrians at crosswalks. Experiments are performed on a video data set collected near a college campus in the city of Kamloops, British Columbia. The analysis relies on automated video-based data collection using computer vision techniques. The benefits of such an automated system include the ability to capture the natural movement of pedestrians and to minimize the risk of disturbing their behavior. Results show that pedestrians distracted visually (texting/reading) or auditorily (talking/listening) while walking tend to reduce and control their walking speed by adjusting their step length or step frequency, respectively. Visually distracted pedestrians take significantly shorter steps and are less stable while walking. Distracted pedestrians interacting with approaching vehicles tend to reduce and control their walking speeds by adjusting their step frequencies. This research can find applications in pedestrian facility design, the modeling and calibration of pedestrian simulations, and pedestrian safety intervention programs and legislative actions.
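As a rough illustration of the kind of gait parameters such an automated system extracts from tracked trajectories, the sketch below (hypothetical function and variable names, not the authors' implementation) estimates mean walking speed from position samples and derives step length from a separately detected step frequency, using the identity speed = step frequency × step length.

```python
import numpy as np

def gait_parameters(times, xs, ys, step_frequency_hz):
    """Estimate mean walking speed and step length from a pedestrian trajectory.

    times, xs, ys: trajectory samples (seconds, metres, metres).
    step_frequency_hz is assumed to come from a separate stride-detection stage.
    """
    # Path length travelled between consecutive samples.
    dist = np.hypot(np.diff(xs), np.diff(ys))
    speed = dist.sum() / (times[-1] - times[0])   # mean walking speed (m/s)
    step_length = speed / step_frequency_hz       # speed = frequency * length
    return speed, step_length

# Usage: a pedestrian covering 2 m in 2 s at 2 steps per second.
speed, step_length = gait_parameters([0.0, 1.0, 2.0], [0.0, 1.0, 2.0],
                                     [0.0, 0.0, 0.0], 2.0)
```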

2019 ◽  
Vol 3 (s1) ◽  
pp. 26-26
Author(s):  
Jacqueline E. Westerdahl ◽  
Victoria Moerchen

OBJECTIVES/SPECIFIC AIMS: This research examined 3 aims to address the need to understand and quantify exertion in infants. Aim 1: Develop a schema to identify and code exertional behaviors in infants during treadmill stepping. Aim 2: Establish feasibility for the schema's use with clinical populations. Aim 3: Pilot the schema in a study designed to induce infant exertion. METHODS/STUDY POPULATION: Aims 1 and 2 were achieved using existing treadmill stepping data. The data used in Aim 1 included eight typically developing infants (age 7-10 months) who were able to sit independently, but not walk. The data used in Aim 2 came from two separate data sets from infants who took more than 10 steps in a 30-second trial: Data set A included six typically developing infants (age 2-5 months) who were unable to sit independently (developmentally comparable to atypical populations who might receive treadmill interventions). Data set B included six infants with Spina Bifida (age 3-10 months). Aim 3 was addressed with a prospective study using an exertion model. Pre-walking, typically developing infants (age 8-10 months) completed five stepping trials. Trial 1 determined the infant's individualized maximum stepping speed; trials 2-5 were each 60 seconds and alternated between a baseline stepping speed of 0.20 m/s and the infant's maximum stepping speed determined in trial 1. All video data were coded for step type, step frequency, and exertional behavior. RESULTS/ANTICIPATED RESULTS: Aim 1: Two behaviors were identified and determined to capture infant exertion: foot dragging and leg crossing. Aim 2: The feasibility of capturing exertion with these two behaviors was established for young infants and infants with neuromotor delays, with exertional behaviors increasing with stepping exposure (p < 0.05). Aim 3: Total exertion (foot dragging + leg crossing) was higher in the maximum speed trials compared to baseline trials (p = 0.005).
DISCUSSION/SIGNIFICANCE OF IMPACT: Exertion in infants can be quantified. The exertion schema developed in this study will support the development of dosing guidelines for infant treadmill intervention. The next step in this line of research is to examine the correlation between infant exertion and heart rate, in an effort to move from behaviorally informed protocols to more precise, individualized protocols based on the physiological response of the infant.


Cancers ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2576
Author(s):  
Vincent Chin-Hung Chen ◽  
Chin-Kuo Lin ◽  
Han-Pin Hsiao ◽  
Bor-Show Tzang ◽  
Yen-Hsuan Hsu ◽  
...  

Background: We aimed to investigate the associations of breast cancer (BC) and cancer-related chemotherapies with cytokine levels and cognitive function. Methods: We evaluated subjective and objective cognitive function in BC patients before chemotherapy and 3-9 months after the completion of chemotherapy. Healthy volunteers without cancer were included as a control group. Interleukins (IL) 2, 4, 5, 6, 10, 12p70, 13, 17A, and 1β, IFNγ, and TNFα were measured. Associations of cancer status, chemotherapy, and cytokine levels with subjective and objective cognitive impairments were analyzed using a regression model, adjusting for covariates including IQ and psychological distress. Results: After adjustment, poorer performance in semantic verbal fluency was found in the post-chemotherapy subgroup compared to controls (p = 0.011, η2 = 0.070), whereas pre-chemotherapy patients scored higher in subjective cognitive perception. Higher IL-13 was associated with lower semantic verbal fluency in the post-chemotherapy subgroup. Higher IL-10 was associated with better perceived cognitive abilities in the pre-chemotherapy and control groups, while IL-5 and IL-13 were associated with lower perceived cognitive abilities in the pre-chemotherapy and control groups. Mediation analysis further suggested that the effect of cancer status on verbal fluency might be mediated by anxiety. Conclusions: Verbal fluency might be affected by cancer status, with the effect mediated by anxiety. Different cytokines and their interactions may play different roles in neuroinflammation or neuroprotection that need further research.


Author(s):  
Manfred Ehresmann ◽  
Georg Herdrich ◽  
Stefanos Fasoulas

In this paper, a generic full-system estimation software tool is introduced and applied to a data set of actual flight missions to derive a heuristic for system composition in terms of the mass and power ratios of the considered sub-systems. The capability of evolutionary algorithms to analyse and effectively design spacecraft (sub-)systems is shown. After deriving top-level estimates for each spacecraft sub-system based on heuristic heritage data, a detailed component-based system analysis follows. Various degrees of freedom exist for a hardware-based sub-system design; these are resolved via an evolutionary algorithm to determine an optimal system configuration. A propulsion system implementation for a small satellite test case serves as a reference example of the implemented algorithm's application. The propulsion system includes thruster, power processing unit, tank, propellant, and general power supply system masses and power consumptions. Relevant performance parameters such as desired thrust, effective exhaust velocity, utilised propellant, and the propulsion type are considered as degrees of freedom. An evolutionary algorithm is applied to the propulsion system scaling model to demonstrate that such algorithms are capable of handling complex multidimensional design optimisation problems. An evolutionary algorithm uses a heuristic to vary input parameters and a defined selection criterion (e.g., the mass fraction of the system) on an optimisation function to refine solutions successively. With sufficient generations and, thereby, iterations of design points, local optima are determined. Using mitigation methods and a sufficient number of seed points, a globally optimal system configuration can be found.
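The mutate-select-iterate loop the abstract describes can be sketched in a few lines. The example below is a generic illustration, not the authors' tool: the objective function, bounds, and parameter names are hypothetical stand-ins for a system model such as mass fraction over design parameters.

```python
import random

def evolve(objective, bounds, pop_size=30, generations=200, seed=0):
    """Minimal evolutionary loop: mutate candidates, keep the fittest.

    objective: maps a parameter list to a scalar to minimise
    (e.g. system mass fraction); bounds: list of (lo, hi) per parameter.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                    # rank by the selection criterion
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        for parent in survivors:
            # Gaussian mutation, clipped back into the design bounds.
            child = [min(hi, max(lo, g + rng.gauss(0, 0.1 * (hi - lo))))
                     for g, (lo, hi) in zip(parent, bounds)]
            children.append(child)
        pop = survivors + children
    return min(pop, key=objective)

# Toy usage: recover the minimum of a quadratic "mass model" at (2, -1).
best = evolve(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2,
              bounds=[(-5, 5), (-5, 5)])
```

Elitism (carrying survivors forward unmutated) guarantees the best design never degrades between generations; restarting from multiple seeds is the simplest mitigation against the local optima the abstract mentions.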


2021 ◽  
Vol 14 ◽  
pp. 263177452199062
Author(s):  
Benjamin Gutierrez Becker ◽  
Filippo Arcadu ◽  
Andreas Thalhammer ◽  
Citlalli Gamez Serna ◽  
Owen Feehan ◽  
...  

Introduction: The Mayo Clinic Endoscopic Subscore is a commonly used grading system to assess the severity of ulcerative colitis. Correctly grading colonoscopies using the Mayo Clinic Endoscopic Subscore is a challenging task, with suboptimal rates of interrater and intrarater variability observed even among experienced and sufficiently trained experts. In recent years, several machine learning algorithms have been proposed in an effort to improve the standardization and reproducibility of Mayo Clinic Endoscopic Subscore grading. Methods: Here we propose an end-to-end, fully automated system based on deep learning to predict a binary version of the Mayo Clinic Endoscopic Subscore directly from raw colonoscopy videos. Unlike previous studies, the proposed method mimics the assessment done in practice by a gastroenterologist: traversing the whole colonoscopy video, identifying visually informative regions, and computing an overall Mayo Clinic Endoscopic Subscore. The proposed deep learning–based system has been trained and deployed on raw colonoscopies using Mayo Clinic Endoscopic Subscore ground truth provided only at the colon section level, without manually selecting frames driving the severity scoring of ulcerative colitis. Results and Conclusion: Our evaluation on 1672 endoscopic videos from a multisite data set drawn from the etrolizumab Phase II Eucalyptus and Phase III Hickory and Laurel clinical trials shows that our proposed methodology can grade endoscopic videos with a high degree of accuracy and robustness (Area Under the Receiver Operating Characteristic Curve = 0.84 for Mayo Clinic Endoscopic Subscore ⩾ 1, 0.85 for Mayo Clinic Endoscopic Subscore ⩾ 2, and 0.85 for Mayo Clinic Endoscopic Subscore ⩾ 3) and reduced amounts of manual annotation.
Plain language summary: Artificial intelligence can be used to automatically assess full endoscopic videos and estimate the severity of ulcerative colitis. In this work, we present an artificial intelligence algorithm for the automatic grading of ulcerative colitis in full endoscopic videos. Our artificial intelligence models were trained and evaluated on a large and diverse set of colonoscopy videos obtained from concluded clinical trials. We demonstrate not only that artificial intelligence is able to accurately grade full endoscopic videos, but also that using diverse data sets obtained from multiple sites is critical to train robust AI models that could potentially be deployed on real-world data.
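The traverse-filter-aggregate idea can be caricatured in a few lines. The sketch below uses hypothetical names and is not the authors' trained pipeline (which learns this end to end from section-level labels): it keeps only frames whose visual-informativeness score clears a threshold, then grades the video by its worst retained frame, the way a gastroenterologist grades by the most affected region.

```python
def video_grade(frame_scores, quality_scores, q_thresh=0.5):
    """Aggregate frame-level severity into one video-level score.

    frame_scores: per-frame severity estimates (e.g. from a CNN).
    quality_scores: per-frame visual-informativeness estimates.
    Returns the worst severity among informative frames, or None if
    no frame is informative enough to grade.
    """
    informative = [s for s, q in zip(frame_scores, quality_scores)
                   if q >= q_thresh]
    return max(informative) if informative else None

# Usage: the high-severity frame (0.9) is discarded as uninformative
# (quality 0.2), so the video is graded from the remaining frames.
grade = video_grade([0.1, 0.9, 0.4], [0.9, 0.2, 0.8])
```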


2021 ◽  
pp. 154596832110010
Author(s):  
Margaret A. French ◽  
Matthew L. Cohen ◽  
Ryan T. Pohlig ◽  
Darcy S. Reisman

Background: There is significant variability in poststroke locomotor learning that is poorly understood and affects individual responses to rehabilitation interventions. Cognitive abilities relate to upper extremity motor learning in neurologically intact adults, but have not been studied in poststroke locomotor learning. Objective: To understand the relationship between locomotor learning and retention and cognition after stroke. Methods: Participants with chronic (>6 months) stroke participated in 3 testing sessions. During the first session, participants walked on a treadmill and learned a new walking pattern through visual feedback about their step length. During the second session, participants walked on a treadmill and 24-hour retention was assessed. Physical and cognitive tests, including the Fugl-Meyer Lower Extremity (FM-LE) assessment, the Fluid Cognition Composite Score (FCCS) from the NIH Toolbox Cognition Battery, and Spatial Addition from the Wechsler Memory Scale-IV, were completed in the third session. Two sequential regression models were completed: one with learning and one with retention as the dependent variable. Age, physical impairment (i.e., FM-LE), and cognitive measures (i.e., FCCS and Spatial Addition) were the independent variables. Results: Forty-nine and 34 participants were included in the learning and retention models, respectively. After accounting for age and FM-LE, cognitive measures explained a significant portion of variability in learning (ΔR2 = 0.17, P = .008; overall model R2 = 0.31, P = .002) and retention (ΔR2 = 0.17, P = .023; overall model R2 = 0.44, P = .002). Conclusions: Cognitive abilities appear to be an important factor for understanding locomotor learning and retention after stroke. This has significant implications for incorporating locomotor learning principles into the development of personalized rehabilitation interventions after stroke.
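The sequential-regression logic (how much extra variance the cognitive measures explain once age and physical impairment are already in the model) reduces to comparing the R2 of nested ordinary least squares fits. A minimal sketch, with synthetic data standing in for the study's variables:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def delta_r2(base_X, added_X, y):
    """Variance explained by adding predictors to a base model (ΔR^2 >= 0)."""
    full_X = np.column_stack([base_X, added_X])
    return r_squared(full_X, y) - r_squared(base_X, y)

# Synthetic illustration: outcome driven by the "added" predictor only,
# so ΔR^2 should be large while the base model explains little.
rng = np.random.default_rng(0)
base = rng.normal(size=(50, 2))    # stand-ins for age and FM-LE
added = rng.normal(size=(50, 1))   # stand-in for a cognitive measure
y = added[:, 0] + 0.1 * rng.normal(size=50)
d = delta_r2(base, added, y)
```

Because the models are nested, ΔR2 is never negative; the significance of the increment is what the study's P values report.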


2003 ◽  
Vol 47 (10) ◽  
pp. 175-181 ◽  
Author(s):  
G. Buitrón ◽  
M.-E. Schoeb ◽  
J. Moreno

The operation of a sequencing batch bioreactor is evaluated when high concentration peaks of a toxic compound (4-chlorophenol, 4CP) are introduced into the reactor. A control strategy based on the dissolved oxygen concentration, measured on line, is utilized. To detect the end of the reaction period, the automated system searches for the moment when the dissolved oxygen has passed through a minimum, caused by the metabolic activity of the microorganisms, and immediately afterwards through a maximum due to the saturation of the water (similar to the self-cycling fermentation, SCF, strategy). The dissolved oxygen signal was sent to a personal computer for data acquisition and control using MATLAB and the SIMULINK package. The system operating under the automated strategy remained stable when the acclimated microorganisms (acclimated to an initial concentration of 350 mg 4CP/L) were exposed to occasional concentration peaks of 600 mg 4CP/L. Peaks of 1,050 mg 4CP/L or more disturbed the system only over the short to medium term (about one month). A peak of 1,400 mg/L shut down the metabolic activity of the microorganisms and led to reactor failure. Biomass acclimated with the SCF strategy can partially withstand variations in the toxic influent because, at the moment the influent becomes inhibitory, the system fails.
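The described end-of-reaction detection can be sketched as follows (illustrative threshold and names, not the authors' MATLAB/SIMULINK implementation): track the running minimum of the dissolved-oxygen trace and flag the sample where DO has rebounded by a set margin toward saturation.

```python
def reaction_endpoint(do_series, rise_eps=0.2):
    """Flag the end of the reaction phase from a dissolved-oxygen trace.

    Looks for the pattern the control strategy describes: DO falls to a
    minimum while the microorganisms degrade the substrate, then climbs
    back toward saturation. Returns the index of the first sample where
    DO has risen at least `rise_eps` mg/L above the running minimum,
    or None if no rebound is seen.
    """
    running_min = float("inf")
    for i, do in enumerate(do_series):
        running_min = min(running_min, do)
        if do - running_min >= rise_eps:
            return i
    return None

# Usage: DO (mg/L) drops during degradation, then rebounds at index 5.
end = reaction_endpoint([8.0, 6.5, 4.0, 2.1, 2.0, 2.3, 3.5, 6.0])
```

In practice the on-line signal would be smoothed first so sensor noise does not trigger the rebound threshold prematurely.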


Author(s):  
Guixiu Qiao ◽  
Brian A. Weiss

Over time, robots degrade because of age and wear, leading to decreased reliability and increasing potential for faults and failures; this negatively impacts robot availability. Economic factors motivate facilities and factories to improve maintenance operations to monitor robot degradation and detect faults and failures, especially to eliminate unexpected shutdowns. Since robot systems are complex, with many sub-systems and components, it is challenging to determine these constituent elements' specific influence on overall system performance. The development of monitoring, diagnostic, and prognostic technologies, collectively known as Prognostics and Health Management (PHM), can aid manufacturers in maintaining the performance of robot systems by providing intelligence to enhance maintenance and control strategies. This paper presents a strategy of integrating top-level and component-level PHM to detect robot performance degradation (including robot tool center accuracy degradation), supported by the development of a four-layer sensing and analysis structure. The top-level PHM can quickly detect robot tool center accuracy degradation through advanced sensing and test methods developed at the National Institute of Standards and Technology (NIST). The component-level PHM supports deep data analysis for root cause diagnostics and prognostics. A reference data set is collected and analyzed using the integration of top-level and component-level PHM to understand the influence of temperature, speed, and payload on the robot's accuracy degradation.


2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Janet Myhre ◽  
Daniel R. Jeske ◽  
Michael Rennie ◽  
Yingtao Bi

A heteroscedastic linear regression model is developed from plausible assumptions that describe the time evolution of performance metrics for equipment. The motivation the model inherits from these assumptions for the associated weighted least squares analysis is an essential and attractive selling point for engineers interested in equipment surveillance methodologies. A simple test for the significance of the heteroscedasticity suggested by a data set is derived, and a simulation study is used to evaluate the power of the test and compare it with several other applicable tests that were designed under different contexts. Tolerance intervals within the context of the model are derived, generalizing well-known tolerance intervals for ordinary least squares regression. Use of the model and its associated analyses is illustrated with an aerospace application in which hundreds of electronic components are continuously monitored by an automated system that flags components suspected of unusual degradation patterns.
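Weighted least squares downweights the noisier observations so that the fit is driven by the better-measured ones. A minimal sketch (illustrative names; it assumes the weights are given, whereas the paper derives them from its variance model):

```python
import numpy as np

def wls_fit(x, y, weights):
    """Weighted least squares line fit.

    Minimises sum_i w_i * (y_i - a - b * x_i)^2 via the normal
    equations (X^T W X) beta = X^T W y. Under heteroscedasticity,
    w_i = 1 / Var(y_i) gives the efficient estimator.
    """
    W = np.diag(weights)
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta  # (intercept, slope)

# Usage: on noise-free data the weights are immaterial and the
# fit recovers the line y = 1 + 2x exactly.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x
beta = wls_fit(x, y, np.ones_like(x))
```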

