locomotion mode
Recently Published Documents

TOTAL DOCUMENTS: 97 (last five years: 37)
H-INDEX: 17 (last five years: 2)

2021 · Vol 21 (23) · pp. 26311-26319
Author(s): Yifan Zhao, Jiaxin Wang, Yifei Zhang, Hejian Liu, Zi'ang Chen, ...

Sensors · 2021 · Vol 21 (22) · pp. 7473
Author(s): Binbin Su, Yi-Xing Liu, Elena M. Gutierrez-Farewik

People walk on different types of terrain daily; for instance, level-ground walking, ramp and stair ascent and descent, and stepping over obstacles are common activities in daily life. Movement patterns change as people move from one terrain to another. Predicting transitions between locomotion modes is important for developing assistive devices, such as exoskeletons, since the optimal assistive strategy may differ between locomotion modes. Locomotion mode transition prediction is often accompanied by gait-event detection, which provides important information about critical events during locomotion, such as foot contact (FC) and toe off (TO). In this study, we introduce a method that integrates locomotion mode prediction and gait-event identification into one machine learning framework comprising two multilayer perceptrons (MLPs). Input features to the framework were fused data from wearable sensors, specifically electromyography (EMG) sensors and inertial measurement units (IMUs). The first MLP successfully identified FC and TO; FC events were identified accurately, and only a small number of misclassifications occurred near TO events. A small time difference (2.5 ms and −5.3 ms for FC and TO, respectively) was found between predicted and true gait events. The second MLP correctly identified walking, ramp ascent, and ramp descent transitions with best aggregate accuracies of 96.3%, 90.1%, and 90.6%, respectively, with sufficient prediction time prior to the critical events. The models demonstrate high accuracy in predicting transitions between locomotion modes during the same side's mid- to late stance of the stride prior to the step into the new mode, using data from EMG and IMU sensors. Our results may help assistive devices achieve smooth and seamless transitions between locomotion modes for people with motor disorders.
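Both classifiers in the framework are standard feed-forward networks. A minimal sketch of one MLP forward pass follows; the weights, feature names, and class layout are hypothetical toy values, not the paper's trained parameters:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: ReLU hidden layer, softmax output."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    z = [sum(w * hi for w, hi in zip(row, h)) + b
         for row, b in zip(W2, b2)]
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Toy weights (hypothetical, not the paper's trained parameters):
# feature x = [emg_rms, shank_accel]; classes = [FC, TO, no-event]
W1 = [[2.0, -1.0], [-1.0, 2.0]]
b1 = [0.0, 0.0]
W2 = [[1.5, -1.0], [-1.0, 1.5], [-0.5, -0.5]]
b2 = [0.0, 0.0, 0.3]

probs = mlp_forward([1.0, 0.1], W1, b1, W2, b2)
event = ["FC", "TO", "none"][probs.index(max(probs))]
```

In the paper's pipeline, the first such network would run on each window of fused EMG/IMU features to flag gait events, and the second would predict the upcoming mode transition.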


2021
Author(s): Brokoslaw Laschowski, William McNally, Alexander Wong, John McPhee

Robotic leg prostheses and exoskeletons can provide powered locomotor assistance to older adults and/or persons with physical disabilities. However, the current locomotion mode recognition systems being developed for intelligent high-level control and decision-making use mechanical, inertial, and/or neuromuscular data, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). Inspired by the human vision-locomotor control system, we designed and evaluated an advanced environment classification system that uses computer vision and deep learning to forward predict the oncoming walking environments prior to physical interaction, thereby allowing for more accurate and robust locomotion mode transitions. In this study, we first reviewed the development of the ExoNet database – the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labelling architecture. We then trained and tested over a dozen state-of-the-art deep convolutional neural networks (CNNs) on the ExoNet database for large-scale image classification of the walking environments, including EfficientNetB0, InceptionV3, MobileNet, MobileNetV2, VGG16, VGG19, Xception, ResNet50, ResNet101, ResNet152, DenseNet121, DenseNet169, and DenseNet201. Lastly, we quantitatively compared the benchmarked CNN architectures and their environment classification predictions using an operational metric called NetScore, which balances the image classification accuracy with the computational and memory storage requirements (i.e., important for onboard real-time inference).
Although we designed this environment classification system to support the development of next-generation environment-adaptive locomotor control systems for robotic prostheses and exoskeletons, applications could extend to humanoids, autonomous legged robots, powered wheelchairs, and assistive devices for persons with visual impairments.
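The NetScore metric mentioned above trades accuracy off against parameter count and multiply-accumulate (MAC) operations. A sketch of the computation follows; the exponent values are the ones proposed in the original NetScore formulation, and the two model profiles below are illustrative numbers, not results from this paper:

```python
import math

def netscore(top1_acc_pct, params_millions, macs_millions,
             alpha=2.0, beta=0.5, gamma=0.5):
    """NetScore: Omega = 20 * log10(a^alpha / (p^beta * m^gamma)),
    where a is top-1 accuracy (%), p is the parameter count (millions),
    and m is the MAC count (millions). Higher is better: accuracy is
    rewarded, model size and compute are penalized."""
    return 20.0 * math.log10(
        top1_acc_pct ** alpha
        / (params_millions ** beta * macs_millions ** gamma))

# Hypothetical profiles for two candidate CNNs (illustrative only):
small_net = netscore(73.0, 5.3, 390.0)    # compact, mobile-sized model
large_net = netscore(76.0, 25.6, 4100.0)  # larger, more accurate model
```

Under this metric, a compact network can outrank a slightly more accurate but much heavier one, which is the point of using NetScore for onboard real-time inference.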


2021 · Vol 2021 · pp. 1-11
Author(s): Gang Du, Jinchen Zeng, Cheng Gong, Enhao Zheng

Recognizing locomotion modes is a crucial step in controlling lower-limb exoskeletons/orthoses. Our study proposes a fuzzy-logic-based locomotion mode/transition recognition approach that uses on-robot inertial sensors for a hip-joint exoskeleton (active pelvic orthosis). The method outputs a recognition decision at each extreme point of the hip joint angle, relying purely on the integrated inertial sensors. Compared with related studies, our approach enables calibration and recognition without additional sensors on the feet. We validated the method by measuring four locomotion modes and eight locomotion transitions on three able-bodied subjects wearing an active pelvic orthosis (APO). The average recognition accuracy was 92.46% for intra-subject cross-validation and 93.16% for inter-subject cross-validation. The average time delay during transitions was 1897.9 ms (28.95% of one gait cycle). These results are at the same level as those of related studies. On the other hand, the study is limited by its small subject sample size, and the results are preliminary. Future work will focus on more extensive evaluations in practical applications.
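Emitting a decision at each extreme point of the hip-joint angle can be sketched with a simple sign-change test on consecutive differences. The trace below is synthetic, and this is not the authors' implementation:

```python
def extreme_points(angles):
    """Indices of local extrema (peaks and valleys) in a hip-angle
    sequence; these are the decision points at which the recognizer
    would emit a locomotion-mode decision."""
    idx = []
    for i in range(1, len(angles) - 1):
        # A local extremum is where the slope changes sign.
        if (angles[i] - angles[i - 1]) * (angles[i + 1] - angles[i]) < 0:
            idx.append(i)
    return idx

# Synthetic hip-angle trace (degrees), one flexion/extension cycle:
trace = [0, 10, 20, 25, 20, 10, 0, -5, 0, 10]
decision_points = extreme_points(trace)
```

A real implementation would additionally smooth the signal and debounce near-flat regions before declaring an extremum.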


Robotica · 2021 · pp. 1-22
Author(s): Kenji Nagaoka, Toshiyasu Kaneko, Kazuya Yoshida

Abstract: This paper presents bimodal mobility actuated by inertial forces with elastic bodies for an exploration robot in a microgravity environment. The proposed bimodal locomotion mechanism can selectively achieve a vibration-propulsion mode or a rotational-hopping mode based on the centrifugal force and reaction torque exerted by the control of a single eccentric motor, where rotational hopping is the primary locomotion mode for practical applications. The bimodal mobility performance under microgravity is experimentally examined using an air-floating testbed. Furthermore, we present theoretical modeling of the bimodal mobility system and verify the model by comparison with the experiments.
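The two actuation effects of the single eccentric motor can be illustrated with their textbook expressions: the spinning eccentric mass produces a centrifugal force, while angular acceleration of the rotor produces a reaction torque on the body. The parameter values below are hypothetical, not taken from the paper:

```python
import math

def eccentric_motor_effects(m_ecc, r, omega, I_rotor, alpha):
    """Centrifugal force F = m * r * omega^2 (drives the vibration-
    propulsion mode) and reaction torque tau = -I * alpha (can pitch
    the body into a rotational hop)."""
    F = m_ecc * r * omega ** 2
    tau = -I_rotor * alpha
    return F, tau

# Hypothetical values: 10 g eccentric mass at 2 cm radius, 50 Hz spin,
# rotor inertia 1e-4 kg*m^2, angular acceleration 200 rad/s^2.
F, tau = eccentric_motor_effects(0.01, 0.02, 2 * math.pi * 50, 1e-4, 200.0)
```

Even these rough numbers show why a single motor suffices: a small eccentric mass at modest speed already generates a propulsive force far exceeding the robot's weight in microgravity.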


Sensors · 2021 · Vol 21 (8) · pp. 2785
Author(s): Dongbin Shin, Seungchan Lee, Seunghoon Hwang

As life expectancy increases, the number of elderly people grows. Because muscle strength decreases with aging, it is easy to feel tired while walking, which is an activity of daily living (ADL), or to suffer a fall. To compensate for these walking problems, the terrain environment must be considered; in this study, we developed a locomotion mode recognition (LMR) algorithm based on a Gaussian mixture model (GMM) using inertial measurement unit (IMU) sensors to classify five terrains (level walking, stair ascent/descent, and ramp ascent/descent). To meet the walking conditions of elderly people, walking speed indices for ages 20 to 89 were used, and a beats-per-minute (BPM) method was adopted to cover the speed range of each age group. The experiment was conducted with healthy subjects walking to the BPM rhythm, and, with a view to later applying the algorithm to an exoskeleton robot, full-dependent and individual-dependent models were defined by the choice of data collection method. Taking the full-dependent model as representative, the accuracies for classifying the stair terrains and the level-walking/ramp terrains were 98.74% and 95.78% at BPM 90, 99.33% and 95.75% at BPM 110, and 98.39% and 87.54% at BPM 130, respectively. The computation times were 14.5, 21.1, and 14 ms for BPM 90, 110, and 130, respectively. An LMR algorithm with high classification accuracy across walking speeds has thus been developed. In the future, the LMR algorithm will be applied to an actual hip exoskeleton robot and combined with a gait-phase estimation algorithm that estimates the user's gait intention. Additionally, we will check whether the combined algorithm properly supports muscle strength when a user wearing the hip exoskeleton robot walks.
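A GMM-based classifier of this kind assigns a terrain by maximum likelihood. The sketch below collapses that idea to a single Gaussian per class over one scalar feature; the per-terrain means and variances are hypothetical, not fitted values from the paper, which fits full mixtures to multi-dimensional IMU features:

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify(feature, class_params):
    """Pick the terrain whose Gaussian model assigns the feature the
    highest log-likelihood."""
    return max(class_params,
               key=lambda c: gaussian_logpdf(feature, *class_params[c]))

# Hypothetical per-terrain (mean, variance) of a thigh-pitch-range
# feature in degrees (illustrative only):
class_params = {
    "level walking": (20.0, 9.0),
    "stair ascent":  (45.0, 16.0),
    "ramp descent":  (30.0, 9.0),
}
terrain = classify(43.0, class_params)
```

With equal class priors, maximizing the class-conditional likelihood is equivalent to the Bayes-optimal decision, which is what the GMM recognizer approximates per gait cycle.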


2021
Author(s): Brokoslaw Laschowski, William McNally, Alexander Wong, John McPhee

Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., high-level controllers), we designed an environment recognition system using computer vision and deep learning. We collected over 5.6 million images of indoor and outdoor real-world walking environments using a wearable camera system, of which ~923,000 images were annotated using a 12-class hierarchical labelling architecture (called the ExoNet database). We then trained and tested the EfficientNetB0 convolutional neural network, designed for efficiency using neural architecture search, to predict the different walking environments. Our environment recognition system achieved ~73% image classification accuracy. While these preliminary results benchmark EfficientNetB0 on the ExoNet database, further research is needed to compare different image classification algorithms to develop an accurate and real-time environment-adaptive locomotion mode recognition system for robotic exoskeleton control.
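The 12-class hierarchical labelling architecture suggests that predictions can be scored at more than one level of the hierarchy. The sketch below uses a hypothetical "parent/child" string encoding and a 0.5 partial-credit rule; neither is ExoNet's actual label format or the paper's evaluation protocol:

```python
def hierarchical_accuracy(preds, trues, exact_only=False):
    """Accuracy over hierarchical labels encoded as 'parent/child'
    strings. With exact_only=False, a prediction matching only the
    parent class earns partial credit of 0.5."""
    score = 0.0
    for p, t in zip(preds, trues):
        if p == t:
            score += 1.0
        elif not exact_only and p.split("/")[0] == t.split("/")[0]:
            score += 0.5
    return score / len(trues)

# Hypothetical label strings (not ExoNet's real class names):
preds = ["wall/door", "stairs/incline", "level/grass"]
trues = ["wall/door", "stairs/decline", "level/brick"]
acc = hierarchical_accuracy(preds, trues)
```

Scoring this way separates coarse environment confusion (e.g., stairs vs. level ground), which matters most for exoskeleton control, from fine-grained sub-class confusion.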


PeerJ · 2021 · Vol 9 · pp. e10950
Author(s): Sean A. Rands

Eye blinking is an essential maintenance behaviour for many terrestrial animals, but it is also a risky behaviour, as the animal is unable to scan the environment and detect hazards while its eyes are temporarily closed. It is therefore likely that the length of time the eyes are closed and the length of the gap between blinks may reflect aspects of a species' ecology, such as its social or physical environment. An earlier published study linked blinking behaviour and ecology, detailing a dataset of the blinking behaviour of a large number of primate species collected from captive animals, but the analysis presented did not control for the non-independence of the data due to common evolutionary history. In the present study, the dataset is reanalysed using phylogenetic comparative methods, after reconsideration of the parameters describing the physical and social environments of the species. I find that blink rate is best described by the locomotion mode of a species: species moving through arboreal environments blink least, ground-living species blink most, and species that use both environments show intermediate rates. The duration of a blink was also related to locomotion mode, and positively correlated with both mean species group size and mean species body mass, although the increase in relation to group size is small. How a species moves through the environment therefore appears to be important in determining blinking behaviour, and suggests that complex arboreal environments may require less interruption to visual attention. Given that the data were collected from captive individuals, caution is recommended when interpreting the correlations found.


Sensors · 2021 · Vol 21 (4) · pp. 1264
Author(s): Freddie Sherratt, Andrew Plummer, Pejman Iravani

Human Locomotion Mode Recognition (LMR) has the potential to be used as a control mechanism for lower-limb active prostheses. Active prostheses can assist and restore a more natural gait for amputees, but as medical devices they must minimize user risks, such as falls and trips. As such, any control system must have high accuracy and robustness, with a detailed understanding of its internal operation. Long Short-Term Memory (LSTM) machine-learning networks can perform LMR with high accuracy. However, their internal behavior during classification is unknown, and they struggle to generalize when presented with novel users. The target problem addressed in this paper is understanding the LSTM classification behavior for LMR. A dataset of six locomotive activities (walking, stopped, and stair/ramp ascent and descent) from 22 non-amputee subjects was collected, capturing both steady-state activity and transitions between activities in natural environments. Non-amputees were used as a substitute for amputees to provide a larger dataset. The dataset was used to analyze the internal behavior of a reduced-complexity LSTM network. This analysis identified that the model primarily classifies activity type based on data around early stance. Evaluation of generalization to unseen subjects revealed low sensitivity to hyper-parameters and over-fitting to individuals' gait traits. Investigating the differences between individual subjects showed that gait variations between users occur primarily in early stance, potentially explaining the poor generalization. Adjusting hyper-parameters alone could not solve this, demonstrating the need for individual personalization of models. The main achievements of the paper are (i) a better understanding of LSTM behavior for LMR, (ii) demonstration of its low sensitivity to learning hyper-parameters when evaluating novel-user generalization, and (iii) demonstration of the need for personalization of ML models to achieve acceptable accuracy.
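Evaluating generalization to unseen subjects, as described above, corresponds to leave-one-subject-out cross-validation. A minimal splitter (toy data; not the authors' code) is:

```python
def loso_splits(samples):
    """Leave-one-subject-out splits: each fold holds out every stride
    window from one subject, so test accuracy reflects generalization
    to a novel user rather than memorized per-subject gait traits."""
    subjects = sorted({s for s, _ in samples})
    for held_out in subjects:
        train = [x for s, x in samples if s != held_out]
        test = [x for s, x in samples if s == held_out]
        yield held_out, train, test

# Toy dataset of (subject_id, stride_window) pairs (hypothetical):
data = [("s1", "w1"), ("s1", "w2"), ("s2", "w3"), ("s3", "w4")]
folds = list(loso_splits(data))
```

Grouping by subject rather than shuffling strides randomly is what exposes the over-fitting to individual gait traits that the paper reports.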


2021 · Vol 39 (1) · pp. 59-80
Author(s): James F. M. Cornwell, Olivia Mandelbaum, Allison Turza Bajger, Raymond D. Crookes, David H. Krantz, ...

Moral psychology is used to explore the interaction between regulatory mode (locomotion; assessment) and diurnal preference (“early birds”; “night owls”). Moral and immoral behavior was partly explained by an interaction between regulatory mode and the time of day the task took place. In Studies 1a and 1b, we established a relation between self-reported diurnal preference and regulatory mode using both a chronic measure and an induction: stronger locomotion preferring an earlier time of day; stronger assessment preferring a later time of day. In Study 2, we show that those with a locomotion predominance were less likely to invest in a public good later in the day compared to those with an assessment predominance. Lastly, in Study 3, those induced into an assessment mode were more likely to cheat when randomly assigned to complete a task in the morning compared to those induced into a locomotion mode.

