Wearable Assistive Robotics: A Perspective on Current Challenges and Future Trends

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6751
Author(s):  
Uriel Martinez-Hernandez ◽  
Benjamin Metcalfe ◽  
Tareq Assaf ◽  
Leen Jabban ◽  
James Male ◽  
...  

Wearable assistive robotics is an emerging technology with the potential to assist humans with sensorimotor impairments to perform daily activities. This assistance enables individuals to be physically and socially active, perform activities independently, and recover quality of life. These benefits to society have motivated the study of several robotic approaches, developing systems ranging from rigid to soft robots with single and multimodal sensing, heuristic and machine learning methods, and from manual to autonomous control for assistance of the upper and lower limbs. Because this type of wearable robotic technology is in direct contact and interaction with the body, it must comply with a variety of requirements to make the system and its assistance efficient, safe, and usable by the individual on a daily basis. This paper presents a brief review of the progress achieved in recent years and the current challenges and trends for the design and deployment of wearable assistive robotics, including clinical and user needs, material and sensing technology, machine learning methods for perception and control, adaptability and acceptability, datasets and standards, and translation from the lab to the real world.

Author(s):  
Paul van Gent ◽  
Timo Melman ◽  
Haneen Farah ◽  
Nicole van Nes ◽  
Bart van Arem

The present study aims to add to the literature on driver workload prediction using machine learning methods. The main aim is to develop workload prediction on a multi-level basis, rather than the binary high/low distinction often found in the literature. The presented approach relies on measures that can be obtained unobtrusively in the driving environment with off-the-shelf sensors, and on machine learning methods that can be implemented in low-power embedded systems. Two simulator studies were performed: one inducing workload using realistic driving conditions, and one inducing workload with a relatively demanding lane-keeping task. Individual and group-based machine learning models were trained on both datasets and evaluated. For the group-based models, the generalizing capability, that is, the performance when predicting data from previously unseen individuals, was also assessed. Results show that multi-level workload prediction works well at both the individual and group level, achieving high correct rates and accuracy scores. Generalizing between individuals proved difficult under realistic driving conditions but worked well in the highly demanding lane-keeping task. Reasons for this discrepancy are discussed, as well as future research directions.
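The between-subject generalization described above is typically measured by testing on individuals held out of training. A minimal sketch of that protocol, using synthetic stand-in data (the feature columns, labels, and classifier here are invented, not the study's actual sensors or models):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical feature matrix: one row per time window; columns stand in
# for unobtrusive measures (e.g., heart rate, steering activity).
rng = np.random.default_rng(0)
n_subjects, windows_per_subject = 6, 50
X = rng.normal(size=(n_subjects * windows_per_subject, 4))
# Multi-level workload label (low / medium / high), not a binary split.
y = rng.integers(0, 3, size=len(X))
groups = np.repeat(np.arange(n_subjects), windows_per_subject)

# Leave-one-subject-out: each fold tests on a previously unseen individual,
# which is how group-model generalization is usually assessed.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())
```

Since the labels here are random noise, the point is the evaluation protocol (one fold per held-out subject), not the accuracy it reports.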


2020 ◽  
pp. 030573562092842
Author(s):  
Liang Xu ◽  
Xin Wen ◽  
Jiaming Shi ◽  
Shutong Li ◽  
Yuhan Xiao ◽  
...  

Music emotion information is widely used in music information retrieval, music recommendation, music therapy, and so forth. In the field of music emotion recognition (MER), computer scientists extract musical features to identify musical emotions, but this method ignores listeners' individual differences. Applying machine learning methods, this study modeled the relations among audio features, individual factors, and music emotions. We used audio features and individual features as inputs to predict the perceived emotion and felt emotion of music, respectively. The results show that real-time individual features (e.g., preference for the target music and mechanism indices) can significantly improve the model's performance, whereas stable individual features (e.g., sex, music experience, and personality) have no effect. Compared with the recognition models for perceived emotions, individual features have greater effects on the recognition models for felt emotions.
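The core comparison in this abstract, whether adding individual features to audio features improves emotion prediction, can be sketched on toy data. Everything below is a synthetic stand-in (invented feature columns and an invented target), not the study's dataset or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
audio = rng.normal(size=(n, 5))        # stand-ins for tempo, energy, etc.
individual = rng.normal(size=(n, 2))   # stand-ins for e.g. music preference
# Toy "felt emotion" rating that depends on both feature groups.
y = audio[:, 0] + 0.8 * individual[:, 0] + rng.normal(scale=0.3, size=n)

model = RandomForestRegressor(n_estimators=50, random_state=0)
r2_audio = cross_val_score(model, audio, y, cv=5).mean()
r2_both = cross_val_score(model, np.hstack([audio, individual]), y, cv=5).mean()
print(r2_audio, r2_both)
```

Because the toy target genuinely depends on an individual feature, the combined model scores higher, mirroring the paper's finding for real-time individual features.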


2019 ◽  
Vol 9 (4) ◽  
pp. 4554-4560 ◽  
Author(s):  
Y. L. Ng ◽  
X. Jiang ◽  
Y. Zhang ◽  
S. B. Shin ◽  
R. Ning

Exoskeletons are wearable devices for enhancing human physical performance and for studying actions and movements. They are worn on the body for additional power and load-carrying capacity. Exoskeletons can be controlled using signals from the muscles. In recent years, gait analysis has attracted increasing attention from fields such as animation, athletic performance analysis, and robotics. Gait patterns are unique, and each individual has his or her own distinct gait pattern characteristics. Gait analysis can also monitor activity in sensitive areas. This paper uses various machine learning algorithms to predict the activity of subjects wearing exoskeletons. Here, localization data from the UCI Machine Learning Repository are used to recognize activities from gait positions. The study also compares five machine learning methods and examines their efficiency and accuracy in activity prediction for three different subjects. The efficiency and accuracy results for the various machine learning methods are discussed.
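A comparison of several classifiers on position-style features, as this abstract describes, typically looks like the sketch below. The five models and the data are assumptions of ours (the abstract does not name the methods), and the features are synthetic stand-ins for the repository's localization readings:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for tag-position features (e.g., x, y, z per sensor).
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy two-activity label

# Five hypothetical candidate methods, each scored by 5-fold CV accuracy.
models = {
    "logreg": LogisticRegression(),
    "knn": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "nb": GaussianNB(),
}
results = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, acc in results.items():
    print(name, round(acc, 3))
```

On this linearly separable toy label, the linear model does well; on real gait data the ranking would of course differ.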


2021 ◽  
Author(s):  
Per Kummervold ◽  
Sam Martin ◽  
Sara Dada ◽  
Eliz Kilich ◽  
Chermain Denny ◽  
...  

BACKGROUND With vaccine conversations increasingly taking place online and maternal vaccination uptake rates lower than desired, these conversations could provide useful insight to inform future interventions. Automated processes for this type of analysis, such as natural language processing (NLP), have faced challenges in extracting complex stances, such as attitudes toward vaccines, from large bodies of text. OBJECTIVE In this study, we aimed to build upon recent advances in Transformer-based machine learning methods and test whether they could be used as a tool to assess the stance of social media posts toward vaccination during pregnancy. METHODS A total of 16,604 tweets posted between 1 November 2018 and 30 April 2019 were selected by Boolean searches related to maternal vaccination. Tweets were coded by three individual researchers into the categories "Promotional", "Discouraging", "Ambiguous", and "Neutral". After creating a final dataset of 2,722 unique tweets, multiple machine learning methods were trained on the dataset and then tested and compared against the human annotators. RESULTS We achieved an accuracy of 81.8% (F-score = 0.78) compared to the agreed score between the three annotators. For comparison, the accuracies of the individual annotators compared to the final score were 83.3%, 77.9%, and 77.5%. CONCLUSIONS This study demonstrates the ability to categorise tweets using our machine learning models with close to the accuracy that could be expected of a single human annotator. This reliable and accurate automated process could free up the valuable time and resources required to conduct such analysis, in addition to informing potentially effective and necessary interventions. CLINICALTRIAL N/A
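The four-way stance-classification task can be illustrated with a much lighter baseline than the Transformer models the study actually evaluates. Below, a TF-IDF plus logistic-regression pipeline stands in for those models, and the eight example tweets are invented for illustration:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented examples of the study's four stance labels.
texts = [
    "Flu shots protect you and your baby, book yours today",
    "Get vaccinated during pregnancy, it is safe and effective",
    "I would never risk a vaccine while pregnant",
    "Vaccines during pregnancy are dangerous, avoid them",
    "Not sure whether the whooping cough jab is worth it",
    "Is the flu vaccine recommended in the third trimester?",
    "New study on maternal vaccination published this week",
    "Clinic hours for the maternal vaccination programme announced",
]
labels = ["Promotional", "Promotional", "Discouraging", "Discouraging",
          "Ambiguous", "Ambiguous", "Neutral", "Neutral"]

# Bag-of-words baseline standing in for the Transformer-based classifiers.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
pred = clf.predict(["Protect your baby, get the flu shot"])[0]
print(pred)
```

The study's point is that Transformer models close most of the gap to a human annotator; a bag-of-words baseline like this would typically trail them on subtle stances.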


2021 ◽  
Vol 5 (2) ◽  
pp. 284-303
Author(s):  
J A Putri ◽  
Suhartono Suhartono ◽  
H Prabowo ◽  
N A Salehah ◽  
D D Prastyo ◽  
...  

Most research on inflow and outflow currency in Indonesia has shown that these data contain both linear and nonlinear patterns along with a calendar variation effect. The goal of this research is to propose a hybrid model combining ARIMAX and a Deep Neural Network (DNN), known as hybrid ARIMAX-DNN, to improve forecast accuracy for currency prediction in East Java, Indonesia. ARIMAX is a class of classical time series models that can accurately handle linear patterns and calendar variation effects, whereas a DNN is a machine learning method that is powerful for capturing nonlinear patterns. Data on 32 denominations of inflow and outflow currency in East Java are used as case studies. The best model was selected based on the smallest RMSE and sMAPE values on the testing dataset. The results showed that the hybrid ARIMAX-DNN model improved forecast accuracy and outperformed the individual ARIMAX and DNN models for 26 of the denominations of inflow and outflow currency. Hence, it can be concluded that hybrids of classical time series and machine learning methods tend to yield more accurate forecasts than the individual classical or machine learning models alone.
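The hybrid idea, fit a linear model with calendar regressors first, then let a neural network mop up the nonlinear residual, can be sketched as below. This is a simplified stand-in: plain linear regression replaces ARIMAX, the series and "Eid month" dummy are invented, and the residual network is a small MLP rather than the paper's DNN:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Toy monthly series: linear trend + calendar-variation spike + nonlinearity.
rng = np.random.default_rng(3)
t = np.arange(120, dtype=float)
eid = (np.arange(120) % 12 == 5).astype(float)  # hypothetical calendar dummy
y = 0.5 * t + 8 * eid + 4 * np.sin(t / 3) ** 2 + rng.normal(scale=0.2, size=120)

X = np.column_stack([t, eid])
train = slice(0, 100)

# Stage 1: linear component (standing in for ARIMAX with calendar regressors).
lin = LinearRegression().fit(X[train], y[train])
resid = y[train] - lin.predict(X[train])

# Stage 2: neural network on lagged residuals to capture the nonlinear part.
lags = 3
R = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
nn.fit(R, resid[lags:])

# Hybrid fit = linear prediction + residual correction (in-sample sketch).
hybrid_fit = lin.predict(X[train])[lags:] + nn.predict(R)
rmse_lin = np.sqrt(np.mean((y[train][lags:] - lin.predict(X[train])[lags:]) ** 2))
rmse_hyb = np.sqrt(np.mean((y[train][lags:] - hybrid_fit) ** 2))
print(rmse_lin, rmse_hyb)
```

Because the residual here is a smooth periodic signal, the second stage recovers much of it, which is the mechanism by which the hybrid outperforms either component alone.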


Author(s):  
M.A. Shirobokova ◽  
A.V. Letchikov

The requirements for a more accurate assessment of a borrower's individual risk became stricter with the introduction of Basel II and IFRS 9. Such risk assessment is increasingly carried out by constructing scoring models; however, as a rule, the Gini coefficient serves as the quality criterion for these models, while the influence of modeling on the financial component, namely on the return on equity that underpins the lending business, is not investigated at all. In this regard, the article proposes a methodology for assessing the return on equity without taking risk into account, and then extends it to account for the individual risk of a borrower. The construction of a dynamic credit risk assessment model is considered on the basis of survival models built with machine learning methods. The problem of accounting for censored data is solved through a specific construction of the model variables and through methods that allow for censoring: logistic regression, the Cox proportional hazards model, and the random survival forest model. Using data from a regional commercial bank as an example, the return on equity is estimated and compared depending on the choice of risk assessment model. The study concludes that the return on equity should be calculated taking into account risk as assessed by machine learning methods.
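The contrast between return on equity with and without risk can be shown with a minimal worked example. All figures below (rate, loss given default, equity share, exposures, default probabilities) are invented; in the article's setting, the per-borrower default probabilities would come from the survival models described above:

```python
import numpy as np

# Hypothetical portfolio parameters (not from the article's data).
rate, lgd, equity_share = 0.15, 0.6, 0.10   # interest rate, LGD, equity/assets
exposures = np.array([100.0, 250.0, 80.0])  # loan amounts
pd_hat = np.array([0.02, 0.10, 0.05])       # model-predicted default probs

interest = rate * exposures                  # income if every loan performs
expected_loss = lgd * pd_hat * exposures     # EL = LGD * PD * exposure
equity = equity_share * exposures.sum()

roe_no_risk = interest.sum() / equity                         # ignores risk
roe_risk = (interest.sum() - expected_loss.sum()) / equity    # risk-adjusted
print(round(roe_no_risk, 3), round(roe_risk, 3))
```

The risk-adjusted figure is always lower whenever predicted losses are positive, which is why a scoring model judged only by its Gini coefficient can still mislead about profitability.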


2021 ◽  
Vol 23 (1) ◽  
Author(s):  
Asmir Vodencarevic ◽  
Koray Tascilar ◽  
Fabian Hartmann ◽  
Michaela Reiser ◽  
...  

Abstract Background Biological disease-modifying anti-rheumatic drugs (bDMARDs) can be tapered in some rheumatoid arthritis (RA) patients in sustained remission. The purpose of this study was to assess the feasibility of building a model to estimate the individual flare probability in RA patients tapering bDMARDs using machine learning methods. Methods Longitudinal clinical data of RA patients on bDMARDs from a randomized controlled trial of treatment withdrawal (RETRO) were used to build a predictive model to estimate the probability of a flare. Four basic machine learning models were trained, and their predictions were combined to train an ensemble learning method, a stacking meta-classifier model, to predict the individual flare probability within 14 weeks after each visit. Prediction performance was estimated using nested cross-validation as the area under the receiver operating characteristic curve (AUROC). Predictor importance was estimated using the permutation importance approach. Results Data from 135 visits of 41 patients were included. A model selection approach based on nested cross-validation was implemented to find the most suitable modeling formalism for the flare prediction task as well as the optimal model hyper-parameters. Moreover, an approach based on stacking different classifiers was successfully applied to create a powerful and flexible prediction model, with a final measured AUROC of 0.81 (95% CI 0.73–0.89). The percent dose change of bDMARDs, clinical disease activity (DAS-28 ESR), disease duration, and inflammatory markers were the most important predictors of a flare. Conclusion Machine learning methods were deemed feasible to predict flares after tapering bDMARDs in RA patients in sustained remission.
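A stacking meta-classifier evaluated with nested cross-validation, as described above, can be sketched as follows. The base learners, the tuned hyper-parameter, and the synthetic data are all assumptions of ours; the abstract does not name its four base models:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic visit-level data standing in for the RETRO trial features
# (dose change, DAS-28 ESR, disease duration, inflammatory markers).
X, y = make_classification(n_samples=135, n_features=8, random_state=0)

# Base learners whose predictions feed a logistic-regression meta-classifier.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0)),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(),
)

# Nested CV: the inner loop tunes a hyper-parameter, the outer loop
# measures AUROC on data never seen during tuning.
inner = GridSearchCV(stack, {"rf__n_estimators": [25, 50]},
                     cv=3, scoring="roc_auc")
auroc = cross_val_score(inner, X, y, cv=5, scoring="roc_auc")
print(auroc.mean())
```

Keeping hyper-parameter selection inside the outer folds is what makes the reported AUROC an honest estimate rather than an optimistic one, which matters on a dataset of only 135 visits.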

