The influence of limb role, direction of movement and limb dominance on movement strategies during block jump-landings in volleyball

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Elia Mercado-Palomino ◽  
Francisco Aragón-Royón ◽  
Jim Richards ◽  
José M. Benítez ◽  
Aurelio Ureña Espa

The identification of movement strategies in situations that are as ecologically valid as possible is essential for the understanding of lower limb interactions. This study considered the kinetic and kinematic data for the hip, knee and ankle joints from 376 block jump-landings when moving in the dominant and non-dominant directions from fourteen senior national female volleyball players. Two Machine Learning methods, Random Forest and Artificial Neural Networks, were used to generate models from the dataset. In addition, decision trees were used to detect which variables were relevant to discern the limb movement strategies and to provide a meaningful prediction. The results showed statistically significant differences when comparing the movement strategies between limb roles (accuracy > 88.0% and > 89.3%, respectively) and when moving in different directions while performing the same role (accuracy > 92.3% and > 91.2%, respectively). This highlights the importance of considering limb dominance, limb role and direction of movement during block jump-landings when identifying which biomechanical variables are the most influential in the movement strategies. Moreover, Machine Learning allows the exploration of how the joints of both limbs interact during sporting tasks, which could provide a greater understanding and identification of risky movements and preventative strategies. These detailed and valuable descriptions could inform how to improve player performance and how to plan training in order to avoid an overload that could lead to risk of injury. This also calls into question learning models in which the unilateral spike approach is taught before the bilateral block approach. We therefore support teaching the bilateral approach before the spike, in order to improve coordination and avoid asymmetries between limbs.
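The classification step described above can be sketched with a Random Forest on synthetic stand-in data; the feature values, the injected class signal, and the limb-role labels below are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_jumps = 376  # number of block jump-landings analysed in the study

# Synthetic stand-ins for hip/knee/ankle kinetic and kinematic features
X = rng.normal(size=(n_jumps, 12))
y = rng.integers(0, 2, size=n_jumps)  # invented labels: 0 = one limb role, 1 = the other
X[y == 1, 0] += 3.0  # inject a class signal into one feature so the task is learnable

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 3))

# Feature importances indicate which variables drive the strategy split
clf.fit(X, y)
top = int(np.argmax(clf.feature_importances_))
print(top)
```

In the real analysis the importance ranking, not the toy accuracy, is what identifies the influential biomechanical variables.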

2018 ◽  
Vol 30 (3) ◽  
pp. 387-392 ◽  
Author(s):  
Junya Aizawa ◽  
Kenji Hirohata ◽  
Shunsuke Ohji ◽  
Takehiro Ohmi ◽  
Kazuyoshi Yagishita

Diagnostics ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 40
Author(s):  
Meike Nauta ◽  
Ricky Walsh ◽  
Adam Dubowski ◽  
Christin Seifert

Machine learning models have been successfully applied for analysis of skin images. However, due to the black box nature of such deep learning models, it is difficult to understand their underlying reasoning. This prevents a human from validating whether the model is right for the right reasons. Spurious correlations and other biases in data can cause a model to base its predictions on such artefacts rather than on the true relevant information. These learned shortcuts can in turn cause incorrect performance estimates and can result in unexpected outcomes when the model is applied in clinical practice. This study presents a method to detect and quantify this shortcut learning in trained classifiers for skin cancer diagnosis, since it is known that dermoscopy images can contain artefacts. Specifically, we train a standard VGG16-based skin cancer classifier on the public ISIC dataset, for which colour calibration charts (elliptical, coloured patches) occur only in benign images and not in malignant ones. Our methodology artificially inserts those patches and uses inpainting to automatically remove patches from images to assess the changes in predictions. We find that our standard classifier partly bases its predictions of benign images on the presence of such a coloured patch. More importantly, by artificially inserting coloured patches into malignant images, we show that shortcut learning results in a significant increase in misdiagnoses, making the classifier unreliable when used in clinical practice. With our results, we, therefore, want to increase awareness of the risks of using black box machine learning models trained on potentially biased datasets. Finally, we present a model-agnostic method to neutralise shortcut learning by removing the bias in the training dataset by exchanging coloured patches with benign skin tissue using image inpainting and re-training the classifier on this de-biased dataset.
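The probing idea, inserting an artificial coloured patch and measuring the shift in the model's benign score, can be illustrated with a toy stand-in classifier; the linear scorer below is an assumption for demonstration, not the paper's VGG16.

```python
import numpy as np

def insert_patch(img, value=1.0, size=8):
    """Paste a bright square 'calibration patch' into the top-left corner."""
    out = img.copy()
    out[:size, :size] = value
    return out

def benign_score(img, w):
    # Stand-in classifier: a linear score over pixels with a sigmoid output
    z = float((img * w).sum())
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, size=(32, 32))  # synthetic stand-in image

# Weights of a "shortcut" model that has learned to key on the patch region
w = np.zeros((32, 32))
w[:8, :8] = 0.05

before = benign_score(img, w)
after = benign_score(insert_patch(img), w)
shortcut_effect = after - before
print(round(shortcut_effect, 3))  # a large positive shift indicates shortcut learning
```

The paper's method applies the same before/after comparison to a real classifier, using inpainting to remove patches and artificial insertion to add them.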


2021 ◽  
Vol 10 (18) ◽  
pp. 4245
Author(s):  
Jörn Lötsch ◽  
Constantin A. Hintschich ◽  
Petros Petridis ◽  
Jürgen Pade ◽  
Thomas Hummel

Chronic rhinosinusitis (CRS) is often treated by functional endoscopic paranasal sinus surgery, which improves endoscopic parameters and quality of life, while olfactory function was suggested as a further criterion of treatment success. In a prospective cohort study, 37 parameters from four categories were recorded from 60 men and 98 women before and four months after endoscopic sinus surgery, including endoscopic measures of nasal anatomy/pathology, assessments of olfactory function, quality of life, and socio-demographic or concomitant conditions. Parameters containing relevant information about changes associated with surgery were examined using unsupervised and supervised methods, including machine-learning techniques for feature selection. The analyzed cohort included 52 men and 38 women. Changes in the endoscopic Lildholdt score allowed separation of baseline from postoperative data with a cross-validated accuracy of 85%. Further relevant information included primary nasal symptoms from SNOT-20 assessments, and self-assessments of olfactory function. Overall improvement in these relevant parameters was observed in 95% of patients. A ranked list of criteria was developed as a proposal to assess the outcome of functional endoscopic sinus surgery in CRS patients with nasal polyposis. Three different facets were captured, including the Lildholdt score as an endoscopic measure and, in addition, disease-specific quality of life and subjectively perceived olfactory function.


2021 ◽  
Author(s):  
Mustapha Abba ◽  
Chidozie Nduka ◽  
Seun Anjorin ◽  
Shukri Mohamed ◽  
Emmanuel Agogo ◽  
...  

BACKGROUND Published hypertension research has grown considerably over the last decade due to scientific and technical advancements in the field. Given the huge amount of scientific material published in this field, identifying the relevant information is difficult. We employed topic modelling, a powerful approach for extracting useful information from enormous amounts of unstructured text. OBJECTIVE To apply a machine learning algorithm to uncover hidden topics and subtopics from 100 years of peer-reviewed hypertension publications and identify temporal trends. METHODS The titles and abstracts of hypertension papers indexed in PubMed were examined. We used the Latent Dirichlet Allocation (LDA) model to identify 20 primary topics and then ran a trend analysis to see how their popularity changed over time. RESULTS We gathered 581,750 hypertension-related research articles from 1900 to 2018 and divided them into 20 topics, which were grouped into preclinical, risk factor, complication, and treatment studies. We identified topics that were becoming increasingly 'hot', topics that were going 'cold', and topics that were rarely published. Topics on risk factors and major cardiovascular events displayed highly dynamic patterns over time. The majority of the articles (71.2%) had a negative valency, followed by positive (20.6%) and neutral (8.2%) valencies. Between 1980 and 2000, negative-sentiment articles declined somewhat, while positive- and neutral-sentiment articles climbed significantly. CONCLUSIONS This machine learning methodology provided useful insights into current hypertension research trends. The method allows researchers to discover study topics and shifts in study focus and, in the end, captures the broader picture of the primary concepts in current hypertension research articles. CLINICALTRIAL Not applicable


2020 ◽  
Author(s):  
Samir Gupta ◽  
Shruti Rao ◽  
Trisha Miglani ◽  
Yasaswini Iyer ◽  
Junxia Lin ◽  
...  

Interpretation of a given variant's pathogenicity is one of the most profound challenges to realizing the promise of genomic medicine. A large amount of information about associations between variants and diseases, used by curators and researchers for interpreting variant pathogenicity, is buried in the biomedical literature. The development of text-mining tools that can extract relevant information from the literature will speed up and assist the variant interpretation curation process. In this work, we present a text-mining tool, MACE2k, that extracts evidence sentences containing associations between variants and diseases from full-length PMC Open Access articles. We use different machine learning models (classical and deep learning) to identify evidence sentences with variant-disease associations. Evaluation shows promising results, with a best F1-score of 82.9% and AUC-ROC of 73.9%. Classical ML models had better recall (96.6% for Random Forest) than the deep learning models, while the deep learning model, a Convolutional Neural Network, had the best precision (75.6%), which is essential for any curation task.
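The sentence-classification step can be sketched with a classical-ML baseline (TF-IDF features and a Random Forest); the sentences below are invented examples, not MACE2k training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Invented example sentences: label 1 = contains a variant-disease association
sentences = [
    "The BRCA1 c.68_69delAG variant is associated with breast cancer risk.",
    "Patients carrying the JAK2 V617F mutation developed polycythemia vera.",
    "Samples were sequenced on an Illumina HiSeq platform.",
    "Statistical analysis was performed using R version 3.6.",
]
labels = [1, 1, 0, 0]

# bootstrap=False so each tree sees the whole (tiny) toy set and fits it exactly
model = make_pipeline(
    TfidfVectorizer(),
    RandomForestClassifier(n_estimators=50, bootstrap=False, random_state=0),
)
model.fit(sentences, labels)
print(model.score(sentences, labels))
```

A realistic version would be trained on labelled sentences from full-length articles and evaluated with precision/recall, as the abstract reports.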


2021 ◽  
Author(s):  
Arnaud Nguembang Fadja ◽  
Fabrizio Riguzzi ◽  
Giorgio Bertorelle ◽  
Emiliano Trucchi

Background: With the increase in the size of genomic datasets describing variability in populations, extracting relevant information becomes increasingly useful as well as complex. Recently, computational methodologies such as Supervised Machine Learning, and specifically Convolutional Neural Networks, have been proposed in order to make inferences on demographic and adaptive processes using genomic data. Even though it has already been shown to be powerful and efficient in other fields of investigation, Supervised Machine Learning has yet to be fully explored to unfold its enormous potential in evolutionary genomics. Results: The paper proposes a method based on Supervised Machine Learning for classifying genomic data, represented as windows of genomic sequences from a sample of individuals belonging to the same population. A Convolutional Neural Network is used to test whether a genomic window shows the signature of natural selection. Experiments performed on simulated data show that the proposed model can accurately predict neutral and selection processes on genomic data with more than 99% accuracy.
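The windowed-genotype input can be illustrated as a 0/1 matrix of individuals by variant sites passed through a single convolution; the dimensions, random filter, and encoding below are arbitrary illustrative choices, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_sites = 20, 50

# A genomic window: one row per sampled individual, one column per variant
# site, encoded 0/1 (e.g. ancestral/derived allele) -- a toy stand-in
window = rng.integers(0, 2, size=(n_individuals, n_sites)).astype(float)

# A single random 3x3 convolution filter scanning individuals x sites
kernel = rng.normal(size=(3, 3))

def conv2d_valid(x, k):
    """'Valid' 2D convolution (no padding), written out explicitly."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

feature_map = np.maximum(conv2d_valid(window, kernel), 0.0)  # ReLU activation
print(feature_map.shape)
```

A full CNN would stack several such convolution/activation layers and end in a neutral-vs-selection classification head.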


The UniAssist project is implemented to help students who have completed a Bachelor's degree and wish to study abroad to pursue higher education, such as a Master's. Machine learning helps identify appropriate universities for such students and suggests them accordingly. UniAssist recommends universities according to a student's preferred course and country, taking into account their grades, work experience and qualifications. Students hoping to pursue higher education outside India need a way to learn about suitable universities. The data collected is converted into relevant information that is currently not easily available, such as the courses offered by their dream universities, the average tuition fee, and even the average cost of living near the chosen university, all on a single mobile-app-based software platform. This is the first phase of the admission process for every student. The machine-learning algorithm used is a memory-based collaborative filtering approach, with K-Nearest Neighbours computed using cosine similarity. A mobile-based software application is implemented in order to help and guide students in their higher education.
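A minimal sketch of the memory-based collaborative-filtering step described above, using cosine similarity over a toy student-university rating matrix; the ratings and their sizes are invented placeholders, not project data.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Rows: students, columns: universities (preference scores 0-5, 0 = unrated)
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
])

def recommend(target, k=1):
    """Suggest an unrated university for `target` from its k nearest neighbours."""
    sims = np.array([cosine_sim(ratings[target], r) if i != target else -1.0
                     for i, r in enumerate(ratings)])
    neighbours = sims.argsort()[::-1][:k]          # most similar students
    scores = ratings[neighbours].mean(axis=0)      # average neighbour ratings
    scores[ratings[target] > 0] = -1.0             # mask already-rated items
    return int(scores.argmax())

print(recommend(0))
```

Student 0's closest neighbour is student 1, so the highest-rated university that student 0 has not yet rated (index 2) is suggested.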


2021 ◽  
Vol 15 ◽  
Author(s):  
Jacob Tryon ◽  
Ana Luisa Trejos

Wearable robotic exoskeletons have emerged as an exciting new treatment tool for disorders affecting mobility; however, the human–machine interface, used by the patient for device control, requires further improvement before robotic assistance and rehabilitation can be widely adopted. One method, made possible through advancements in machine learning technology, is the use of bioelectrical signals, such as electroencephalography (EEG) and electromyography (EMG), to classify the user's actions and intentions. While classification using these signals has been demonstrated for many relevant control tasks, such as motion intention detection and gesture recognition, challenges in decoding the bioelectrical signals have caused researchers to seek methods for improving the accuracy of these models. One such method is the use of EEG–EMG fusion, creating a classification model that decodes information from both EEG and EMG signals simultaneously to increase the amount of available information. So far, EEG–EMG fusion has been implemented using traditional machine learning methods that rely on manual feature extraction; however, new machine learning methods have emerged that can automatically extract relevant information from a dataset, which may prove beneficial during EEG–EMG fusion. In this study, Convolutional Neural Network (CNN) models were developed using combined EEG–EMG inputs to determine if they have potential as a method of EEG–EMG fusion that automatically extracts relevant information from both signals simultaneously. EEG and EMG signals were recorded during elbow flexion–extension and used to develop CNN models based on time–frequency (spectrogram) and time (filtered signal) domain image inputs. The results show a mean accuracy of 80.51 ± 8.07% for a three-class output (33.33% chance level), with an F-score of 80.74%, using time–frequency domain-based models. 
This work demonstrates the viability of CNNs as a new method of EEG–EMG fusion and evaluates different signal representations to determine the best implementation of a combined EEG–EMG CNN. It leverages modern machine learning methods to advance EEG–EMG fusion, which will ultimately lead to improvements in the usability of wearable robotic exoskeletons.
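The time-frequency input preparation can be sketched by computing magnitude spectrograms of EEG and EMG windows and stacking them as channels of one CNN input; the synthetic signals and dimensions below are illustrative assumptions, not the study's recordings.

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Magnitude STFT via a sliding Hann window; returns a (freq, time) image."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

rng = np.random.default_rng(0)
fs, seconds = 256, 2
t = np.arange(fs * seconds) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)  # toy 10 Hz rhythm
emg = rng.normal(size=t.size)  # broadband noise as a muscle-activity stand-in

# Stack the two modalities as channels: shape (2, freq_bins, time_frames)
x = np.stack([spectrogram(eeg), spectrogram(emg)])
print(x.shape)
```

A fusion CNN then convolves over this two-channel image so that EEG and EMG information is combined from the first layer onward.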

