An Automatic Evaluation Method for Parkinson's Dyskinesia Using Finger Tapping Video for Small Samples

Author(s):  
Zhu Li ◽  
Lu Kang ◽  
Miao Cai ◽  
Xiaoli Liu ◽  
Yanwen Wang ◽  
...  

Abstract: Purpose: The assessment of dyskinesia in Parkinson's disease (PD) using artificial intelligence is a significant and challenging task. At present, doctors usually assess the severity of a patient's symptoms with the MDS-UPDRS scale, a method that is time-consuming, laborious, and subject to inter-rater differences. Sensor-based evaluation methods are also widely used, but they are expensive and require professional guidance, which makes them unsuitable for remote evaluation and patient self-examination. In addition, collecting patient data for medical research is difficult, so an objective, automatic assessment method for Parkinson's dyskinesia that works with small samples is of great significance. Methods: In this study, we design an automatic evaluation method combining manual features and a convolutional neural network (CNN), suitable for small-sample classification. From finger-tapping videos of Parkinson's patients, we use a pose estimation model to obtain skeleton information of the action and compute feature data. We then train the model with 5-fold cross-validation to achieve the optimal trade-off between bias and variance, and finally make multi-class predictions through a fully connected network (FCN). Results: Our proposed method achieves the current best accuracy of 79.7% for this task. Compared with the latest related methods, our method is superior in accuracy, number of parameters, and FLOPs. Conclusion: The method in this paper does not require patients to wear sensor devices and has obvious advantages for remote clinical evaluation. At the same time, training a CNN on motion feature data achieves the best accuracy, effectively addresses the difficulty of data acquisition in medicine, and provides a new approach to small-sample classification.
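
A minimal sketch of the pipeline described above, under assumed shapes and hyperparameters: hand-crafted motion features (standing in for quantities derived from pose-estimation keypoints) are fed to a small 1-D CNN with a fully connected head and evaluated with 5-fold cross-validation. All names, dimensions, and the placeholder data are illustrative, not the authors' implementation.

```python
# Minimal sketch only: 5-fold cross-validation of a small 1-D CNN with a fully
# connected head, trained on hand-crafted motion features. Feature length,
# class count, hyperparameters, and the random placeholder data are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold

np.random.seed(0)
N_SAMPLES, SEQ_LEN, N_CLASSES = 120, 64, 4        # assumed small-sample setting

class TapNet(nn.Module):
    """Small 1-D CNN over a per-video motion-feature sequence, FC head for multi-class output."""
    def __init__(self, n_classes):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.fc(self.conv(x))

# Placeholder arrays standing in for features (tap amplitude, speed, rhythm, ...)
# computed from pose-estimation keypoints of the finger-tapping videos.
X = np.random.randn(N_SAMPLES, 1, SEQ_LEN).astype(np.float32)
y = np.random.randint(0, N_CLASSES, size=N_SAMPLES).astype(np.int64)

fold_acc = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model, loss_fn = TapNet(N_CLASSES), nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(30):                            # short full-batch training loop per fold
        opt.zero_grad()
        loss = loss_fn(model(torch.from_numpy(X[train_idx])), torch.from_numpy(y[train_idx]))
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = model(torch.from_numpy(X[test_idx])).argmax(dim=1).numpy()
    fold_acc.append((pred == y[test_idx]).mean())
print("5-fold mean accuracy:", float(np.mean(fold_acc)))
```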

2001 ◽  
Vol 2 (1) ◽  
pp. 28-34 ◽  
Author(s):  
Edward R. Dougherty

In order to study the molecular biological differences between normal and diseased tissues, it is desirable to perform classification among diseases and stages of disease using microarray-based gene-expression values. Owing to the limited number of microarrays typically used in these studies, serious issues arise with respect to the design, performance and analysis of classifiers based on microarray data. This paper reviews some fundamental issues facing small-sample classification: classification rules, constrained classifiers, error estimation and feature selection. It discusses both unconstrained and constrained classifier design from sample data, and the contributions to classifier error from constrained optimization and lack of optimality owing to design from sample data. The difficulty with estimating classifier error when confined to small samples is addressed, particularly estimating the error from training data. The impact of small samples on the ability to include more than a few variables as classifier features is explained.
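
As a concrete illustration of the error-estimation problem the review discusses (not an example from the paper itself), the sketch below contrasts the optimistically biased resubstitution error with leave-one-out cross-validation for a simple classification rule fitted to a small synthetic sample.

```python
# Illustrative sketch, not from the review: on a small synthetic two-class sample
# with many features, the resubstitution (training-data) error is optimistically
# biased relative to leave-one-out cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_features = 15, 20                  # assumed small-sample, many-feature setting
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_features)),
               rng.normal(0.5, 1.0, (n_per_class, n_features))])
y = np.repeat([0, 1], n_per_class)

lda = LinearDiscriminantAnalysis().fit(X, y)
resub_error = 1.0 - lda.score(X, y)               # error estimated on the training data itself
loo_error = 1.0 - cross_val_score(LinearDiscriminantAnalysis(), X, y,
                                  cv=LeaveOneOut()).mean()
print(f"resubstitution error: {resub_error:.2f}   leave-one-out error: {loo_error:.2f}")
```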


1994 ◽  
Vol 33 (02) ◽  
pp. 180-186 ◽  
Author(s):  
H. Brenner ◽  
O. Gefeller

Abstract: The traditional concept of describing the validity of a diagnostic test neglects the presence of chance agreement between test result and true (disease) status. Sensitivity and specificity, as the fundamental measures of validity, can thus only be considered in conjunction with each other to provide an appropriate basis for the evaluation of the capacity of the test to discriminate truly diseased from truly undiseased subjects. In this paper, chance-corrected analogues of sensitivity and specificity are presented as supplemental measures of validity, which pay attention to the problem of chance agreement and offer the opportunity to be interpreted separately. While recent proposals of chance-correction techniques, suggested by several authors in this context, lead to measures which are dependent on disease prevalence, our method does not share this major disadvantage. We discuss the extension of the conventional ROC-curve approach to chance-corrected measures of sensitivity and specificity. Furthermore, point and asymptotic interval estimates of the parameters of interest are derived under different sampling frameworks for validation studies. The small sample behavior of the estimates is investigated in a simulation study, leading to a logarithmic modification of the interval estimate in order to hold the nominal confidence level for small samples.
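
To make the idea of chance agreement concrete, the sketch below computes ordinary sensitivity and specificity from a 2×2 validation table alongside a generic Cohen-kappa-style chance-corrected agreement. This kappa-style correction depends on disease prevalence, which is exactly the drawback the authors' measures avoid, so it only illustrates the general notion and is not the paper's estimator.

```python
# Generic illustration only (not the paper's prevalence-independent measures):
# sensitivity, specificity, and a Cohen-kappa-style chance-corrected agreement
# from a 2x2 table of test result versus true disease status.
def validity_measures(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)                  # P(test positive | diseased)
    specificity = tn / (tn + fp)                  # P(test negative | not diseased)
    p_observed = (tp + tn) / n                    # raw agreement between test and truth
    # agreement expected by chance alone, computed from the marginal totals
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return sensitivity, specificity, kappa

print(validity_measures(tp=80, fp=10, fn=20, tn=90))   # hypothetical validation study
```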


2013 ◽  
Vol 748 ◽  
pp. 1256-1261
Author(s):  
Shou Hui He ◽  
Han Hua Zhu ◽  
Shi Dong Fan ◽  
Quan Wen

At present, the Dow Chemical Fire and Explosion Index (F&EI) is a risk-index evaluation method widely used to evaluate potential hazard, area of exposure, and expected losses in the event of fire and explosion. Taking an oil-depot storage tank area as the research object, this article applies the F&EI method by selecting the process unit, determining the material factor, and applying safety-precaution compensation, and thereby establishes an appropriate process-unit pattern and a reasonable compensation method for safety precautions, ensuring the reasonableness of the evaluation result. This provides a theoretical basis for oil-depot development and safe production management.
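
For orientation, the sketch below shows the F&EI arithmetic as it is commonly presented in textbooks (unit hazard factor F3 = F1 × F2, F&EI = MF × F3, and an exposure radius often approximated as 0.84 × F&EI in feet); the numeric inputs are placeholders, not the oil-depot case-study values of this article.

```python
# Textbook-style sketch of the Dow F&EI arithmetic; factor values below are
# placeholders, not the paper's case-study numbers, and the 0.84 ft/unit radius
# approximation and the cap of 8 on F3 follow common presentations of the method.
import math

def fire_explosion_index(material_factor, general_hazards_f1, special_hazards_f2):
    f3 = min(general_hazards_f1 * special_hazards_f2, 8.0)   # unit hazard factor
    fei = material_factor * f3                               # Fire & Explosion Index
    radius_ft = 0.84 * fei                                   # radius of exposure
    area_ft2 = math.pi * radius_ft ** 2                      # area of exposure
    return fei, radius_ft, area_ft2

# Example with a gasoline-like material factor (MF = 16) and assumed hazard factors.
fei, radius_ft, area_ft2 = fire_explosion_index(16, general_hazards_f1=2.0,
                                                special_hazards_f2=3.0)
print(f"F&EI = {fei:.0f}, exposure radius ~ {radius_ft:.0f} ft, area ~ {area_ft2:.0f} ft^2")
```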


2021 ◽  
Vol 29 (3) ◽  
Author(s):  
Péter Orosz ◽  
Tamás Tóthfalusi

Abstract: The increasing number of Voice over LTE deployments and IP-based voice services raises the demand for user-centric service quality monitoring. The leading challenge in this domain is measuring user experience quality reliably without performing subjective assessments or applying the standard full-reference objective models: the former is time- and resource-consuming and primarily executed ad hoc, while the latter depends on a reference source and processes the voice payload, which may offend user privacy. This paper presents a packet-level measurement method (introducing a novel metric set) to objectively assess network and service quality online, without inspecting the voice payload and without needing a reference voice sample. The proposal has three contributions: (i) our method focuses on the timeliness of the media traffic and introduces new performance metrics that describe and measure the service's time-domain behavior from the voice application's viewpoint; (ii) based on the proposed metrics, we also present a no-reference Quality of Experience (QoE) estimation model; (iii) additionally, we propose a new method to identify the pace of speech (slow or dynamic) as long as voice activity detection (VAD) is present between the endpoints. This identification helps the introduced quality model estimate the perceived quality with higher accuracy. The performance of the proposed model is validated against a full-reference voice quality estimation model called AQuA, using real VoIP traffic (originating from assorted voice samples) in controlled transmission scenarios.
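
As an illustration of payload-agnostic, packet-level measurement (standard RTP-header metrics, not the paper's novel metric set), the sketch below derives RFC 3550-style interarrival jitter and a loss ratio from arrival times, RTP timestamps, and sequence numbers alone.

```python
# Illustrative only: timing metrics computed purely from RTP header fields and
# packet arrival times, with no payload inspection and no reference sample.
# These are standard RFC 3550-style quantities, not the paper's proposed metrics.
def rtp_timing_metrics(packets, clock_rate=8000):
    """packets: iterable of (arrival_time_s, rtp_timestamp, sequence_number)."""
    packets = sorted(packets, key=lambda p: p[2])            # order by sequence number
    jitter, prev = 0.0, None
    for arrival, ts, _seq in packets:
        if prev is not None:
            # difference in transit time between consecutive packets (RFC 3550, A.8)
            d = abs((arrival - prev[0]) - (ts - prev[1]) / clock_rate)
            jitter += (d - jitter) / 16.0                    # exponentially smoothed jitter
        prev = (arrival, ts)
    expected = packets[-1][2] - packets[0][2] + 1
    loss_ratio = 1.0 - len(packets) / expected
    return jitter, loss_ratio

# Hypothetical capture: 20 ms voice packets, one packet lost, one arriving late.
pkts = [(0.000, 0, 1), (0.020, 160, 2), (0.055, 320, 3), (0.080, 640, 5)]
print(rtp_timing_metrics(pkts))
```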


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Florent Le Borgne ◽  
Arthur Chatton ◽  
Maxime Léger ◽  
Rémi Lenain ◽  
Yohann Foucher

Abstract: In clinical research, there is a growing interest in the use of propensity score-based methods to estimate causal effects. G-computation is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary and that is able to deal with small samples. We evaluated the performance of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner, through simulations. We proposed six different scenarios characterised by various sample sizes, numbers of covariates, and relationships between covariates, exposure statuses, and outcomes. We also illustrated the application of these methods by using them to estimate the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. In the context of G-computation, for estimating the individual outcome probabilities in the two counterfactual worlds, we found that the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation combined with the super learner was a performant method for drawing causal inferences, even from small sample sizes.
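
A minimal G-computation sketch under assumed variable names: fit an outcome model (a penalized logistic regression here, where a super learner could be plugged in), predict each patient's outcome probability in the two counterfactual worlds, and average the contrast. It mirrors the procedure the abstract describes, not the authors' exact implementation.

```python
# G-computation sketch with simulated data and assumed variable names; a super
# learner could replace the penalized logistic regression used as outcome model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(1)
n = 200                                                   # assumed small-sample setting
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
df["exposed"] = rng.binomial(1, 1 / (1 + np.exp(-df["x1"])), n)
df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * df["exposed"] + df["x2"]))), n)

# 1. Outcome model: P(outcome | exposure, covariates), penalized via cross-validation
Q = LogisticRegressionCV(cv=5).fit(df[["exposed", "x1", "x2"]], df["outcome"])

# 2. Predicted individual outcome probabilities in the two counterfactual worlds
p1 = Q.predict_proba(df.assign(exposed=1)[["exposed", "x1", "x2"]])[:, 1]
p0 = Q.predict_proba(df.assign(exposed=0)[["exposed", "x1", "x2"]])[:, 1]

# 3. Marginal causal contrasts obtained by averaging over the sample
print("risk difference:", (p1 - p0).mean())
print("marginal odds ratio:",
      (p1.mean() / (1 - p1.mean())) / (p0.mean() / (1 - p0.mean())))
```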


2011 ◽  
Vol 6 (2) ◽  
pp. 252-277 ◽  
Author(s):  
Stephen T. Ziliak

Abstract: Student's exacting theory of errors, both random and real, marked a significant advance over the ambiguous reports of plant life and fermentation asserted by chemists from Priestley and Lavoisier down to Pasteur and Johannsen, working at the Carlsberg Laboratory. One reason seems to be that William Sealy Gosset (1876–1937), aka "Student" – he of Student's t-table and test of statistical significance – rejected artificial rules about sample size, experimental design, and the level of significance, and took instead an economic approach to the logic of decisions made under uncertainty. In his job as Apprentice Brewer, Head Experimental Brewer, and finally Head Brewer of Guinness, Student produced small samples of experimental barley, malt, and hops, seeking guidance for industrial quality control and maximum expected profit at the large-scale brewery. In the process Student invented or inspired half of modern statistics. This article draws on original archival evidence, shedding light on several core yet neglected aspects of Student's methods, that is, Guinnessometrics, not discussed by Ronald A. Fisher (1890–1962). The focus is on Student's small-sample, economic approach to real error minimization, particularly in the field and laboratory experiments he conducted on barley and malt from 1904 to 1937. Balanced designs of experiments, he found, are more efficient than random ones and have higher power to detect large and real treatment differences in a series of repeated and independent experiments. Student's world-class achievement poses a challenge to every science. Should statistical methods – such as the choice of sample size, experimental design, and level of significance – follow the purpose of the experiment, rather than the other way around? (JEL classification codes: C10, C90, C93, L66)


2016 ◽  
Vol 41 (5) ◽  
pp. 472-505 ◽  
Author(s):  
Elizabeth Tipton ◽  
Kelly Hallberg ◽  
Larry V. Hedges ◽  
Wendy Chan

Background: Policy makers and researchers are frequently interested in understanding how effective a particular intervention may be for a specific population. One approach is to assess the degree of similarity between the sample in an experiment and the population. Another approach is to combine information from the experiment and the population to estimate the population average treatment effect (PATE). Method: Several methods for assessing the similarity between a sample and a population currently exist, as well as methods for estimating the PATE. In this article, we investigate the properties of six of these methods and statistics in the small sample sizes common in education research (i.e., 10–70 sites), evaluating the utility of rules of thumb developed from observational studies in the generalization case. Result: In small random samples, large differences between the sample and population can arise simply by chance, and many of the statistics commonly used in generalization are a function of both sample size and the number of covariates being compared. The rules of thumb developed in observational studies (which are commonly applied in generalization) are much too conservative given the small sample sizes found in generalization. Conclusion: This article implies that sharp inferences to large populations from small experiments are difficult even with probability sampling. Features of random samples should be kept in mind when evaluating the extent to which results from experiments conducted on nonrandom samples might generalize.
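
The chance-imbalance point can be illustrated with a small simulation (assumptions, not the article's data): even under probability sampling, small samples of sites frequently show covariate standardized mean differences beyond the |SMD| < 0.25 rule of thumb borrowed from observational studies.

```python
# Illustrative simulation: absolute standardized mean difference (SMD) between a
# small random sample of sites and a synthetic population, for one covariate.
# Sample sizes follow the 10-70 site range mentioned above; the data are made up.
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(size=10_000)                       # one site-level covariate

for n_sites in (10, 30, 70):
    smds = [abs(rng.choice(population, size=n_sites, replace=False).mean()
                - population.mean()) / population.std()
            for _ in range(2000)]
    print(f"n = {n_sites:2d} sites: P(|SMD| > 0.25) = {np.mean(np.array(smds) > 0.25):.2f}")
```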


PEDIATRICS ◽  
1989 ◽  
Vol 83 (3) ◽  
pp. A72-A72
Author(s):  
Student

The believer in the law of small numbers practices science as follows: 1. He gambles his research hypotheses on small samples without realizing that the odds against him are unreasonably high. He overestimates power. 2. He has undue confidence in early trends (e.g., the data of the first few subjects) and in the stability of observed patterns (e.g., the number and identity of significant results). He overestimates significance. 3. In evaluating replications, his or others', he has unreasonably high expectations about the replicability of significant results. He underestimates the breadth of confidence intervals. 4. He rarely attributes a deviation of results from expectations to sampling variability, because he finds a causal "explanation" for any discrepancy. Thus, he has little opportunity to recognize sampling variation in action. His belief in the law of small numbers, therefore, will forever remain intact.


2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Lifeng Wu ◽  
Yan Chen

To deal with small-sample forecasting in the supply chain, three grey models with fractional-order accumulation are presented. Human judgment of future trends is incorporated into the order of accumulation. The output of the proposed models provides decision-makers in the supply chain with more forecasting information for short time periods. Results on practical examples demonstrate that the models deliver remarkable prediction performance compared with the traditional forecasting model.
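
One common formulation of this idea is a GM(1,1) grey model with fractional-order accumulation, sketched below on placeholder demand data; the accumulation order r is where the forecaster's judgment about the future trend enters. This is a generic textbook-style FGM(1,1), not necessarily identical to the three models proposed in the paper.

```python
# Generic FGM(1,1) sketch: fractional-order accumulation, least-squares grey
# parameters, time-response function, then inverse accumulation to get forecasts.
# The demand series is a placeholder, not data from the paper.
import numpy as np
from scipy.special import binom

def fractional_accumulation_matrix(n, r):
    """Lower-triangular A_r with A_r[k, i] = C(k - i + r - 1, k - i); r = 1 gives cumulative sums."""
    A = np.zeros((n, n))
    for k in range(n):
        for i in range(k + 1):
            A[k, i] = binom(k - i + r - 1, k - i)
    return A

def fgm11_forecast(x0, r=0.5, horizon=3):
    n = len(x0)
    xr = fractional_accumulation_matrix(n, r) @ x0           # r-order accumulated series
    z = 0.5 * (xr[1:] + xr[:-1])                             # background values
    B = np.column_stack([-z, np.ones(n - 1)])
    Y = xr[1:] - xr[:-1]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]              # grey development/control parameters
    k = np.arange(n + horizon)
    xr_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a        # time-response function
    A_full = fractional_accumulation_matrix(n + horizon, r)
    return np.linalg.solve(A_full, xr_hat)                   # inverse accumulation -> fitted + forecast

demand = np.array([112.0, 118.0, 127.0, 135.0, 141.0])       # hypothetical supply-chain series
print(fgm11_forecast(demand, r=0.5, horizon=3))
```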

