Evaluating Arm Movement Imitation

Author(s):  
Alexandra Constantin ◽  
Maja Matarić

In this paper, we present a metric for assessing the quality of arm movement imitation. We develop a joint-rotation-angle-based segmentation and comparison algorithm that rates the pairwise similarity of arm movement trajectories on a scale of 1 to 10. We describe an empirical study designed to validate the algorithm by comparing its ratings to human evaluations of imitation. The results provide evidence that the automatic metric's ratings did not differ significantly from the human evaluations.
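The abstract does not reproduce the algorithm itself, so the following Python sketch is purely illustrative: all function names, the resampling step, and the error-to-score mapping are assumptions, meant only to show the general shape of a joint-angle-based pairwise trajectory comparison that yields a 1-10 rating.

```python
import numpy as np

def similarity_score(traj_a, traj_b, num_segments=10):
    """Rate pairwise similarity of two joint-angle trajectories on a 1-10 scale.

    Illustrative sketch only; not the paper's actual segmentation and
    comparison algorithm. Each trajectory is an (n_frames, n_joints)
    array of joint rotation angles in radians.
    """
    def resample(traj, n):
        # Linearly resample each joint-angle channel to n points so that
        # trajectories of different lengths can be compared point-wise.
        idx = np.linspace(0, len(traj) - 1, n)
        return np.array([np.interp(idx, np.arange(len(traj)), traj[:, j])
                         for j in range(traj.shape[1])]).T

    a = resample(np.asarray(traj_a, dtype=float), num_segments)
    b = resample(np.asarray(traj_b, dtype=float), num_segments)

    # Mean absolute angular error, normalized by pi (the largest possible
    # per-joint deviation), giving an error in [0, 1].
    err = np.mean(np.abs(a - b)) / np.pi

    # Map error in [0, 1] to a similarity rating in [1, 10].
    return 1 + 9 * (1 - min(err, 1.0))
```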

2018 ◽  
Vol 8 (2) ◽  
pp. 35-48
Author(s):  
Jiří Rybička ◽  
Petra Čačková

One of the tools for establishing the recommended order of courses is to set prerequisites, that is, the conditions that must be fulfilled before a student may begin a course. The recommended sequence of courses should follow the logical links between their content units, as the basic aim is to provide students with a coherent system according to Comenius's principle of continuity. Declared prerequisites may, on the other hand, create organizational complications during the course of study: failure to complete one course may force a whole sequence of deviations from the recommended curriculum and ultimately extend the study period. This empirical study deals with the quantitative evaluation of how the level of initial knowledge, acquired in a previous course, influences the overall results in a follow-up course. The evaluation yielded data that may slightly change the approach to determining prerequisites for higher-education courses.
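The abstract does not state which statistical method the study uses; as a minimal sketch of one plausible form such a quantitative evaluation could take, the snippet below (with entirely invented grade data) correlates prerequisite-course results with follow-up-course results.

```python
import numpy as np
from scipy import stats

# Hypothetical data: one row per student, pairing the grade earned in a
# prerequisite course with the grade later earned in the follow-up course.
prereq_grades = np.array([1.0, 1.5, 2.0, 2.0, 2.5, 3.0, 3.0, 4.0])
followup_grades = np.array([1.0, 2.0, 1.5, 2.5, 2.0, 3.5, 3.0, 4.0])

# Pearson correlation quantifies how strongly initial knowledge (the
# prerequisite result) predicts the overall follow-up result.
r, p_value = stats.pearsonr(prereq_grades, followup_grades)
print(f"correlation r = {r:.2f}, p = {p_value:.3f}")
```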


2020 ◽  
Vol 65 (1) ◽  
pp. 181-205
Author(s):  
Hye-Yeon Chung

Abstract: Human evaluation (HE) of translation is generally considered valid, but it requires considerable effort. Automatic evaluation (AE), which assesses the quality of machine translations, can be done easily, but it still requires validation. This study addresses the questions of whether and how AE can be used for human translations. For this purpose, AE formulas and HE criteria were compared to each other in order to examine the validity of AE. In the empirical part of the study, 120 translations were evaluated by professional translators as well as by two representative AE systems, BLEU and METEOR. The correlations between AE and HE were relatively high at 0.849** (BLEU) and 0.862** (METEOR) in the overall analysis, but in the ratings of the individual texts AE and HE exhibited substantial differences: the AE-HE correlations were often below 0.3 or even in the negative range. Ultimately, the results indicate that neither METEOR nor BLEU can be used to assess human translation at this stage. The paper nevertheless suggests three ways of applying AE to compensate for the weaknesses of HE.
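The study's own evaluation pipeline is not given in the abstract, but the core comparison it describes, scoring translations automatically and correlating those scores with human ratings, can be sketched as follows. The token lists and human scores here are invented, and NLTK's BLEU stands in for whichever AE implementations the study used.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import pearsonr

# Hypothetical mini-corpus: reference translations, candidate (human)
# translations to be scored, and professional raters' quality scores.
references = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["he", "read", "the", "book", "yesterday"],
    ["they", "walked", "to", "the", "station"],
    ["she", "wrote", "a", "long", "letter"],
]
candidates = [
    ["the", "cat", "sat", "on", "a", "mat"],
    ["yesterday", "he", "read", "a", "book"],
    ["they", "walk", "to", "a", "station"],
    ["she", "wrote", "a", "long", "letter"],
]
human_scores = [4.5, 3.5, 3.0, 5.0]  # e.g. a 1-5 adequacy/fluency scale

# Smoothing keeps short sentences from collapsing to a BLEU of zero.
smooth = SmoothingFunction().method1
bleu_scores = [sentence_bleu([ref], cand, smoothing_function=smooth)
               for ref, cand in zip(references, candidates)]

# Correlate AE (BLEU) with HE; the study runs this both overall and per
# text, and it is the per-text correlations that drop sharply.
r, p = pearsonr(bleu_scores, human_scores)
print(f"BLEU-HE correlation: r = {r:.2f}")
```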


2021 ◽  
Vol 26 (6) ◽  
Author(s):  
Pooja Rani ◽  
Sebastiano Panichella ◽  
Manuel Leuenberger ◽  
Mohammad Ghafari ◽  
Oscar Nierstrasz

Abstract
Context: Previous studies have characterized code comments in various programming languages, showing how high-quality code comments are crucial to support program comprehension activities and to improve the effectiveness of maintenance tasks. However, very few studies have focused on understanding developers' practices for writing comments, and none has compared those practices to the standard comment guidelines to study the extent to which developers follow the guidelines.
Objective: Our goal is therefore to investigate developer commenting practices and compare them to the comment guidelines.
Method: This paper reports the first empirical study investigating commenting practices in Pharo Smalltalk. First, we analyze class comment evolution over seven Pharo versions. Then, we quantitatively and qualitatively investigate the information types embedded in class comments. Finally, we study the adherence of developer commenting practices to the official class comment template over Pharo versions.
Results: Our results show a rapid increase in class comments in the initial three Pharo versions, while in subsequent versions developers added comments to both new and old classes, maintaining a similar code-to-comment ratio. We furthermore found three times as many information types in class comments as suggested by the template, although the types suggested by the template tend to be present more often than other types of information. Additionally, we find that a substantial proportion of comments follow the template's writing style for these information types, but they are written and formatted in a non-uniform way.
Conclusion: The results suggest the need to standardize the commenting guidelines for formatting the text, and to provide headers for the different information types, in order to ensure a consistent style and make the information easy to identify. Given the importance of high-quality code comments, we draw numerous implications for developers and researchers to improve the support for comment quality assessment tools.
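As a rough illustration of the kind of analysis the study performs, the sketch below measures what fraction of classes carry a comment at all (a proxy for the code-to-comment ratio tracked across versions) and checks which template headers a comment mentions. The class comments and the header list are hypothetical stand-ins, not the study's actual mining pipeline or the full official Pharo template.

```python
import re

# Hypothetical corpus: class name -> class comment text (None if missing).
# The study mines these from Pharo images; here they are inlined for brevity.
class_comments = {
    "OrderedCollection": "I represent a collection of objects ...",
    "Point": None,
    "Dictionary": "I am an unordered collection of key-value pairs ...",
}

# Fraction of classes that have a comment at all.
commented = sum(1 for c in class_comments.values() if c)
print(f"commented classes: {commented}/{len(class_comments)} "
      f"({100 * commented / len(class_comments):.0f}%)")

# Illustrative subset of headers a class comment template might suggest;
# check which of them each comment mentions.
template_headers = ["Responsibility", "Collaborators", "Public API", "Example"]
for name, comment in class_comments.items():
    if not comment:
        continue
    present = [h for h in template_headers if re.search(h, comment, re.I)]
    print(name, "->", present or "no template headers found")
```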

