Evaluation practices
Recently Published Documents

TOTAL DOCUMENTS: 288 (five years: 79)
H-INDEX: 21 (five years: 3)

Author(s):  
Kathryn A. Morbitzer ◽  
Jacqueline E. McLaughlin ◽  
Brianna Henson ◽  
Kyle T. Fassett ◽  
Margarita V. DiVall

Author(s):  
Diana Acosta-Salazar

Until a little more than two decades ago, evaluation was not a relevant matter for public activity; government work was concentrated on execution and guided by intuition, public approval, or scattered data used to record success. This has changed under an increasingly demanding national and international context that requires transparency of public actions, efficiency in the activities each government in turn prioritizes, and, of course, the effectiveness of what is proposed. The practice of evaluation in the Costa Rican state system is governed by an exhaustive normative and procedural framework. However, this framework has not necessarily governed how communication is carried out in the institutions. A study conducted in Costa Rican institutions between 2019 and 2020 first mapped the communication units through a survey (43 responses) examining their operation, the projects they execute, and the evaluation practices they apply; it identified a lack of rigorous evaluation practices. Furthermore, these units are under no obligation to carry out operational planning of their annual activities, to apply systematic evaluations, or to prepare reports on the work performed. Subsequently, interviews (22) were conducted with the planning heads of the institutions and governing bodies to learn about the evaluation regulations, the formats and platforms used, the inter-institutional linkages for evaluation, and the scope of the mandatory nature of this function. The results suggest that the praxis of the units is dominated by a macro-institutional planning exercise that uses matrices and quantitative formats recording compliance but not evaluating the effects, changes, or impact of their activities. This reduces the visibility of the public value provided by the state sector, including the work accomplished by the communication units. True evaluation in the State is limited to a few government projects registered within the National Development Plan rather than being a daily practice across the entire state system. Some of the planning offices even indicate that neither planning, and even less evaluation, is treated as a strategic resource; they are seen instead as operational, compliance, and organizational resources, and for the different areas, filling in matrices and formats to record the execution of their tasks is an additional burden. In fact, one of the difficulties raised by these offices is planning their annual programs with objectives that can be evaluated, a shortcoming also recognized by the Contraloría General de la República (Comptroller General of the Republic), which notes the absence of objectives in a relevant percentage of public institution programs. For the communication units, this set of practices produces inertia in communicative action, little or no influence on institutional decision-making, and an operational focus on execution, which reduces their strategic role. It is also clear that the techniques and tools predominantly used for reporting communication results do not amount to evaluation: measurement is used with greater emphasis, and in some cases the reports produced correspond to neither of the two processes.


2021 ◽  
Author(s):  
María Andrea Cruz Blandón ◽  
Alejandrina Cristia ◽  
Okko Räsänen

Computational models of child language development can help us understand the cognitive underpinnings of the language learning process. One advantage of computational modeling is that it has the potential to address multiple aspects of language learning within a single learning architecture. If successful, such integrated models would help to pave the way for a more comprehensive and mechanistic understanding of language development. However, in order to develop more accurate, holistic, and hence impactful models of infant language learning, research on models also requires evaluation practices that allow comparing model behavior to empirical data from infants across a range of language capabilities. Moreover, there is a need for practices that can compare the developmental trajectories of infants to those of models as a function of language experience. The present study takes the first steps to address these needs. More specifically, we introduce the concept of comparing models with large-scale cumulative empirical data from infants, as quantified by meta-analyses conducted across a large number of individual behavioral studies. We start by formalizing the connection between measurable model behavior and human behavior, and then present a basic conceptual framework for meta-analytic evaluation of computational models, together with basic guidelines intended as a starting point for later work in this direction. We exemplify the meta-analytic model evaluation approach with two modeling experiments on infant-directed speech preference and native/non-native vowel discrimination. We also discuss the advantages, challenges, and potential future directions of meta-analytic evaluation practices.
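To make the proposed comparison concrete, here is a minimal sketch in Python of how a model's behavior on a discrimination task could be converted into a standardized effect size and checked against a meta-analytic estimate pooled from infant studies. All numeric values, the score distributions, and the confidence-interval check are hypothetical illustrations under assumed data, not the paper's actual procedure.

```python
# Minimal sketch of meta-analytic model evaluation: turn a model's
# per-trial discrimination scores into a standardized effect size and
# compare it with a (hypothetical) meta-analytic estimate from infants.
import numpy as np

def cohens_d(scores_a: np.ndarray, scores_b: np.ndarray) -> float:
    """Standardized mean difference between two score distributions."""
    pooled_sd = np.sqrt((scores_a.var(ddof=1) + scores_b.var(ddof=1)) / 2)
    return (scores_a.mean() - scores_b.mean()) / pooled_sd

# Hypothetical model outputs: discrimination scores for native vs.
# non-native vowel contrasts (values are illustrative only).
rng = np.random.default_rng(0)
native = rng.normal(loc=0.8, scale=0.3, size=200)
non_native = rng.normal(loc=0.5, scale=0.3, size=200)
model_d = cohens_d(native, non_native)

# Hypothetical meta-analytic effect size and standard error, of the kind
# one could pool across published infant behavioral studies.
meta_d, meta_se = 0.6, 0.1

# Simple comparison: does the model's effect fall inside the
# meta-analytic 95% confidence interval?
lo, hi = meta_d - 1.96 * meta_se, meta_d + 1.96 * meta_se
print(f"model d = {model_d:.2f}, meta-analytic 95% CI = [{lo:.2f}, {hi:.2f}]")
print("consistent with infant data" if lo <= model_d <= hi
      else "outside the infant CI")
```

In practice, the same comparison could be repeated at several simulated amounts of language experience to contrast the model's developmental trajectory with the infant trajectory, as the abstract suggests.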


Author(s):  
Jan Thomas Meyer ◽  
Roger Gassert ◽  
Olivier Lambercy

Background: User-centered design approaches have gained attention over the past decade, aiming to tackle the technology acceptance issues of wearable robotic devices that assist, support, or augment human capabilities. While there is a consensus that usability is key to user-centered design, dedicated usability evaluation studies are scarce and clear evaluation guidelines are missing. However, the careful consideration and integration of user needs appears to be essential to successfully develop an effective, efficient, and satisfactory human-robot interaction. It is primarily the responsibility of the developer to ensure that this user involvement takes place throughout the design process.

Methods: Through an online survey for developers of wearable robotics, we wanted to understand how design and evaluation in actual daily practice compare to what is reported in the literature. With a total of 31 questions, we analyzed the most common wearable robotic device applications and their technology maturity, and how these influence usability evaluation practices.

Results: A total of 158 responses from a heterogeneous population were collected and analyzed. The dataset, representing contexts of use for augmentation (16.5%), assistance (38.0%), therapy (39.8%), and a few other specific applications (5.7%), allowed for an insightful analysis of the influence of technology maturity on user involvement and usability evaluation. We identified functionality, ease of use, and performance as the most evaluated usability attributes and could specify which measures are used to assess them. We could also underline the frequent use of qualitative measures alongside the expected high prevalence of performance metrics. Concluding the analysis, we derived evaluation recommendations to foster user-centered design and usability evaluation.

Conclusion: This analysis may serve as a state-of-the-art comparison and a recommendation for usability studies in wearable robotics. We believe that by motivating more balanced, comparable, and user-oriented evaluation practices, we may support the wearable robotics field in tackling its technology acceptance limitations.
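As an illustration of the kind of tabulation such a survey analysis involves, here is a minimal sketch in Python/pandas. The response table, column names, and category labels are hypothetical stand-ins; the paper's actual questionnaire and coding scheme are not reproduced here.

```python
# Minimal sketch of tabulating survey responses by context of use and
# technology maturity (all rows below are invented examples).
import pandas as pd

responses = pd.DataFrame({
    "context_of_use": ["therapy", "assistance", "therapy", "augmentation"],
    "maturity":       ["prototype", "commercial", "prototype", "prototype"],
    "attribute":      ["functionality", "ease of use", "performance",
                       "functionality"],
})

# Share of responses per context of use (cf. the reported
# 16.5% / 38.0% / 39.8% / 5.7% split).
print(responses["context_of_use"].value_counts(normalize=True))

# Which usability attributes are evaluated at which maturity stage?
print(pd.crosstab(responses["maturity"], responses["attribute"]))
```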


2021 ◽  
Vol 17 (37) ◽  
pp. 39
Author(s):  
John Gatimu ◽  
Christopher Gakuu ◽  
Anne Ndiritu

The study sought to establish the relationship between monitoring and evaluation (M&E) practices and the performance of County Maternal Health Programmes in Kenya. The combined M&E practices examined were planning for M&E, stakeholder engagement, capacity building for M&E, and M&E data use. The study adopted a descriptive survey research design, and stratified random sampling was used to obtain 282 respondents. A self-administered structured questionnaire was the study's research instrument. Qualitative data were analyzed within specific themes using descriptive narratives; quantitative data were analyzed descriptively using measures of central tendency and dispersion. Regression was conducted to test the study hypotheses, and data were presented in frequency tables. The study found that stakeholder engagement in M&E and capacity building for M&E influenced the performance of County Maternal Health Programmes in Kenya, and that respondents agreed that planning for M&E and data management for M&E did so as well. This implies that the combined monitoring and evaluation practices influence the performance of County Maternal Health Programmes in Kenya. The study found a strong correlation between the performance of county maternal health programmes and the combined monitoring and evaluation practices, and concluded that the combined M&E practices influenced that performance. The study suggests that management develop an effective methodology and raise awareness of M&E activities to ensure project success. It also suggests that human-resources issues be addressed: workers charged with monitoring and evaluation ought to have technical capabilities, and the roles and duties of M&E personnel should be outlined at the start of projects. To ensure M&E sustainability, health sector reforms should invest in strong and vibrant technical harmonization platforms that can sustain the change agenda at all times and at every required level.
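As a concrete illustration of the regression step described above, below is a minimal sketch in Python using statsmodels. The composite scores, variable names, and synthetic data are hypothetical placeholders standing in for the study's questionnaire-derived measures; only the sample size (282) comes from the abstract.

```python
# Minimal sketch: regress a programme-performance index on composite
# scores for the four M&E practices (all data below are synthetic).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 282  # sample size reported in the abstract
df = pd.DataFrame({
    "planning":     rng.uniform(1, 5, n),   # e.g., Likert-scale means
    "stakeholders": rng.uniform(1, 5, n),
    "capacity":     rng.uniform(1, 5, n),
    "data_use":     rng.uniform(1, 5, n),
})
# Synthetic outcome, for illustration only.
df["performance"] = (0.3 * df["planning"] + 0.4 * df["stakeholders"]
                     + 0.2 * df["capacity"] + 0.3 * df["data_use"]
                     + rng.normal(0, 0.5, n))

X = sm.add_constant(df[["planning", "stakeholders", "capacity", "data_use"]])
model = sm.OLS(df["performance"], X).fit()
print(model.summary())  # coefficients and p-values for hypothesis testing
```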


2021 ◽  
Author(s):  
Max Leckert

This article comparatively analyzes two manifestos in the field of quantitative science evaluation, the Altmetrics Manifesto (AM) and the Leiden Manifesto (LM). It employs perspectives from the Sociology of (E-)Valuation to make sense of highly visible critiques that organize the current discourse. Four motifs can be reconstructed from the manifestos' valuation strategies. The AM criticizes the confinedness of established evaluation practices and advocates an expansion of quantitative research evaluation. The LM denounces the proliferation of ill-applied research metrics and calls for an enclosure of metric research assessment. It can be shown that these motifs are organized diametrically: the two manifestos represent opposed positions in a critical discourse on (e-)valuative metrics. They manifest quantitative science evaluation as a contested field.

