Fit for measure? Evaluation in community development

Author(s):  
Oonagh Mc Ardle ◽  
Una Murray

Abstract
Community development is a process whereby people concerned with social and environmental justice act together as engaged and active citizens to change their collective circumstances. A concern to deliver change through these processes raises the questions: How do we know if our work is effective? What do we mean by outcomes, and how do we assess them? What ‘evidence’ will help us articulate and improve how we do our work? Over the past decades there has been a growing global trend towards evaluation to understand and improve practice. Nevertheless, there is a lack of clarity about, and application of, appropriate frameworks for community development evaluation (Motherway, 2006; CDF, 2010; Whelan et al., 2019). Drawing on current practice, this paper explores challenges, principles and methods for evaluation in community development. We argue that ‘measurement’ requires a clear, shared understanding of community development’s purpose and processes, including recognition of community as existing both within and beyond place. Drawing on international evaluation criteria and models, we conclude that community development work could learn from these, as long as communities occupy central decision-making roles. We offer principles to inform evaluation efforts in community development, suggesting that good community development processes and their associated outcomes themselves represent a theory of change.

2019 ◽  
Vol 26 (13) ◽  
pp. 2330-2355 ◽  
Author(s):  
Anutthaman Parthasarathy ◽  
Sasikala K. Anandamma ◽  
Karunakaran A. Kalesh

Peptide therapeutics has made tremendous progress in the past decade. Many of the inherent weaknesses of peptides which hampered their development as therapeutics are now more or less effectively tackled by recent scientific and technological advancements in integrated drug discovery settings. These include recent developments in synthetic organic chemistry, high-throughput recombinant production strategies, high-resolution analytical methods, high-throughput screening options, ingenious drug delivery strategies and novel formulation preparations. Here, we briefly describe the key methodologies and strategies used in therapeutic peptide development, with selected examples of the most recent developments in the field. The aim of this review is to highlight the viable options a medicinal chemist may consider in order to improve a specific pharmacological property of interest in a peptide lead entity, and thereby to rationally assess the therapeutic potential of this class of molecules, which is traditionally (and incorrectly) considered ‘undruggable’.


2018 ◽  
Vol 14 (4) ◽  
pp. 734-747 ◽  
Author(s):  
Constance de Saint Laurent

There has been much hype, over the past few years, about the recent progress of artificial intelligence (AI), especially through machine learning. If one is to believe many of the headlines that have proliferated in the media, as well as in an increasing number of scientific publications, it would seem that AI is now capable of creating and learning in ways that are starting to resemble what humans can do, and that we should start to hope – or fear – that the creation of fully cognisant machines might be something we will witness in our lifetime. However, many of these beliefs are based on deep misconceptions about what AI can do, and how. In this paper, I start with a brief introduction to the principles of AI, machine learning, and neural networks, primarily intended for psychologists and social scientists, who often have much to contribute to the debates surrounding AI but lack a clear understanding of what it can currently do and how it works. I then debunk four common myths associated with AI: 1) it can create, 2) it can learn, 3) it is neutral and objective, and 4) it can solve ethically and/or culturally sensitive problems. In a third and last section, I argue that these misconceptions represent four main dangers: 1) avoiding debate, 2) naturalising our biases, 3) deresponsibilising creators and users, and 4) missing out on some of the potential uses of machine learning. I conclude on the potential benefits of using machine learning in research, and thus on the need to defend machine learning without romanticising what it can actually do.


2012 ◽  
Vol 51 (02) ◽  
pp. 104-111 ◽  
Author(s):  
J. Talmon ◽  
E. Ammenwerth ◽  
J. Brender ◽  
M. Rigby ◽  
P. Nykanen ◽  
...  

Summary
Background: We previously devised and published a guideline for reporting health informatics evaluation studies named STARE-HI, which is formally endorsed by IMIA and EFMI.
Objective: To develop a prioritization framework of ranked reporting items to assist authors when reporting health informatics evaluation studies in space-restricted conference papers, and to apply this framework to measure the quality of recent health informatics conference papers on evaluation studies.
Method: We deconstructed the STARE-HI guideline to identify reporting items. By web-based survey, we invited a total of 111 authors of health informatics evaluation studies, reviewers and editors of health informatics conference proceedings to score those reporting items on a scale ranging from “0 – not necessary in a conference paper” to “10 – essential in a conference paper”. From the responses we derived a mean priority score per item. All evaluation papers published in the proceedings of MIE2006, Medinfo2007, MIE2008 and AMIA2008 were rated on these items by two reviewers, and from these ratings a priority-adjusted completeness score was computed for each paper.
Results: We identified 104 reporting items from the STARE-HI guideline. The response rate for the survey was 59% (66 out of 111). The most important reporting items (mean score ≥ 9) were “Interpret the data and give an answer to the study question – (in Discussion)”, “Whether it is a laboratory, simulation or field study – (in Methods-study design)” and “Description of the outcome measure/evaluation criteria – (in Methods-study design)”. Within each reporting area, the statistically significantly more important reporting items were distinguished from the less important ones. Four reporting items had a mean score ≤ 6. The mean priority-adjusted completeness of evaluation papers from recent health informatics conferences was 48% (range 14–78%).
Conclusion: We produced a ranked list of reporting items from STARE-HI according to their prioritized relevance for inclusion in space-limited conference papers. The priority-adjusted completeness scores demonstrated room for improvement in the analyzed conference papers. We believe this prioritization framework can help improve the quality and utility of conference papers on health informatics evaluation studies.
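A priority-adjusted completeness score of the kind described above can be sketched as a weighted fraction: each reporting item carries its mean priority score as a weight, and a paper scores the weighted share of items it actually reports. This is a minimal illustration of that weighting idea only; the item names and priority values below are invented for the example and are not taken from STARE-HI.

```python
def priority_adjusted_completeness(priorities, reported):
    """Weighted completeness in [0, 1].

    priorities: dict mapping reporting item -> mean priority score (0-10)
    reported:   set of items actually reported in the paper
    """
    total = sum(priorities.values())
    achieved = sum(score for item, score in priorities.items() if item in reported)
    return achieved / total

# Hypothetical items and priorities, for illustration only.
priorities = {"study design": 9.2, "outcome measure": 9.0,
              "setting": 6.5, "funding": 4.0}
reported = {"study design", "outcome measure"}
score = priority_adjusted_completeness(priorities, reported)
# (9.2 + 9.0) / (9.2 + 9.0 + 6.5 + 4.0), roughly 0.63
```

Under such a scheme, omitting a high-priority item depresses the score far more than omitting a low-priority one, which matches the intent of ranking items for space-limited papers.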


2018 ◽  
Vol 11 (18) ◽  
pp. 153-180
Author(s):  
Zbigniew Jurczyk

The paper aims to show the influence of the views espoused by economic theories and schools of economics on competition policy, as embedded in antitrust law and conducted by competition authorities in the field of vertical agreements. The paper demonstrates how substantially the economization of antitrust law has changed the assessment of the harmfulness of vertical agreements. The analysis of economic aspects of vertical agreements in antitrust analysis reveals their pro-competitive effects and benefits, with the consumer as their beneficiary. The basic instrument of this economization is that antitrust bodies draw on specific economic models and theories that can be employed in their practice. Within the scope of the paper, the author synthesizes the role and influence of those models and schools of economics on the application of competition law in the context of vertical agreements. Presenting, one after another, the theories and schools of economics which have dealt, or still deal, with competition policy, the author emphasises that their impact has been more or less direct in nature. Some remain at the level of general principles and the axiology of competition policy, while others delineate concrete evaluation criteria and show how the application of those criteria changes the picture of anti-competitive practices; in other words, why vertical agreements, which in the past were considered to restrain competition, are no longer perceived as such. The paper presents the models and recommendations of neoclassical economics, the Harvard School, the Chicago and post-Chicago Schools, the ordoliberal school, the Austrian and neo-Austrian schools, as well as the transaction cost theory.


Author(s):  
Anna Harris ◽  
John Nott

This paper explores the material histories which influence contemporary medical education. Using two obstetric simulators found in the distinct teaching environments of the University of Development Studies in the north of Ghana and Maastricht University in the south of the Netherlands, this paper deconstructs the material conditions which shape current practice in order to emphasise the past practices that remain relevant, yet often invisible, in modern medicine. Building on conceptual ideas drawn from STS and the productive tensions which emerge from close collaboration between historians and anthropologists, we argue that the pull of past practice can be understood as a form of friction, where historical practices ‘stick’ to modern materialities. We argue that the labour required for the translation of material conditions across both time and space is expressly relevant for the ongoing use and future development of medical technologies.


1996 ◽  
Vol 49 (1) ◽  
pp. 112-119 ◽  
Author(s):  
G. G. Bennett

About ten years ago this author wrote the software for a suite of navigation programmes resident in a small hand-held computer. In the course of this work it became apparent that the standard textbooks of navigation were perpetuating a flawed method of calculating rhumb lines on the Earth considered as an oblate spheroid. On further investigation it became apparent that these incorrect methods were being used in programming a number of calculators/computers and satellite navigation receivers. Although the discrepancies were not large, it was disquieting to compare the results of the same rhumb-line calculations from a number of such devices and find variations of some miles when the output was given, and therefore purported to be accurate, to a tenth of a mile in distance and/or a tenth of a minute of arc in position. The problem has been highlighted in the past, and the references at the end of this paper show that a number of methods have been proposed for its amelioration. This paper summarizes formulae that the author recommends for accurate solutions. Most of these may be found in standard geodetic textbooks, but new formulae and schemes of solution suitable for use with computers or tables are also provided. The latter also take into account situations in which a near-indeterminate solution may arise. Some examples demonstrating the methods are provided in an appendix. The data for these problems do not refer to actual terrestrial situations but have been selected for illustrative purposes only. Practising ships' navigators will find the methods described in this paper directly applicable to their work, and the methods should find ready acceptance because they are similar to current practice. None of the references cited at the end of this paper addresses the practical task of calculation, using either a computer or tabular techniques.
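To illustrate the kind of spheroidal rhumb-line computation discussed above, the sketch below derives the loxodrome course from the difference in isometric latitude and the distance from the meridian arc length, with a fallback for the near-indeterminate east-west case. This is a simplified textbook-style example on the WGS-84 ellipsoid, not a reproduction of the author's recommended scheme of solution.

```python
import math

A = 6378137.0                # WGS-84 semi-major axis (metres)
F = 1 / 298.257223563        # WGS-84 flattening
E2 = F * (2 - F)             # first eccentricity squared
E = math.sqrt(E2)

def isometric_latitude(phi):
    """Isometric latitude on the spheroid (radians in, dimensionless out)."""
    return math.atanh(math.sin(phi)) - E * math.atanh(E * math.sin(phi))

def meridian_arc(phi):
    """Meridian arc length from the equator to latitude phi (metres),
    via the usual series expansion in the eccentricity."""
    e2, e4, e6 = E2, E2 ** 2, E2 ** 3
    return A * ((1 - e2 / 4 - 3 * e4 / 64 - 5 * e6 / 256) * phi
                - (3 * e2 / 8 + 3 * e4 / 32 + 45 * e6 / 1024) * math.sin(2 * phi)
                + (15 * e4 / 256 + 45 * e6 / 1024) * math.sin(4 * phi)
                - (35 * e6 / 3072) * math.sin(6 * phi))

def rhumb_line(lat1, lon1, lat2, lon2):
    """Course (degrees true) and distance (metres) along the loxodrome."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    dpsi = isometric_latitude(p2) - isometric_latitude(p1)
    course = math.atan2(dlon, dpsi)
    if abs(dpsi) > 1e-12:
        # General case: distance along the meridian, stretched by the course.
        dist = (meridian_arc(p2) - meridian_arc(p1)) / math.cos(course)
    else:
        # Near east-west track: run along the parallel of latitude,
        # whose radius on the spheroid depends on the eccentricity.
        r = A * math.cos(p1) / math.sqrt(1 - E2 * math.sin(p1) ** 2)
        dist = abs(r * dlon)
    return math.degrees(course) % 360, dist
```

A spherical-Earth shortcut would replace `isometric_latitude` with ln tan(π/4 + φ/2) and `meridian_arc` with Rφ; the small but systematic differences that result are exactly the kind of discrepancy the abstract describes.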


2011 ◽  
pp. 216-234 ◽  
Author(s):  
Roger W. Caves

The use of ICTs in community development has increased over the past 10 years. This chapter examines how the “Smart Community” concept can help areas of various sizes accomplish a variety of local and regional development processes. The chapter covers such issues as the role of citizen participation, the roles of information technologies, the components of a “Smart Community”, the California Smart Communities Program, and the lessons learned to date from the program. The chapter concludes with a discussion of the “digital divide” between people with access to various ICTs and those without any access to ICTs.


2018 ◽  
Vol 170 ◽  
pp. 01132
Author(s):  
Andrey Gorokhov ◽  
Alexey Ignatyev ◽  
Vitaly Smirnov

The purpose of the study is to develop a potential mechanism for monitoring and motivating municipal authorities, based on the evolution of the management of development processes. The paper describes the positive experience of managing local development processes using the example of the Bagrationovsky urban settlement, whose administration actively interacted with the pharmaceutical company “Infamed-K” located in Bagrationovsk. As a result, it was possible not only to secure the settlement's participation in various regional and federal programs on co-financing terms and to fully pay off past-due debt incurred earlier, but also to create a favorable living environment and solve many of the residents' problems.


2020 ◽  
Vol 47 (3) ◽  
pp. 380-390 ◽  
Author(s):  
Nina Wallerstein ◽  
John G. Oetzel ◽  
Shannon Sanchez-Youngman ◽  
Blake Boursaw ◽  
Elizabeth Dickson ◽  
...  

Community-based participatory research (CBPR) and community-engaged research have been established in the past 25 years as valued research approaches within health education, public health, and other health and social sciences for their effectiveness in reducing inequities. While early literature focused on partnering principles and processes, within the past decade, individual studies, as well as systematic reviews, have increasingly documented outcomes in community support and empowerment, sustained partnerships, healthier behaviors, policy changes, and health improvements. Despite enhanced focus on research and health outcomes, the science lags behind the practice. CBPR partnering pathways that result in outcomes remain little understood, with few studies documenting best practices. Since 2006, the University of New Mexico Center for Participatory Research with the University of Washington’s Indigenous Wellness Research Institute and partners across the country has engaged in targeted investigations to fill this gap in the science. Our inquiry, spanning three stages of National Institutes of Health funding, has sought to identify which partnering practices, under which contexts and conditions, have capacity to contribute to health, research, and community outcomes. This article presents the research design of our current grant, Engage for Equity, including its history, social justice principles, theoretical bases, measures, intervention tools and resources, and preliminary findings about collective empowerment as our middle range theory of change. We end with lessons learned and recommendations for partnerships to engage in collective reflexive practice to strengthen internal power-sharing and capacity to reach health and social equity outcomes.


Author(s):  
Alexandre Gachet ◽  
Ralph Sprague

Finding appropriate decision support systems (DSS) development processes and methodologies is a topic that has kept researchers in the decision support community busy for at least the past three decades. Inspired by Gibson and Nolan's curve (Gibson & Nolan, 1974; Nolan, 1979), it is fair to contend that the field of DSS development is reaching the end of its expansion (or contagion) stage, which is characterized by the proliferation of processes and methodologies in all areas of decision support. Studies on DSS development conducted during the last 15 years (e.g., Arinze, 1991; Saxena, 1992) have identified more than 30 different approaches to the design and construction of decision support methods and systems (Marakas, 2003). Interestingly enough, none of these approaches predominates, and the various DSS development processes usually remain very distinct and project-specific. This situation can be interpreted as a sign that the field of DSS development should soon enter its formalization (or control) stage. We therefore propose a unifying perspective on DSS development based on the notion of context. In this article, we argue that the context of the target DSS (whether organizational, technological, or developmental) is not properly considered in the literature on DSS development. Researchers propose processes (e.g., Courbon, Drageof, & Tomasi, 1979; Stabell, 1983), methodologies (e.g., Blanning, 1979; Martin, 1982; Saxena, 1991; Sprague & Carlson, 1982), cycles (e.g., Keen & Scott Morton, 1978; Sage, 1991), guidelines (e.g., for end-user computing), and frameworks, but often fail to explicitly describe the context in which the solution can be applied.

