Research and Practice Using the CAN: Evidence Review

2021, Vol. 30(3), pp. 355-380
Author(s): Ruth Lightbody, Oliver Escobar

In Scotland, innovative designs for community engagement have been developed by national and local governments, public authorities, and civil society organisations, leading to a wealth of literature and research. This evidence review of 79 articles and reports explores the intersection between community engagement and inequality in Scotland. We find that the ways in which equality must be supported within community processes are often overlooked. Community engagement must be placed in the context of broader democratic innovation and citizenship at regional, national and global scales in order to become future-proof. Appropriate resources are required to avoid replicating systemic inequalities, as well as to support the development of a variety of institutions, processes and methods that cater for groups often mislabelled as ‘hard to reach’ but perhaps best seen as ‘easy to ignore’ (Matthews et al. 2012). The paper highlights key learning and strategic considerations to inform practice in Scotland and beyond. The findings and recommendations are relevant to reformers, innovators, researchers, practitioners and policymakers working across diverse policy areas and levels of governance.


Author(s): Stacey Springs, Jay Baruch

Despite the many studies published every day by research teams across the globe, decisions in healthcare policy, practice, and programming are often made in the absence of quality, relevant evidence. This evidence gap between research and practice highlights the need to improve the translation of what is known into what is done. Evidence synthesis employs rigorous and replicable techniques to bridge this gap and to better understand the quality and quantity of the relevant evidence that informs decision-making. In this chapter, the authors provide an overview of evidence synthesis in the health humanities and describe the process steps necessary to conduct evidence synthesis projects in a rigorous and reproducible manner. The chapter aims to give readers the knowledge needed to become informed consumers of evidence synthesis products and to become familiar with the basic steps required to complete an evidence review.


Crisis, 2015, Vol. 36(6), pp. 459-463
Author(s): Kate Monaghan, Martin Harris

Abstract. Background: Suicide is a pervasive and complex issue that can challenge counselors throughout their careers. Research and practice focus heavily on crisis management and imminent risk rather than on early intervention strategies. Early intervention strategies can assist counselors working with clients who have suicidal ideation but are not at imminent risk, or with clients whose risk factors identify them as having a stronger trajectory toward suicidal ideation. Aims: This systematic literature review examines the current literature on working with clients with suicidal ideation who are not at imminent risk, to ascertain the types of information and strategies available to counselors working with this client group. Method: An initial 622 articles were identified for analysis, and from these 24 were included in the final review, which was synthesized using a narrative approach. Results: Results indicate that research into early intervention strategies is extremely limited. Conclusion: It was possible to describe emergent themes and practice guidelines to assist counselors working with clients who have suicidal ideation but are not at imminent risk.


2002, Vol. 18(1), pp. 52-62
Author(s): Olga F. Voskuijl, Tjarda van Sliedregt

Summary: This paper presents a meta-analysis of published job analysis interrater reliability data in order to predict the expected levels of interrater reliability within specific combinations of moderators, such as rater source, experience of the rater, and type of job descriptive information. The overall mean interrater reliability of the 91 reliability coefficients reported in the literature was .59. Experienced professionals (job analysts) showed the highest reliability coefficients (.76). The method of data collection (job contact versus job description) affected only the results of experienced job analysts: for this group, higher interrater reliability coefficients were obtained for analyses based on job contact (.87) than for those based on job descriptions (.71). For other rater categories (e.g., students, organization members), neither the method of data collection nor training had a significant effect on interrater reliability. Analyses based on scales with defined levels resulted in significantly higher interrater reliability coefficients than analyses based on scales with undefined levels. Behavior and job worth dimensions were rated more reliably (.62 and .60, respectively) than attributes and tasks (.49 and .29, respectively). Furthermore, the results indicated that if nonprofessional raters are used (e.g., incumbents or students), at least two to four raters are required to obtain a reliability coefficient of .80. These findings have implications for research and practice.
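
The abstract does not state how the "two to four raters" figure is derived; estimates of this kind are conventionally obtained from the Spearman-Brown prophecy formula applied to single-rater reliabilities. The LaTeX sketch below is an illustrative reconstruction under that assumption, using the reported overall mean of .59, and is not the authors' own calculation.

% Spearman-Brown prophecy formula: reliability R_k of the pooled ratings of k raters,
% given a single-rater reliability r (assumed basis for the raters-needed estimate).
\[
  R_k = \frac{k\,r}{1 + (k - 1)\,r},
  \qquad
  k = \frac{R_k\,(1 - r)}{r\,(1 - R_k)} .
\]
% Worked example with the reported overall mean r = .59 and a target of R_k = .80:
\[
  k = \frac{.80 \times (1 - .59)}{.59 \times (1 - .80)} = \frac{.328}{.118} \approx 2.8 ,
\]
% i.e. roughly three nonprofessional raters. Lower single-rater values, such as the .49
% reported for attribute ratings, push the requirement toward four or more raters,
% consistent with the "at least two to four" figure in the abstract.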

