The research and practice of blasting on demand design of quarry

Author(s):  
Boshen Zhao ◽  
Hongze Zhao ◽  
Duojin Wu ◽  
Jiandong Sun
2017 ◽  
Vol 10 (4) ◽  
pp. 687-696 ◽  
Author(s):  
Alice M. Brawley

I am concerned about industrial and organizational (I-O) psychology's relevance to the gig economy, defined here as the broad trend toward technology-based platform work. This sort of work happens on apps like Uber (where the app connects drivers and riders) and sites like MTurk (where human intelligence tasks, or HITs, are advertised to workers on behalf of requesters). We carry on with I-O research and practice as if technology comprises only things (e.g., phones, websites, platforms) that we use to assess applicants and complete work. However, technology has much more radically restructured work as we know it, to happen in a much more piecemeal, on-demand fashion, reviving debates about worker classification and changing the reality of work for many workers (Sundararajan, 2016). Instead of studying technology as a thing we use, it's critical that we "zoom out" to see and adapt our field to this bigger picture of trends toward a gig economy. Technology is no longer merely a phone used to check work email or complete pre-hire assessments; technology and work are now inseparable. For example, working on MTurk requires constant Internet access (Brawley, Pury, Switzer, & Saylors, 2017; Ma, Khansa, & Hou, 2016). Alarmingly, some researchers describe these workers as precarious (Spreitzer, Cameron, & Garrett, 2017), dependent on an extremely flexible (a label that is perhaps euphemistic for unreliable) source of work. Although it's unlikely that all workers consider their "gig" a full-time job or otherwise necessary income, at least some do: An estimated 10–40% of MTurk workers consider themselves serious gig workers (Brawley & Pury, 2016). Total numbers for the broader gig economy are only growing, with recent tax-based estimates including 34% of the US workforce now and up to 43% within 3 years (Gillespie, 2017).
It appears we're seeing some trends in work reverse and return to piece work (e.g., a ride on Uber, a HIT on MTurk), as if we've simply digitized the assembly line (Davis, 2016). Over time, these trends could accelerate, and we could potentially see the total elimination of work (Morrison, 2017).


Crisis ◽  
2015 ◽  
Vol 36 (6) ◽  
pp. 459-463
Author(s):  
Kate Monaghan ◽  
Martin Harris

Abstract. Background: Suicide is a pervasive and complex issue that can challenge counselors throughout their careers. Research and practice focus heavily on crisis management and imminent risk rather than on early intervention strategies. Early intervention strategies can assist counselors working with clients who have suicidal ideation but are not at imminent risk, or with clients whose risk factors identify them as having a stronger trajectory toward suicidal ideation. Aims: This systematic literature review examines the current literature on working with clients with suicidal ideation who are not at imminent risk, to ascertain the types of information and strategies available to counselors working with this client group. Method: An initial 622 articles were identified for analysis and, from these, 24 were included in the final review, which was synthesized using a narrative approach. Results: Results indicate that research into early intervention strategies is extremely limited. Conclusion: It was possible to describe emergent themes and practice guidelines to assist counselors working with clients with suicidal ideation but not at imminent risk.


2002 ◽  
Vol 18 (1) ◽  
pp. 52-62 ◽  
Author(s):  
Olga F. Voskuijl ◽  
Tjarda van Sliedregt

Summary: This paper presents a meta-analysis of published job analysis interrater reliability data in order to predict the expected levels of interrater reliability within specific combinations of moderators, such as rater source, experience of the rater, and type of job descriptive information. The overall mean interrater reliability of 91 reliability coefficients reported in the literature was .59. Experienced professionals (job analysts) showed the highest reliability coefficients (.76). The method of data collection (job contact versus job description) affected only the results of experienced job analysts: for this group, higher interrater reliability coefficients were obtained for analyses based on job contact (.87) than for those based on job descriptions (.71). For other rater categories (e.g., students, organization members), neither the method of data collection nor training had a significant effect on interrater reliability. Analyses based on scales with defined levels resulted in significantly higher interrater reliability coefficients than analyses based on scales with undefined levels. Behavior and job worth dimensions were rated more reliably (.62 and .60, respectively) than attributes and tasks (.49 and .29, respectively). Furthermore, the results indicated that if nonprofessional raters are used (e.g., incumbents or students), at least two to four raters are required to obtain a reliability coefficient of .80. These findings have implications for research and practice.
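The link between single-rater reliability and the number of raters needed to reach .80 is conventionally computed with the Spearman–Brown prophecy formula. A minimal sketch of that calculation (the function names are illustrative, not from the paper; the paper's exact moderator-specific figures may differ):

```python
import math

def spearman_brown(r_single: float, k: int) -> float:
    """Projected reliability of the average of k parallel raters,
    given the single-rater reliability r_single."""
    return k * r_single / (1 + (k - 1) * r_single)

def raters_needed(r_single: float, target: float = 0.80) -> int:
    """Smallest number of raters whose averaged ratings reach the
    target reliability (inverse of the Spearman-Brown formula)."""
    k = target * (1 - r_single) / (r_single * (1 - target))
    return math.ceil(k)

# Using the overall mean interrater reliability of .59 reported above:
print(raters_needed(0.59))                 # 3 raters
print(round(spearman_brown(0.59, 3), 2))   # 0.81
```

With coefficients in the .59–.62 range, three raters suffice; lower single-rater reliabilities (e.g., .49 for attributes) push the required number higher, consistent with the abstract's recommendation of multiple nonprofessional raters.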

