Evaluating approaches for the next generation of difficulty and complexity assessment tools

Author(s):  
Dean Beale ◽  
Theo Tryfonas ◽  
Michael Young


2018 ◽  
Vol 26 (3) ◽  
pp. 247-259 ◽  
Author(s):  
Nida’a K. AbuJbara ◽  
Jody A. Worley

Purpose: This paper aims to highlight the importance of soft skills for leadership and offers recommendations for soft-skill development training for the next generation of leaders.
Design/methodology/approach: An integrated review of current research literature on management, leadership and soft skills was conducted to develop recommendations for integrating soft-skill development into leadership training protocols.
Findings: A one-size-fits-all approach does not work for soft-skill development or measurement. Each soft skill is defined differently and should be assessed on the basis of different behavioral actions. Progress in this area of measurement development would substantially improve the practical use of soft skills. The development of assessment tools for the different soft skills across professional disciplines is expected to enhance other aspects of transformational leadership, such as coaching and mentoring.
Research limitations/implications: Current strategies for the assessment and measurement of soft skills are an obstacle to including these skills in current leadership training models.
Practical implications: The paper outlines implications for developing soft skills in the next generation of leaders and offers recommendations for integrating soft-skill development into leadership training programs.
Originality/value: This paper fulfills an identified need to study how soft skills can be measured and assessed, which is important given that the specific skills vary across professional disciplines and organizational contexts.


2018 ◽  
Vol 12 (1) ◽  
pp. 70
Author(s):  
Micaela La Regina ◽  
Roberto Nardi ◽  
Andrea Fontanella

Not available


2016 ◽  
Vol 7 (4) ◽  
Author(s):  
Paul C Langley

The enthusiasm with which precision medicine has been embraced over the past 15 years has obscured the fact that the evidence base for biomarker-driven assessments, in particular for next-generation sequencing (NGS), is limited. This applies both to the comparative performance of the various assessment tools and to the impact of biomarker-driven decisions at the patient level. Where a genetic test is being evaluated, there are five key questions a formulary committee should ask when assessing whether or not to recommend coverage and reimbursement for the test in target patient populations: (i) has the test met required standards for analytic and clinical validity? (ii) has the test been evaluated against competing tests for analytic and clinical validity? (iii) have the test-based claims met standards for credibility, evaluation and replication? (iv) has the test been accepted as part of the standard of care for patient management in the target disease state? (v) has the introduction of the test improved outcomes, including survivorship, adverse events, quality of life and costs, in the targeted population? The purpose of this commentary is twofold: first, to consider the appropriate evidentiary standards for the evaluation of a test and comparator tests; and, second, to identify questions that a formulary committee should address in submissions made for a test in health care systems. A critical issue is not only the comparative claims for the test against the standard of care and comparator tests, but also the assessment of test performance for the identified treatment pathways where mutations or variants are linked to recommendations for therapy options. Unless these issues are addressed, it is unlikely that the promise of personalized medicine will be realized. The absence of an evidence base will deter both physicians and their patients from adopting NGS-based recommendations.
Type: Commentary


2020 ◽  
Vol 12 (1) ◽  
pp. 471-487
Author(s):  
Karen Fisher-Vanden ◽  
John Weyant

In this review, we attempt to describe the evolution of integrated assessment modeling research since the pioneering work of William Nordhaus in 1994, highlighting a number of challenges and suggestions for moving the field forward. The field has evolved from global aggregate models focused on cost-benefit analysis to detailed process models used to generate emissions scenarios and to coupled model frameworks for impact analyses. The increased demand for higher sectoral, temporal, and spatial resolution to conduct impact analyses has led to a number of challenges, both computational and conceptual. Overcoming these challenges and moving the field forward will require not only greater effort in model coupling software and translational tools, the incorporation of empirical findings into integrated assessment models, and intermethod comparisons, but also the expansion and better coordination of the multidisciplinary research community in this field, through better training of the next generation of integrated assessment scholars and an expanded community of practice.
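The cost-benefit logic that the early aggregate models formalized can be illustrated in a few lines. The sketch below is not Nordhaus's model itself; the functional forms echo the DICE family (quadratic damages, power-law abatement costs), but every parameter value here is an assumption chosen purely for illustration.

```python
# Illustrative sketch of the cost-benefit logic behind aggregate integrated
# assessment models. Functional forms echo the DICE family; all parameter
# values are assumptions, not calibrated estimates.

def damages_fraction(temp_rise, a=0.00236):
    """Fraction of gross output lost to climate damages at a given warming."""
    return a * temp_rise ** 2

def abatement_cost_fraction(mu, theta1=0.05, theta2=2.6):
    """Cost of abating fraction mu of emissions, as a share of gross output."""
    return theta1 * mu ** theta2

def net_output(gross, temp_rise, mu):
    """Output net of climate damages and abatement spending."""
    return gross * (1.0 - damages_fraction(temp_rise) - abatement_cost_fraction(mu))

# Compare two stylized single-period policies (gross output in $ trillion).
no_action = net_output(100.0, temp_rise=3.0, mu=0.0)
with_abatement = net_output(100.0, temp_rise=2.0, mu=0.5)
print(f"net output, no action:      {no_action:.2f}")
print(f"net output, with abatement: {with_abatement:.2f}")
```

In a full model these terms sit inside an intertemporal optimization; the coupled frameworks the review describes replace such reduced forms with detailed sectoral process models, which is where the computational challenges it mentions arise.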


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Joe F. Hair Jr

Purpose: The purpose of this study is to provide an overview of emerging prediction assessment tools for composite-based PLS-SEM, particularly proposed out-of-sample prediction methodologies.
Design/methodology/approach: A review of recently developed out-of-sample prediction assessment tools for composite-based PLS-SEM that will expand the skills of researchers and inform them of new methodologies for improving the evaluation of theoretical models. Recently developed and proposed cross-validation approaches for model comparison and benchmarking are reviewed and evaluated.
Findings: The results summarize next-generation prediction metrics that will substantially improve researchers' ability to assess and report the extent to which their theoretical models provide meaningful predictions. Improved prediction assessment metrics are essential to justify (practical) implications and recommendations developed on the basis of theoretical model estimation results.
Originality/value: The paper provides an overview of recently developed and proposed out-of-sample prediction metrics for composite-based PLS-SEM that will enhance the ability of researchers to demonstrate generalization of their findings from sample data to the population.
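The cross-validation approaches the abstract refers to share a common core: estimate the model on training folds, predict held-out cases, and compare the prediction error against a naive benchmark. The sketch below illustrates only that core idea on synthetic data, with a plain one-predictor regression standing in for a PLS-SEM model; it is not any specific PLS-SEM procedure.

```python
# Sketch of out-of-sample prediction assessment via k-fold cross-validation:
# a model's holdout RMSE is compared against a naive mean benchmark.
# Synthetic data; a one-predictor OLS stands in for the structural model.
import math
import random

def fit_ols(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return my - b * mx, b

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

def kfold_rmse(xs, ys, k=5):
    """Average holdout RMSE of the model and of a naive mean benchmark."""
    idx = list(range(len(xs)))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    model_err, naive_err = [], []
    for test in folds:
        train = [i for i in idx if i not in test]
        a, b = fit_ols([xs[i] for i in train], [ys[i] for i in train])
        mean_y = sum(ys[i] for i in train) / len(train)
        model_err.append(rmse([a + b * xs[i] for i in test], [ys[i] for i in test]))
        naive_err.append(rmse([mean_y] * len(test), [ys[i] for i in test]))
    return sum(model_err) / k, sum(naive_err) / k

rng = random.Random(42)
xs = [rng.gauss(0, 1) for _ in range(100)]
ys = [2 * x + rng.gauss(0, 0.5) for x in xs]
model_rmse, naive_rmse = kfold_rmse(xs, ys)
# A model with genuine predictive power should beat the naive benchmark.
print(f"model RMSE: {model_rmse:.3f}  naive RMSE: {naive_rmse:.3f}")
```

Beating the naive benchmark out of sample is the kind of evidence the reviewed metrics formalize; in-sample fit alone cannot demonstrate the generalization to the population that the abstract emphasizes.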


2021 ◽  
Vol 7 (1) ◽  
pp. 155-164
Author(s):  
Valentyna Parashchuk ◽  
Laryssa Yarova ◽  
Stepan Parashchuk

Automated text complexity assessment tools are of enormous practical value in solving the time-consuming task of analyzing English informational texts for their complexity at the pre-reading stage. The present study depicts the application of the automated text analysis system TextEvaluator as an effective tool for analyzing texts on eight dimensions of text complexity: syntactic complexity; academic vocabulary; word unfamiliarity; word concreteness; lexical cohesion; interactive style; level of argumentation; and degree of narrativity, which are then summarized into an overall genre-dependent complexity score. This research examines the complexity dimensions of English informational texts of four genres – legal, linguistic, news, and medical – that are used for teaching reading comprehension to EFL (English as a foreign language) pre-service teachers and translators at universities in Ukraine. The data obtained with the TextEvaluator show that English legal texts are the most difficult for reading comprehension in comparison to linguistic, news, and medical texts. In contrast, medical texts are the least challenging of the four genres compared. The TextEvaluator has provided insight into the complexity of English informational texts across their different genres that would be useful for assembling corpora of reading passages scaled on specific dimensions of text complexity that predict text difficulty for EFL pre-service teachers and translators.
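TextEvaluator is a proprietary system, so its scoring cannot be reproduced here. The sketch below only illustrates how crude proxies for two of the listed dimensions (syntactic complexity and word unfamiliarity) might be computed; the word-length threshold is an arbitrary assumption, and real systems use parsed syntax and word-frequency corpora instead.

```python
# Crude proxies for two text-complexity dimensions, for illustration only:
#  - syntactic complexity  -> mean words per sentence
#  - word unfamiliarity    -> share of long words (>= 9 letters, assumed cutoff)
import re

def complexity_proxies(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    mean_sentence_len = len(words) / len(sentences)
    long_word_share = sum(1 for w in words if len(w) >= 9) / len(words)
    return mean_sentence_len, long_word_share

legal = ("The party of the first part shall indemnify and hold harmless "
         "the aforementioned counterparty notwithstanding any antecedent "
         "representations heretofore promulgated.")
news = "The mayor opened a new park. Children played there all day."

print(complexity_proxies(legal))  # higher on both proxies
print(complexity_proxies(news))
```

Even these two proxies separate the genres in the expected direction (legal above news), which is consistent with the study's finding that legal texts scored as the most difficult.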


1983 ◽  
Vol 14 (1) ◽  
pp. 7-21 ◽  
Author(s):  
Robert E. Owens ◽  
Martha J. Haney ◽  
Virginia E. Giesow ◽  
Lisa F. Dooley ◽  
Richard J. Kelly

This paper examines the test item content of several language assessment tools. A comparison of test breadth and depth is presented. The resultant information provides a diagnostic aid for school speech-language pathologists.

