Peer Review #2 of "Evaluation of the validity of the Psychology Experiment Building Language tests of vigilance, auditory memory, and decision making (v0.1)"

Author(s):  
C Brydges
2015 ◽  
Author(s):  
Brian Piper ◽  
Shane T Mueller ◽  
Sara Talebzadeh ◽  
Min Jung Ki

Background. The Psychology Experiment Building Language (PEBL) test battery (http://pebl.sourceforge.net/) is a popular application for neurobehavioral investigations. This study evaluated the correspondence between the PEBL and the non-PEBL versions of four executive function tests. Methods. In one cohort, young adults (N = 44) completed both the Conners' Continuous Performance Test (CCPT) and the PEBL CPT (PCPT) with the order counterbalanced. In a second cohort, participants (N = 47) completed a non-computerized (Wechsler) and a computerized (PEBL) Digit Span (WDS or PDS), both Forward and Backward. Participants also completed the Psychological Assessment Resources or the PEBL versions of the Iowa Gambling Task (PARIGT or PEBLIGT). Results. The between-test correlations were moderately high (reaction time r = 0.78, omission errors r = 0.65, commission errors r = 0.66) on the CPT. DS Forward was significantly greater than DS Backward independent of the test modality. The total WDS score was moderately correlated with the PDS (r = 0.56). The PARIGT and the PEBLIGT showed a very similar pattern for response times across blocks, development of preference for Advantageous over Disadvantageous Decks, and Deck selections. However, the amount of money earned (score – loan) was significantly higher in the PEBLIGT during the last Block. Conclusions. These findings are broadly supportive of the criterion validity of the PEBL measures of sustained attention, short-term memory, and decision making. Select differences between workalike versions of the same test highlight how detailed aspects of implementation may have more important consequences for computerized testing than has been previously acknowledged.
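As a rough illustration of the between-test agreement reported above, the sketch below computes a Pearson correlation between paired scores from two versions of the same test (as in the CCPT vs. PCPT comparison). The arrays, values, and variable names are hypothetical and are not taken from the study.

```python
# Minimal sketch (not the authors' analysis code): Pearson correlation between
# paired measures from a commercial CPT and the PEBL CPT for the same participants.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical mean reaction times (ms) for five participants who took both versions
ccpt_rt = np.array([412.0, 389.5, 455.2, 398.7, 430.1])  # commercial CPT
pcpt_rt = np.array([405.3, 395.0, 448.9, 410.2, 422.6])  # PEBL CPT

r, p = pearsonr(ccpt_rt, pcpt_rt)
print(f"Between-test correlation: r = {r:.2f}, p = {p:.3f}")
```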


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e1772 ◽  
Author(s):  
Brian Piper ◽  
Shane T. Mueller ◽  
Sara Talebzadeh ◽  
Min Jung Ki

Background. The Psychology Experiment Building Language (PEBL) test battery (http://pebl.sourceforge.net/) is a popular application for neurobehavioral investigations. This study evaluated the correspondence between the PEBL and the non-PEBL versions of four executive function tests. Methods. In one cohort, young adults (N = 44) completed both the Conners' Continuous Performance Test (CCPT) and the PEBL CPT (PCPT) with the order counterbalanced. In a second cohort, participants (N = 47) completed a non-computerized (Wechsler) and a computerized (PEBL) Digit Span (WDS or PDS), both Forward and Backward. Participants also completed the Psychological Assessment Resources or the PEBL versions of the Iowa Gambling Task (PARIGT or PEBLIGT). Results. The between-test correlations were moderately high (reaction time r = 0.78, omission errors r = 0.65, commission errors r = 0.66) on the CPT. DS Forward was significantly greater than DS Backward on the WDS (p < .0005) and the PDS (p < .0005). The total WDS score was moderately correlated with the PDS (r = 0.56). The PARIGT and the PEBLIGT showed a very similar pattern for response times across blocks, development of preference for Advantageous over Disadvantageous Decks, and Deck selections. However, the amount of money earned (score – loan) was significantly higher in the PEBLIGT during the last Block. Conclusions. These findings are broadly supportive of the criterion validity of the PEBL measures of sustained attention, short-term memory, and decision making. Select differences between workalike versions of the same test highlight how detailed aspects of implementation may have more important consequences for computerized testing than has been previously acknowledged.
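The IGT outcomes described above (block-by-block preference for Advantageous over Disadvantageous decks, and money earned expressed as score minus loan) can be summarised with a small helper like the sketch below. The function name, the default $2,000 loan, the 20-trial block size, and the example trials are assumptions for illustration only, not the published scoring code of either IGT version.

```python
# Minimal sketch (hypothetical): per-block advantageous-deck preference and money earned.
from collections import Counter

def igt_summary(deck_choices, trial_outcomes, loan=2000, block_size=20):
    """Summarise an Iowa Gambling Task session.

    deck_choices   -- list of deck labels 'A'/'B'/'C'/'D', one per trial
    trial_outcomes -- net win/loss in dollars for each trial
    Returns (net_scores_per_block, money_earned): a block's net score is
    (C + D) picks minus (A + B) picks; money earned is final score minus the loan.
    """
    net_scores = []
    for start in range(0, len(deck_choices), block_size):
        picks = Counter(deck_choices[start:start + block_size])
        net_scores.append((picks["C"] + picks["D"]) - (picks["A"] + picks["B"]))
    final_score = loan + sum(trial_outcomes)
    return net_scores, final_score - loan

# Tiny usage example with made-up trials (two blocks of five trials each)
choices = ["A", "B", "C", "D", "C", "D", "C", "C", "D", "D"]
outcomes = [100, -150, 50, 50, 50, -25, 50, 50, 50, 50]
print(igt_summary(choices, outcomes, loan=2000, block_size=5))  # ([1, 5], 275)
```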



2021 ◽  
Vol 42 (02) ◽  
pp. 191-195

Good reviewers are essential to the success of any journal, and peer review is a major pillar of science. We are grateful to those mentioned below for dedicating their time and expertise over the past year to helping our authors improve and refine their manuscripts and to supporting the Editor(s) in the decision-making process.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Meng Ee Wong ◽  
YingMin Lee

Purpose. This study explored in-service educators' experience of using the Wisconsin Assistive Technology Initiative (WATI) for assistive technology (AT) decision-making within Singapore schools. Design/methodology/approach. The study adopted a qualitative design. Eight educators across both mainstream and special education schools were introduced to the WATI framework, which they subsequently employed as a trial experience for a student under their care. Written feedback gathered from participants was analysed to identify common issues and themes regarding the use of the WATI framework for AT decision-making. Findings. The comprehensive consideration of a broad scope of different factors, the provision of a structured process for AT decision-making, and a common language for use by different stakeholders emerged as key benefits of implementing the WATI. Challenges encountered include administrative struggles in gathering different stakeholders together, time and resource constraints, and difficulties in loaning AT devices for trial use. Practical implications. Based on educators' feedback, recommendations to facilitate the adoption of the WATI for AT decision-making within Singapore schools are discussed. This study also highlights the need for greater AT instruction within both preservice and in-service teacher preparation programmes in Singapore. Originality/value. Schools in Singapore currently rarely have a framework in place to guide educators through a systematic process of AT consideration. It is anticipated that this study will spearhead and drive the adoption of systematic frameworks such as the WATI for better AT decision-making within Singapore schools. Peer review. The peer review history for this article is available at: https://publons.com/publon/10.1108/JET-03-2021-0015

