The Role of Technology in Clinical Neuropsychology

Published by Oxford University Press
ISBN: 9780190234737, 9780197559543

Author(s):
Thomas F. Quatieri, James R. Williamson

Multimodal biomarkers based on behavioral, neurophysiological, and cognitive measurements have recently increased in popularity for the detection of cognitive stress and neurologically based disorders. Such conditions significantly and adversely affect human performance and quality of life in a large fraction of the world’s population. Example modalities used in detection of these conditions include speech, facial expression, physiology, eye tracking, gait, and electroencephalography (EEG). Toward the goal of finding simple, noninvasive means to detect, predict, and monitor cognitive stress and neurological conditions, MIT Lincoln Laboratory is developing biomarkers that satisfy three criteria. First, we seek biomarkers that reflect core components of cognitive status, such as working memory capacity, processing speed, attention, and arousal. Second, and as importantly, we seek biomarkers that reflect timing and coordination relations both within components of each modality and across different modalities. This is based on the hypothesis that neural coordination across different parts of the brain is essential in cognition. An example of timing and coordination within a modality is the set of finely timed and synchronized physiological components of speech production, whereas an example of coordination across modalities is the timing and synchrony that occur between speech and facial expression during speaking. Third, we seek multimodal biomarkers that contribute in a complementary fashion under various channel and background conditions. In this chapter, as an illustration of the biomarker approach, we focus on cognitive stress and the particular case of detecting different cognitive load levels. We also briefly show how similar feature-extraction principles can be applied to a neurological condition through the example of major depressive disorder (MDD).
MDD is one of several neuropsychiatric disorders where multimodal biomarkers based on principles of timing and coordination are important for detection (Cummins et al., 2015; Helfer et al., 2014; Quatieri & Malyska, 2012; Trevino, Quatieri, & Malyska, 2011; Williamson, Quatieri, Helfer, Ciccarelli, & Mehta, 2014; Williamson et al., 2013, 2015; Yu, Quatieri, Williamson, & Mundt, 2014).
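The chapter's actual feature sets are more sophisticated, but the underlying idea of measuring timing relations between feature streams can be illustrated with a lagged cross-correlation. The sketch below uses hypothetical synthetic signals; the function name and the demo are inventions for illustration, not the Laboratory's method:

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Correlation between two standardized feature streams at a range of
    time lags. A peak away from lag 0 indicates a consistent timing offset
    between the streams (e.g., a vocal feature leading a facial one)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    corr = {}
    for lag in range(-max_lag, max_lag + 1):
        # Pair x[t] with y[t + lag], truncating at the edges.
        xs = x[max(0, -lag):n - max(0, lag)]
        ys = y[max(0, lag):n - max(0, -lag)]
        corr[lag] = float(np.mean(xs * ys))
    return corr

# Hypothetical demo: y is x delayed by 3 frames, so the strongest
# coupling should appear at lag +3.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 3)
c = lagged_correlation(x, y, 5)
print(max(c, key=c.get))  # lag with the strongest coupling
```

A matrix of such lag-dependent correlations across many feature channels is one simple way to quantify "coordination" within and across modalities.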


Author(s):  
Erin D. Bigler

All traditional neuropsychological assessment techniques emerged in an era prior to modern neuroimaging. In fact, the question-answer/paper-and-pencil techniques that gained traction with Alfred Binet in 1905 remain the core methods today. Indeed, Binet’s efforts began the era of standardized human metrics designed to assess a broad spectrum of cognitive, emotional, and behavioral functions and abilities. During the early part of the 20th century, the concept of an intellectual quotient expressed as a standard score with a mean of 100 and a standard deviation of 15 also initiated the era of quantitative descriptions of mental and emotional functioning (Anastasi, 1968; Stern, 1912). Other descriptive statistical metrics were applied to human measurement, including scaled, percentile, T-score, and z-score statistics. Statistical measures became part of the assessment lexicon, and each possessed strengths as well as weaknesses for descriptive purposes, but together they proved immensely effective for communicating test findings and for inferring average, above-the-norm, or below-the-norm performances. In turn, descriptive statistical methods became the cornerstone for describing neuropsychological findings, typically reported by domain of functioning (memory, executive, language, etc.; Cipolotti & Warrington, 1995; Lezak, Howieson, Bigler, & Tranel, 2012). As much as psychology and medicine have incorporated descriptive statistics into research and clinical application, a major focus of both disciplines also has been binary classification: normal versus abnormal. This dichotomization recognizes some variability and individual differences within a test score or laboratory procedure, but at some point the clinician makes the binary decision of normal or abnormal. In the early days of neuroimaging, which are discussed more thoroughly below, interpretation of computed tomographic (CT) or magnetic resonance imaging (MRI) scans was mostly approached in this manner.
Although a great deal of information was available from CT and MRI images, if nothing obviously abnormal was seen, the radiological report merely stated in the Impression section, “Normal CT (or MRI) of the brain,” with no other qualification (or quantification) of why the findings were deemed normal other than that the image appeared that way. Until recently, quantification of information in an image required hand editing and was excruciatingly time consuming.
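The score conventions described above (z-scores with mean 0 and SD 1, T-scores with mean 50 and SD 10, and deviation standard scores with mean 100 and SD 15) are simple linear transformations of one another. A minimal sketch, using a hypothetical raw score and hypothetical norms:

```python
def to_z(raw, norm_mean, norm_sd):
    """z-score: distance from the normative mean in SD units."""
    return (raw - norm_mean) / norm_sd

def z_to_t(z):
    """T-score: mean 50, SD 10."""
    return 50 + 10 * z

def z_to_standard(z):
    """Deviation standard score (IQ-style): mean 100, SD 15."""
    return 100 + 15 * z

# Hypothetical examinee: raw memory score of 34 against norms
# with mean 28 and SD 4.
z = to_z(34, 28, 4)
print(z)                 # 1.5 (1.5 SD above the normative mean)
print(z_to_t(z))         # 65.0
print(z_to_standard(z))  # 122.5
```

A percentile can then be read off the normal curve (a z of 1.5 falls at roughly the 93rd percentile), and the binary normal/abnormal decision is typically operationalized as a cutoff on this same scale, for example flagging scores below z = -1.5.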


Author(s):
Anthony J.-W. Chen, Fred Loya

In an instant, a brain injury can cause changes that affect a person for a lifetime. Although traumatic brain injury (TBI) can result in almost any neurological deficit, the most common and persistent deficits tend to affect neurocognitive functioning. Functional issues may produce a tremendous chronic burden on individuals, families, and healthcare systems (Thurman, Alverson, Dunn, Guerrero, & Sniezek, 1999; Yu et al., 2003). The far-reaching impact of these seemingly “invisible” deficits is often not recognized. Individuals who have suffered a TBI may also be at increased risk for developing cognitive changes later in life (Mauri et al., 2006; Schwartz, 2009; Van Den Heuvel, Thornton, & Vink, 2007). Military veterans report even higher rates of persistent issues, especially in the context of posttraumatic stress (PTS) (Polusny et al., 2011). Despite their importance, chronic neurocognitive dysfunctions are often poorly addressed. A long-term view on care-oriented research and development is needed (Chen & D’Esposito, 2010). Even as we get deeper into the 21st century, there continue to be many gaps in the rehabilitation of neurocognitive functioning after brain injury. There is a need for increased effort to advance rehabilitation care and delivery. There are two major gaps in care that could benefit from neuroscience research and technology-assisted intervention development. First, there remains a major need for theory-driven approaches to cognitive training, accompanied by the development of innovative tools to support learning of useful skills and their generalization to help achieve real-life goals. Second, major gaps in the delivery and coordination of rehabilitation must be addressed in order to provide care to the many people with brain injury who lack access to services due to barriers imposed by distance, financial constraints, and disability.
This chapter introduces and illustrates some technology-assisted innovations that may help to advance neurocognitive rehabilitation care. Examples of using technology to reach into the community via tele-rehabilitation, as well as examples of reaching students in a manner aligned with their scholastic goals, are discussed.


Author(s):
Maria T. Schultheis, Matthew Doiron

Over the course of its history, the field of neuropsychology has shifted its focus to meet the demands of the medical landscape. Before the advent of neuroimaging, neuropsychologists were relied on to determine brain lesion location and to diagnose brain-behavior pathologies. As time progressed, neuroimaging was able to provide faster and more consistent lesion identification, and neuropsychology began to adapt its skills and services for other related fields, such as education, law, and rehabilitation. As a result, some neuropsychological methods were adapted to assess broader cognitive functions in a variety of populations and the general public; however, these assessments have been heavily rooted in the field’s diagnostically focused past, which creates limitations in the ecological validity of this approach. Ecological validity can be generally defined as a measure’s ability to predict functional performance or mimic activities of everyday living (i.e., performance at work, driving). For example, batteries of neuropsychological tests and questionnaires have been used to infer level of function and general performance at work or school. These batteries were developed due to their statistical associations with different populations, concordance with neurological theories and constructs, and general face validity. However, very few assessments resembled any activity a person would perform in daily life. For many measures, ecological validity was defined by correlating performance with everyday functioning (veridicality; Franzen & Wilhelm, 1996). In contrast, another approach to ecological validity involved designing measures to resemble or mimic an everyday function (verisimilitude; Franzen & Wilhelm, 1996). The difference between the two approaches comes down to the primary goal set when the measure is designed: whether it will prioritize construct validity and subsequently infer a link to everyday function, or vice versa.
Many researchers interested in predicting functional outcome have relied on verisimilitude, as it more closely resembles “real-world” performance; however, this often comes at the cost of interpretability within the context of current neuropsychological frameworks and models.
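In its simplest form, the veridicality approach described above reduces to correlating test performance with a criterion measure of everyday functioning. A minimal sketch, using invented scores and ratings purely for illustration:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between test scores and criterion ratings."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical data: memory-test scores and informant ratings of
# everyday functioning for six patients.
test_scores = [12, 15, 9, 20, 14, 18]
daily_ratings = [3.1, 3.8, 2.5, 4.6, 3.4, 4.2]
print(round(pearson_r(test_scores, daily_ratings), 3))
```

A high correlation would support the test's veridicality for that particular criterion; a verisimilitude approach would instead build the everyday activity into the task itself and forgo this inferential step.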


Author(s):
Shane S. Bush, Philip Schatz

The role of technology in neuropsychological practice has expanded dramatically in recent years, and its presence and evolving nature provide both exciting opportunities and sizeable risks that challenge practitioners ethically. Computerized test administration, scoring, and interpretation are now so common that it is hard to imagine a neuropsychologist’s practice that does not incorporate some combination of these technologies. Some of the most commonly used measures have become so complex, or offer so many variables to consider, that their scoring and interpretation would be extremely difficult, if not prohibitive, without the use of technology. Additionally, assessment of some cognitive constructs, such as sustained attention or response time, typically requires a computer for administration. Without computers for assessing such constructs, the understanding of the test taker’s cognitive abilities would be limited, and the decision to forgo use of such measures would not be consistent with optimal practice. Some referral sources, particularly in forensic contexts, specifically require the use of measures that are computer-administered, scored, and/or interpreted. Finally, computers and other technological devices, such as tablets, are now widely used by practitioners for completing and storing reports and other documentation, and telecommunications like email are commonly used for transmitting reports. Thus, technology now permeates the practice of clinical neuropsychology and will likely continue to do so. Even practitioners who prefer to limit their use of technology must accept that it is here to stay. This is not a bad thing. There are many advantages to the use of digital assessment and data storage. As Wahlstrom (in press) stated: “After decades of incremental technological advancements, neuropsychology is beginning to see a rapid expansion of digital applications available to clinicians.
In the short term, these applications promise to replace paper materials and will make testing more efficient, accurate, and engaging for both the examinee and examiner.”


Author(s):
Thomas D. Parsons, Timothy McMahan

Neuropsychologists are increasingly being asked to determine whether a patient can return to work, classroom, or play (e.g., sports). A difficulty for the neuropsychological assessment of cognitive functioning is that patients’ performance on a cognitive test may have little or no predictive value for how they may perform in a real-world situation (Burgess, Alderman, Evans, Emslie, & Wilson, 1998; Chaytor, Schmitter-Edgecombe, & Burr, 2006). To address this issue, neuropsychologists are increasingly emphasizing the need for tasks that represent real-world functioning and that tap into a number of executive domains (Chaytor & Schmitter-Edgecombe, 2003; Jurado & Rosselli, 2007). Burgess and colleagues (2006) argue that most neuropsychological assessments in use today were developed to assess abstract cognitive “constructs” without regard for their ability to predict “functional” behavior. For example, although the construct-driven Wisconsin Card Sorting Test (WCST) is one of the most widely used measures of executive function, it was not originally developed as a measure of executive functioning. Instead, the WCST was preceded by a number of sorting measures that were developed from observations of the effects of brain damage (e.g., Weigl, 1927). While Milner (1963) found that patients with dorsolateral prefrontal lesions had greater difficulty on the WCST than patients with orbitofrontal or nonfrontal lesions, other studies have shown that patients with frontal lobe pathology do not always differ from control subjects on the WCST (Stuss et al., 1983). Some may argue that, despite some inconsistencies in the literature, data from the construct-driven WCST do appear to provide information relevant to the constructs of set shifting and working memory.
However, it can also be argued that the data do not necessarily offer information that would allow a neuropsychologist to predict what situations in everyday life require the abilities that the WCST measures. A number of investigators have argued that performance on traditional tests has little correspondence to activities of daily living. This can leave the neuropsychologist uncertain of the efficacy of the tests for predicting the way in which patients will manage in their everyday lives (Bottari, Dassa, Rainville, & Dutil, 2009; Manchester, Priestly, & Howard, 2004; Sbordone, 2008).


Author(s):
Thomas D. Parsons, Robert L. Kane

Other chapters in this volume focus on the use of technology to enhance and expand the field of neuropsychology. Some of the enhancements are natural outgrowths of trends present in society at large and involve updating the assessment process to make it more efficient and reliable. Computerized approaches to assessment frequently use off-the-shelf technology, in some cases to administer traditional-style tests and in others to present tasks not readily accomplished with test booklets and paper (see Section II of this book on “Beyond Paper-and-Pencil Assessment”). The computer has also permitted the implementation of new testing paradigms such as scenario-based assessment and the use of virtual reality (see Section III: “Domain and Scenario-based Assessment”). The use of the computer has also made possible efforts to expand access to care through the development of efficient test batteries and telemedicine-based assessment (see Chapter 5 on Teleneuropsychology). The use of computers, the ability to implement life-like scenarios in a controlled environment, and telemedicine will also expand available approaches to cognitive remediation, with cellphones augmenting the ability of individuals to engage in self-monitoring. The integration of neuroimaging into the assessment process was clearly presented in the chapter in this volume by Erin Bigler (see also Section IV of this book on “Integrating Cognitive Assessment with Biological Metrics”). An additional role for neuroimaging is the use of its ever-evolving techniques and methods to model neural networks and to refine our understanding of how the brain works and how best to conceptualize cognitive domains. Neuroimaging-based modeling of neural networks and the role of neuroinformatics are discussed in the remaining sections of this chapter, which consider prospects for a future computational neuropsychology.
Technological advances in neuroimaging of brain structure and function offer great potential for revolutionizing neuropsychology (Bilder, 2011). While neuroimaging has taken advantage of advances in computerization and neuroinformatics, neuropsychological assessments remain outmoded and reflect nosological attempts at classification that predate contemporary neuroimaging (see Chapter 13 in this volume).


Author(s):
Gaën Plancher, Pascale Piolino

Memory is one of the most important cognitive functions in a person’s life. Memory is essential for recalling personal memories and for performing many everyday tasks, such as reading, playing music, returning home, and planning future actions; more generally, memory is crucial for interacting with the world. Determining how humans encode, store, and retrieve memories has a long scientific history, beginning with the classical research by Ebbinghaus in the late 19th century (Ebbinghaus, 1885/1964). Since this seminal work, the large number of papers published in the domain of memory testifies that understanding memory is one of the most important challenges in cognitive neuroscience. With population growth and aging, understanding memory failures both in the healthy elderly and in neurological and psychiatric conditions is a major societal issue. A substantial body of evidence, mainly from double dissociations observed in neuropsychological patients, has led researchers to consider memory not as a unique entity but as comprising several forms with distinct neuroanatomical substrates (Squire, 2004). With reference to long-term memory, episodic memory may be described as the conscious recollection of personal events combined with their phenomenological and spatiotemporal encoding contexts, such as recollecting one’s wedding day with all the contextual details (Tulving, 2002). Episodic memory is typically opposed to semantic memory, which is viewed as a system dedicated to the storage of facts and general decontextualized knowledge (e.g., Paris is the capital of France), as well as the mental lexicon. Episodic memory was initially defined by Tulving as a memory system specialized in storing specific experiences in terms of what happened and where and when it happened (Tulving, 1972). Later, phenomenological processes were associated with the retrieval of memories (Tulving, 2002).
Episodic memory is assumed to depend on the self, and involves mental time travel and a sense of reliving the original encoding context that includes autonoetic awareness (i.e., the awareness that this experience happened to oneself, is not happening now, and is part of one’s personal history).


Author(s):
Joe Edwards, Thomas D. Parsons

Neuropsychological assessment has a long history in the United States military and has played an essential role in ensuring the mental health and operational readiness of service members since World War I (Kennedy, Boake, & Moore, 2010). Over the years, mental health clinicians in the military have developed paper-and-pencil assessment instruments, which have evolved in terms of psychometric rigor and clinical utility, but not in terms of technological sophistication. Since the advent of modern digital computing technology, considerable research has been devoted to the development of computer-automated neuropsychological assessment applications (Kane & Kay, 1992; Reeves, Winter, Bleiberg, & Kane, 2007), a trend that is likely to continue in the future. While many comparatively antiquated paper-and-pencil-based test instruments are still routinely used, it is arguably only a matter of time until they are supplanted by more technologically advanced alternatives. It is important to note, however, that questions have been raised about the ecological validity of many commonly used traditional neuropsychological tests, whether paper-and-pencil-based or computerized (Alderman, Burgess, Knight, & Henman, 2003; Burgess et al., 2006; Chaytor & Schmitter-Edgecombe, 2003; Chaytor, Schmitter-Edgecombe, & Burr, 2006; Parsons, 2016a; Sbordone, 2008). In the context of neuropsychological testing, ecological validity generally refers to the extent to which test performance corresponds to real-world performance in everyday life (Sbordone, 1996). In order to develop neuropsychological test instruments with greater ecological validity, investigators have increasingly turned to virtual reality (VR) technologies as a means to assess real-world performance via true-to-life simulated environments (Campbell et al., 2009; Negut, Matu, Sava, & David, 2016; Parsons, 2015a, 2015b, 2016a). Bilder (2011) described three historical and theoretical formulations of neuropsychology.
First, clinical neuropsychologists focused on lesion localization and relied on interpretation without extensive normative data. Next, clinical neuropsychologists were affected by technological advances in neuroimaging and as a result focused on characterizing cognitive strengths and weaknesses rather than on differential diagnosis.


Author(s):
Robert L. Kane, C. Munro Cullum

The growth of telemedicine has been rapid. Initially, telemedicine was seen as a way to bring services to remote areas that lacked access to aspects of healthcare delivered through traditional means. That view has since broadened: telemedicine is now seen as an effective way to deliver a variety of health services and to bring together patients and providers to increase access to care across locations and communities. Reimbursement has been a challenge for some aspects of telemedicine development. Initially, Medicare limited reimbursement for telehealth to designated underserved areas. This approach to telehealth reimbursement has lagged behind developments in the field and has been challenged by various groups and legislative initiatives. In April 2016, the Centers for Medicare and Medicaid Services (CMS) released its Managed Care Final Rule (Federal Register, 2016) with wording that potentially will permit reimbursement for expanded telemedicine-based services. The revised standards, in attempting to ensure that Medicaid beneficiaries have reasonable access to care, acknowledge a role for technology and telemedicine. The impact the new standards will have on the development of telemedicine throughout the United States will become evident with time. Tele-mental health has grown along with other aspects of remote healthcare delivery. The extant literature supports the use of remotely delivered telehealth for a variety of conditions and services, including remote psychiatric consultation, diagnosis, and various therapies (Myers & Turvey, 2012; Shore, 2013). However, the idea that one can provide an adequate neuropsychological evaluation remotely is newer and less intuitive, and it presents obvious challenges. Neuropsychological examinations frequently require the use of test stimuli that the examinee has to handle and manage, such as blocks, pencils, or other manipulatives.
Some tests, such as the Wisconsin Card Sorting Test (Heaton, 2003), have been adapted for computer but not for Internet-based or remote administration. In some approaches to neuropsychological assessment, the examiner takes careful note of the specific strategies examinees employ when attempting to perform tasks. Hence, performing an examination when the examiner and the patient are in different locations can seem daunting.

