The Value of Bayesian Methods for Accurate and Efficient Neuropsychological Assessment

Author(s):  
Hanne Huygelier ◽  
Céline R. Gillebert ◽  
Pieter Moors

Abstract Objective: Clinical neuropsychology has been slow to adopt innovations in psychometrics, statistics, and technology. Researchers have indicated that the static nature of clinical neuropsychology endangers its evidence-based character. In addition to a technological crisis, there may be a statistical crisis affecting clinical neuropsychology. That is, the frequentist null hypothesis significance testing framework remains the dominant approach in clinical practice, despite a recent surge in criticism of this framework. While the Bayesian framework has been put forward as a viable alternative in psychology in general, the possibilities it offers to clinical neuropsychology have not received much attention. Method: In the current position paper, we discuss and reflect on the value of Bayesian methods for the advancement of evidence-based clinical neuropsychology. Results: We aim to familiarize clinical neuropsychologists and neuropsychological researchers with Bayesian methods of inference and provide a clear rationale for why these methods are valuable for clinical neuropsychology. Conclusion: We argue that Bayesian methods allow for a more intuitive answer to our diagnostic questions and form a more solid foundation for sequential and adaptive diagnostic testing, for representing uncertainty about patients’ observed test scores, and for cognitive modeling of test results.
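As a minimal illustration of the kind of diagnostic question at issue here, the sketch below applies Bayes' rule to a single below-cutoff test score and then reuses the posterior as the prior for a second test, in the spirit of sequential testing; the base rate, sensitivity, and specificity are hypothetical and not taken from the article.

```python
# Minimal sketch of Bayesian updating for a diagnostic question:
# "How probable is cognitive impairment given a below-cutoff test score?"
# All numbers below are hypothetical.

def posterior_probability(prior, sensitivity, specificity):
    """Posterior P(impaired | positive test result) via Bayes' rule."""
    p_pos_given_impaired = sensitivity
    p_pos_given_healthy = 1.0 - specificity
    numerator = prior * p_pos_given_impaired
    denominator = numerator + (1.0 - prior) * p_pos_given_healthy
    return numerator / denominator

prior = 0.20        # hypothetical base rate of impairment in the referral population
sensitivity = 0.85  # hypothetical probability of a below-cutoff score if impaired
specificity = 0.90  # hypothetical probability of an above-cutoff score if healthy

after_test_1 = posterior_probability(prior, sensitivity, specificity)
print(f"Posterior after one below-cutoff score: {after_test_1:.2f}")

# Sequential testing: the posterior after the first test becomes the prior
# for the next (hypothetical second test with its own operating characteristics).
after_test_2 = posterior_probability(after_test_1, 0.80, 0.85)
print(f"Posterior after a second positive test: {after_test_2:.2f}")
```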

2001 ◽  
Vol 17 (1) ◽  
pp. 114-122 ◽  
Author(s):  
Steven H. Sheingold

Decision making in health care has become increasingly reliant on information technology, evidence-based processes, and performance measurement. It is therefore a time at which it is of critical importance to make data and analyses more relevant to decision makers. Those who support Bayesian approaches contend that their analyses provide more relevant information for decision making than do classical or “frequentist” methods, and that a paradigm shift to the former is long overdue. While formal Bayesian analyses may eventually play an important role in decision making, there are several obstacles to overcome if these methods are to gain acceptance in an environment dominated by frequentist approaches. Supporters of Bayesian statistics must find more accommodating approaches to making their case, especially in finding ways to make these methods more transparent and accessible. Moreover, they must better understand the decision-making environment they hope to influence. This paper discusses these issues and provides some suggestions for overcoming some of these barriers to greater acceptance.


2008 ◽  
Vol 54 (11) ◽  
pp. 1872-1882 ◽  
Author(s):  
Eva Nagy ◽  
Joseph Watine ◽  
Peter S Bunting ◽  
Rita Onody ◽  
Wytze P Oosterhuis ◽  
...  

Abstract Background: Although the methodological quality of therapeutic guidelines (GLs) has been criticized, little is known regarding the quality of GLs that make diagnostic recommendations. Therefore, we assessed the methodological quality of GLs providing diagnostic recommendations for managing diabetes mellitus (DM) and explored several reasons for differences in quality across these GLs. Methods: After systematic searches of published and electronic resources dated between 1999 and 2007, 26 DM GLs, published in English, were selected and scored for methodological quality using the AGREE Instrument. Subgroup analyses were performed based on the source, scope, length, origin, and date and type of publication of GLs. Using a checklist, we collected laboratory-specific items within GLs thought to be important for interpretation of test results. Results: The 26 diagnostic GLs had significant shortcomings in methodological quality according to the AGREE criteria. GLs from agencies that had clear procedures for GL development, were longer than 50 pages, or were published in electronic databases were of higher quality. Diagnostic GLs contained more preanalytical or analytical information than combined (i.e., diagnostic and therapeutic) recommendations, but the overall quality was not significantly different. The quality of GLs did not show much improvement over the time period investigated. Conclusions: The methodological shortcomings of diagnostic GLs in DM raise questions regarding the validity of recommendations in these documents that may affect their implementation in practice. Our results suggest the need for standardization of GL terminology and for higher-quality, systematically developed recommendations based on explicit guideline development and reporting standards in laboratory medicine.
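For readers unfamiliar with AGREE scoring, the sketch below shows how a standardized domain score is commonly computed from appraiser ratings; the 4-point item scale and the ratings themselves are illustrative assumptions, not data from this study.

```python
# Minimal sketch of a standardized AGREE domain score, assuming a 4-point item
# scale (1 = strongly disagree, 4 = strongly agree); the ratings are illustrative.

def standardized_domain_score(ratings_per_appraiser, min_item=1, max_item=4):
    """(obtained - minimum possible) / (maximum possible - minimum possible)."""
    n_appraisers = len(ratings_per_appraiser)
    n_items = len(ratings_per_appraiser[0])
    obtained = sum(sum(ratings) for ratings in ratings_per_appraiser)
    min_possible = min_item * n_items * n_appraisers
    max_possible = max_item * n_items * n_appraisers
    return (obtained - min_possible) / (max_possible - min_possible)

# Three hypothetical appraisers rating a four-item domain.
ratings = [[3, 4, 2, 3], [4, 4, 3, 3], [2, 3, 3, 4]]
print(f"Standardized domain score: {standardized_domain_score(ratings):.2f}")
```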


Author(s):  
Pontus Larsson ◽  
Justyna Maculewicz ◽  
Johan Fagerlönn ◽  
Max Lachmann

The current position paper discusses vital challenges related to user experience design in unsupervised, highly automated cars. These challenges are: (1) how to avoid motion sickness, (2) how to ensure users’ trust in the automation, (3) how to ensure usability and support the formation of accurate mental models of the automation system, and (4) how to provide a pleasant and enjoyable experience. We argue that auditory displays have the potential to help solve these issues. While auditory displays in modern vehicles typically make use of discrete and salient cues, we argue that less intrusive continuous sonic interaction could be more beneficial for the user experience.


2019 ◽  
Vol 6 (6) ◽  
pp. 384-393 ◽  
Author(s):  
Renata Cífková ◽  
Mark R Johnson ◽  
Thomas Kahan ◽  
Jana Brguljan ◽  
Bryan Williams ◽  
...  

Abstract Hypertensive disorders are the most common medical complications in the peripartum period and are associated with a substantial increase in morbidity and mortality. Hypertension in the peripartum period may be due to the continuation of pre-existing or gestational hypertension or to the de novo development of pre-eclampsia, or it may be induced by drugs used for analgesia or for suppression of postpartum haemorrhage. Women with severe hypertension and hypertensive emergencies are at high risk of life-threatening complications; therefore, despite the lack of evidence-based data, antihypertensive treatment based on expert opinion is recommended. Intravenous labetalol and oral methyldopa are the two most frequently used drugs. Short-acting oral nifedipine is suggested only if other drugs or intravenous access are not available. Induction of labour is associated with improved maternal outcome and should be advised for women with gestational hypertension or mild pre-eclampsia at 37 weeks’ gestation. This position paper provides the first interdisciplinary approach to the management of hypertension in the peripartum period, based on the best available evidence and expert consensus.


2006 ◽  
Vol 89 (4) ◽  
pp. 913-928 ◽  
Author(s):  
G David Grothaus ◽  
Murali Bandla ◽  
Thomas Currier ◽  
Randal Giroux ◽  
G Ronald Jenkins ◽  
...  

Abstract Immunoassays for biotechnology-engineered proteins are used by AgBiotech companies at numerous points in product development and by feed and food suppliers for compliance and contractual purposes. Although AgBiotech companies use the technology during product development and seed production, other stakeholders from the food and feed supply chains, such as commodity, food, and feed companies, as well as third-party diagnostic testing companies, also rely on immunoassays for a number of purposes. The primary use of immunoassays is to verify the presence or absence of genetically modified (GM) material in a product or to quantify the amount of GM material present in a product. This article describes the fundamental elements of GM analysis using immunoassays and especially its application to the testing of grains. The 2 most commonly used formats are lateral flow devices (LFD) and plate-based enzyme-linked immunosorbent assays (ELISA). The main applications of both formats are discussed in general, and their benefits and drawbacks are discussed in detail. The document highlights the many areas to which attention must be paid in order to produce reliable test results. These include sample preparation, method validation, choice of appropriate reference materials, and biological and instrumental sources of error. The article also discusses issues related to the analysis of different matrixes and the effects they may have on the accuracy of the immunoassays.
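As an illustration of the quantitative plate-based ELISA workflow mentioned above, the sketch below fits a four-parameter logistic (4PL) standard curve and interpolates an unknown sample; the standard concentrations, optical densities, and the 4PL form are illustrative assumptions rather than the article's validated protocol.

```python
# Minimal sketch of ELISA quantitation against a standard curve, assuming a
# four-parameter logistic (4PL) response; all values below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """4PL model: optical density as a function of analyte concentration."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical calibration standards (ng/mL) and their optical densities.
std_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
std_od = np.array([0.08, 0.15, 0.40, 0.95, 1.60, 1.95])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 2.0, 2.1], maxfev=10000)

def invert_4pl(od, a, b, c, d):
    """Interpolate an unknown sample's concentration from its optical density."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

sample_od = 0.70  # hypothetical optical density of an unknown grain extract
print(f"Estimated GM protein concentration: {invert_4pl(sample_od, *params):.2f} ng/mL")
```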


2017 ◽  
Vol 28 (10) ◽  
pp. 950-960 ◽  
Author(s):  
Linda W. Norrix ◽  
David Velenovsky

Background: The auditory brainstem response (ABR) is used to estimate behavioral hearing thresholds in infants and difficult-to-test populations. Differences between toneburst ABR and behavioral thresholds exist, making the correspondence between the two measures less than perfect. Some authors have suggested that corrections be applied to ABR thresholds to account for these differences. However, because there is no agreed-upon universal standard, confusion regarding the use of corrections exists. Purpose: The primary purpose of this article is to review the reasoning behind and use of corrections when the toneburst ABR is employed to estimate behavioral hearing thresholds. We also discuss other considerations that all audiologists should be aware of when obtaining and reporting ABR test results. Results: A review of the purpose and use of corrections reveals no consensus as to whether they should be applied or which should be used. Additionally, when ABR results are adjusted, there is no agreement as to whether additional corrections for hearing loss or the age of the client are necessary. This lack of consensus can be confusing for all individuals working with hearing-impaired children and their families. Conclusions: Toneburst ABR thresholds do not perfectly align with behavioral hearing thresholds. Universal protocols for the use of corrections are needed. Additionally, evidence-based procedures must be employed to obtain valid ABRs that will accurately estimate hearing thresholds.
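To make the idea of a correction concrete, the sketch below adds hypothetical frequency-specific correction values to toneburst ABR thresholds to estimate behavioral thresholds; as the article stresses, no universal correction set exists, so the numbers and the sign convention here are placeholders only.

```python
# Minimal sketch of applying frequency-specific corrections to toneburst ABR
# thresholds; the thresholds and corrections are hypothetical placeholders,
# not a recommended protocol (the article notes no universal standard exists).

ABR_THRESHOLDS_DB_NHL = {500: 40, 1000: 35, 2000: 30, 4000: 45}  # hypothetical infant results
CORRECTIONS_DB = {500: -15, 1000: -10, 2000: -5, 4000: -5}       # hypothetical corrections

def estimate_behavioral_thresholds(abr_thresholds, corrections):
    """Estimated behavioral threshold = ABR threshold + frequency-specific correction."""
    return {freq: level + corrections[freq] for freq, level in abr_thresholds.items()}

for freq, est in estimate_behavioral_thresholds(ABR_THRESHOLDS_DB_NHL, CORRECTIONS_DB).items():
    print(f"{freq} Hz: estimated behavioral threshold {est} dB")
```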


Diagnosis ◽  
2014 ◽  
Vol 1 (1) ◽  
pp. 39-42 ◽  
Author(s):  
Michael A. Kohn

Abstract The real meaning of the word “diagnosis” is naming the disease that is causing a patient’s illness. The cognitive process of assigning this name is a mysterious combination of pattern recognition and the hypothetico-deductive approach that is only remotely related to the mathematical process of using test results to update the probability of a disease. What I refer to as “evidence-based diagnosis” is really evidence-based use of medical tests to guide treatment decisions. Understanding how to use test results to update the probability of disease can help us interpret test results more rationally. Also, evidence-based diagnosis reminds us to consider the costs and risks of testing and the dangers of over-diagnosis and over-treatment, in addition to the costs and risks of missing serious disease.
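As a minimal illustration of using a test result to update the probability of disease, the sketch below applies the standard likelihood-ratio form of Bayes' theorem (pre-test odds × likelihood ratio = post-test odds); the pre-test probability and likelihood ratios are hypothetical.

```python
# Minimal sketch of updating disease probability with a test result using
# likelihood ratios; all numbers below are illustrative.

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert probability to odds, apply the likelihood ratio, convert back."""
    pre_test_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1.0 + post_test_odds)

pre_test_prob = 0.10  # hypothetical clinical suspicion before testing
lr_positive = 8.0     # hypothetical positive likelihood ratio of the test
lr_negative = 0.2     # hypothetical negative likelihood ratio of the test

print(f"Post-test probability after a positive result: {post_test_probability(pre_test_prob, lr_positive):.2f}")
print(f"Post-test probability after a negative result: {post_test_probability(pre_test_prob, lr_negative):.2f}")
```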

