Child Mania Rating Scale: Development, Reliability, and Validity

2007 ◽  
Vol 2007 ◽  
pp. 10-11 ◽  
Author(s):  
P.S. Jensen

2002 ◽  
Vol 10 (11) ◽  
pp. 1173-1179 ◽  
Author(s):  
Faith-Anne Dohm ◽  
Ruth H. Striegel-Moore

The Foot ◽  
2015 ◽  
Vol 25 (1) ◽  
pp. 12-18 ◽  
Author(s):  
Teresa Venditto ◽  
Lucrezia Tognolo ◽  
Rosaria Sabrina Rizzo ◽  
Cristina Iannuccelli ◽  
Luca Di Sante ◽  
...  

Author(s):  
MANI N. PAVULURI ◽  
DAVID B. HENRY ◽  
BHARGAVI DEVINENI ◽  
JULIE A. CARBRAY ◽  
BORIS BIRMAHER

Author(s):  
Ayaka Chikada ◽  
Jun Mitsui ◽  
Takashi Matsukawa ◽  
Hiroyuki Ishiura ◽  
Tatsushi Toda ◽  
...  

2021 ◽  
pp. 026553222199405
Author(s):  
Ute Knoch ◽  
Bart Deygers ◽  
Apichat Khamboonruang

Rating scale development in the field of language assessment is often considered in dichotomous terms: it is assumed to be guided either by expert intuition or by performance data. Even though quite a few authors have argued that rating scale development is rarely so easily classifiable, this dyadic view has dominated language testing research for over a decade. In this paper we refine the dominant model of rating scale development by drawing on a corpus of 36 studies identified in a systematic review. We present a model showing the different sources of the scale construct in the corpus. In the discussion, we argue that rating scale designers, just like test developers more broadly, need to start by determining the purpose of the test, the relevant policies that guide test development and score use, and the intended score use when considering the design choices available to them. These choices include the impact of such sources on the generalizability of the scores, the precision of the post-test predictions that can be made about test takers’ future performances, and scoring reliability. The most important contribution of the model is that it gives rating scale developers a framework to consider before starting scale development and validation activities.


2021 ◽  
pp. 1-8
Author(s):  
Angelo Picardi ◽  
Sara Panunzi ◽  
Sofia Misuraca ◽  
Chiara Di Maggio ◽  
Andrea Maugeri ◽  
...  

<b><i>Introduction:</i></b> The last decade has witnessed a resurgence of interest in the clinician’s subjectivity and its role in the diagnostic assessment. Integrating the criteriological, third-person approach to patient evaluation and psychiatric diagnosis with other approaches that take into account the patient’s subjective and intersubjective experience may be particularly important in the assessment of very young patients. The ACSE (Assessment of Clinician’s Subjective Experience) instrument may provide a practical way to probe the intersubjective field of the clinical examination; however, its reliability and validity among child and adolescent psychiatrists seeing very young patients have yet to be determined. <b><i>Methods:</i></b> Thirty-three clinicians and 278 first-contact patients aged 12–17 years participated in this study. The clinicians completed the ACSE instrument and the Brief Psychiatric Rating Scale after seeing the patient, and the Profile of Mood States (POMS) just before seeing the patient and immediately after. The ACSE was completed again for 45 patients over a short (1–4 days) retest interval. <b><i>Results:</i></b> All ACSE scales showed high internal consistency and moderate to high temporal stability. They also displayed meaningful correlations with the changes in conceptually related POMS scales during the clinical examination. <b><i>Discussion:</i></b> The findings corroborate and extend previous work on adult patients and suggest that the ACSE provides a valid and reliable measure of the clinician’s subjective experience in adolescent psychiatric practice as well. The instrument may prove useful in identifying patients in the early stages of psychosis, in whom subtle alterations of being with others may be the only detectable sign. Future studies are needed to determine the feasibility and usefulness of integrating the ACSE within current approaches to the evaluation of at-risk mental states.


1999 ◽  
Vol 11 (1) ◽  
pp. 34-37 ◽  
Author(s):  
I.P.A.M. Huijbrechts ◽  
P.M.J. Haffmans ◽  
K. Jonker ◽  
A. van Dijke ◽  
E. Hoencamp

Summary: Although the Hamilton Rating Scale for Depression (HRSD) is the most frequently used rating scale for quantifying depressive states, it has been criticized for its reliability and its usability in clinical practice. This criticism applies less to the Montgomery-Asberg Depression Rating Scale (MADRS). The goal of the present study was to investigate the reliability and validity of, and the clinical relationship between, the HRSD and the MADRS. For 60 outpatients with diagnosed depression (DSM-IV 296.2x, 296.3x, 300.40, and 311.00), the HRSD and MADRS were scored at baseline and 6 weeks later by an independent rater using a structured interview. The Clinical Global Impression (CGI) was also assessed by a psychiatrist. Satisfactory agreement was found between the total scores (r = .75 and r = .92 at baseline and 6 weeks later, respectively; both p &lt; .001). Furthermore, agreement was found between the items of both scales, and these agreed with the clinical impression. The reliability of the MADRS was more stable than that of the HRSD (α = .6367 and α = .8900 vs. α = .2193 and α = .8362 at baseline and at endpoint, respectively). Considering the ease of scoring both scales in one interview and the wide international use of the HRSD, scoring both the HRSD and the MADRS to measure the severity of a depression seems an acceptable compromise.
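The α values reported above are Cronbach's internal-consistency coefficients. As an illustration only (not part of the study, and with simulated rather than patient data), a minimal sketch of how Cronbach's α is computed from a subjects-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) matrix of item scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated example: 60 subjects, 10 items sharing a common factor,
# so the items correlate and alpha should be high.
rng = np.random.default_rng(42)
signal = rng.normal(size=(60, 1))
scores = signal + 0.5 * rng.normal(size=(60, 10))
alpha = cronbach_alpha(scores)
```

Because the total-score variance grows with the shared covariance among items, α approaches 1 when items measure the same construct and drops toward 0 when they vary independently.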

