See article by Brown et al
The article by Brown et al1 addresses the crucial issue of detection of suicidal behavior and nonsuicidal self-injury in clinical practice. The study has numerous strengths in research design and statistical analysis. It compared the results of a structured interview with clinical diagnoses and found that although agreement was high (κ ~ 0.76), the structured interview detected or reclassified 18% of the cases (n = 23) as having a history of a suicide attempt. Of these, slightly more than half were reclassifications of what the clinician had recorded as nonsuicidal self-injury (n = 4) or some other behavior (n = 8), and 11 were cases for which there was no documentation of any concern about suicidal behavior in the clinical record. Reassuringly, the cases detected only via the structured interview tended to be milder; the clinicians did an excellent job of detecting and documenting the recent and severe events. Brown and coauthors1 do a good job of conveying the clinical importance of even these less severe events, which they note have important prognostic value for future suicidal behavior and thus warrant careful clinical documentation and management.
It would be a mistake to conclude from these results that the status quo of clinical assessment is sufficient. The kappa (κ) value reported here was exceptionally high. Agreement between clinical diagnoses and diagnoses based on semistructured clinical interviews tends to be poor.2,3 A recent comparison of billing diagnoses versus diagnoses based on semistructured interview found a median κ value of 0.37,4 and even that was better than the median κ values of 0.29 for externalizing problems and 0.28 for internalizing problems based on a meta-analysis of 38 articles and almost 16,000 probands.5 In the retest reliability analyses for the DSM-5 Field Trials, nonsuicidal self-injury obtained a κ value of 0.03 at the 1 site that was able to evaluate it.6 Against this literature, the result of Brown et al1 is an outlier, most likely reflecting a combination of factors, including reliance on clinicians at academic institutions with a culture of excellence in research and in the evaluation of suicidal ideation and behavior. In the larger context, these findings represent a "best-case scenario" owing to the strengths of the study design.
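For readers less familiar with the statistic, it may help to recall that Cohen's κ indexes agreement after correcting for chance: κ = (observed agreement − chance-expected agreement) / (1 − chance-expected agreement). As a purely hypothetical illustration (the figures are not drawn from any of the studies cited here), if a clinician and a structured interview agreed on 90% of cases while the base rates implied 80% agreement by chance alone, then κ = (0.90 − 0.80) / (1 − 0.80) = 0.50, a value that would already exceed most of the estimates summarized above yet still fall well short of the κ of roughly 0.76 reported by Brown et al.1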
These results underscore the gap between more typical clinical practice and the results that would be possible using a more structured approach. It is sobering that agreement is so poor across both common and contentious diagnoses, and the evidence underscores the value of more structured approaches for clinical issues such as nonsuicidal self-injury behavior. The DSM-5 Field Trials also amplify the conclusion from Brown et al1 that there is great need for tools that support consistency in nomenclature when describing the range of self-injurious behaviors.7,8 In those trials, clinicians were not given structured interview tools, but they knew they were participating in a field trial and knew in advance which diagnoses were under consideration; that awareness by itself was not enough to produce agreement.
The potential value of checklists and structured assessments is not limited to psychiatry. Structured methods have demonstrated value in surgery and flight control and arguably could improve performance in most areas requiring processing of complex information.9 Mental health, however, appears more reluctant than other medical disciplines to embrace structured methods. Practitioners feel that structured approaches infringe on their professional autonomy and fear that such approaches would compromise rapport with patients.10 Data directly contradict these concerns: patients report high levels of rapport after structured evaluations and prefer them to unstructured interviews, perceiving structured evaluations as more thorough and as providing a better understanding of their situation and needs.11
Brown et al1 found that structured approaches did not require inordinate amounts of time, and they significantly increased sensitivity to clinically meaningful levels of self-harm behaviors. There are a variety of tools available that have demonstrated good reliability and validity for the assessment of suicidal ideation and behavior, and several are relatively brief, particularly when administration uses screening questions and skip-outs.12-14 Adoption of these methods in clinical settings, especially those where the risk of suicidal behavior is elevated, would improve the accuracy of clinical assessment and the ensuing decision making, even more than the results from Brown and colleagues1 might appear to suggest.
Author affiliation: Department of Psychology, University of North Carolina at Chapel Hill, Chapel Hill.
Potential conflicts of interest: Dr Youngstrom has served as a consultant for Lundbeck and Otsuka.
Funding/support: None reported.
REFERENCES
1. Brown GK, Currier GW, Jager-Hyman S, et al. Detection and classification of suicidal behavior and nonsuicidal self-injury behavior in emergency departments. J Clin Psychiatry. 2015;76(10):1397-1403.
2. Garb HN. Studying the Clinician: Judgment Research and Psychological Assessment. Washington, DC: American Psychological Association; 1998. doi:10.1037/10299-000
3. Ægisdóttir S, White MJ, Spengler PM, et al. The Meta-Analysis of Clinical Judgment Project: fifty-six years of accumulated research on clinical versus statistical prediction. Couns Psychol. 2006;34(3):341-382. doi:10.1177/0011000005285875
4. Jensen-Doss A, Youngstrom EA, Youngstrom JK, et al. Predictors and moderators of agreement between clinical and research diagnoses for children and adolescents. J Consult Clin Psychol. 2014;82(6):1151-1162. doi:10.1037/a0036657
5. Rettew DC, Lynch AD, Achenbach TM, et al. Meta-analyses of agreement between diagnoses made from clinical evaluations and standardized diagnostic interviews. Int J Methods Psychiatr Res. 2009;18(3):169-184. doi:10.1002/mpr.289
6. Regier DA, Narrow WE, Clarke DE, et al. DSM-5 Field Trials in the United States and Canada, part 2: test-retest reliability of selected categorical diagnoses. Am J Psychiatry. 2013;170(1):59-70. doi:10.1176/appi.ajp.2012.12070999
7. Posner K, Oquendo MA, Gould M, et al. Columbia Classification Algorithm of Suicide Assessment (C-CASA): classification of suicidal events in the FDA’s pediatric suicidal risk analysis of antidepressants. Am J Psychiatry. 2007;164(7):1035-1043. doi:10.1176/ajp.2007.164.7.1035
8. Meyer RE, Salzman C, Youngstrom EA, et al. Suicidality and risk of suicide—definition, drug safety concerns, and a necessary target for drug development: a consensus statement. J Clin Psychiatry. 2010;71(8):e1-e21. doi:10.4088/JCP.10cs06070blu
9. Gawande A. The Checklist Manifesto. New York, NY: Penguin; 2010.
10. Bruchmüller K, Margraf J, Suppiger A, et al. Popular or unpopular? therapists’ use of structured interviews and their estimation of patient acceptance. Behav Ther. 2011;42(4):634-643. doi:10.1016/j.beth.2011.02.003
11. Suppiger A, In-Albon T, Hendriksen S, et al. Acceptance of structured diagnostic interviews for mental disorders in clinical practice and research settings. Behav Ther. 2009;40(3):272-279. doi:10.1016/j.beth.2008.07.002
12. Posner K, Brown GK, Stanley B, et al. The Columbia-Suicide Severity Rating Scale: initial validity and internal consistency findings from three multisite studies with adolescents and adults. Am J Psychiatry. 2011;168(12):1266-1277. doi:10.1176/appi.ajp.2011.10111704
13. Coric V, Stock EG, Pultz J, et al. Sheehan Suicidality Tracking Scale (Sheehan-STS): preliminary results from a multicenter clinical trial in generalized anxiety disorder. Psychiatry (Edgmont). 2009;6(1):26-31.
14. Lindenmayer JP, Czobor P, Alphs L, et al; InterSePT Study Group. The InterSePT scale for suicidal thinking reliability and validity. Schizophr Res. 2003;63(1-2):161-170. doi:10.1016/S0920-9964(02)00335-3
Submitted: October 7, 2014; accepted October 8, 2014.
Corresponding author: Eric A. Youngstrom, PhD, Department of Psychology, University of North Carolina at Chapel Hill, CB #3270, Davie Hall, Chapel Hill, NC 27599-3270 ([email protected]).
J Clin Psychiatry 2015;76(10):e1331-e1332
dx.doi.org/10.4088/JCP.14com09573
© Copyright 2015 Physicians Postgraduate Press, Inc.