What are service evaluations actually for?
This month Clinical Psychology Forum, the UK profession's in-house journal, published 'a service evaluation of a dialectical behavioural therapy-informed skills group for community mental health service users with complex emotional needs'. The authors' rationale, in the context of staff shortages, burnout, recruitment difficulties, long waiting lists and limited funding, was that a DBT-informed skills group (DBTi-S) can 'treat multiple clients simultaneously' and has 'minimal, low-cost training requirements for staff'. Whilst their results showed a significant reduction in dysfunctional coping and blaming others (as measured by the Dialectical Behaviour Therapy Ways of Coping Checklist, DBT-WCCL), there were no significant results on the Warwick-Edinburgh Mental Wellbeing Scale (WEMWBS), the Work and Social Adjustment Scale (WSAS) or the Difficulties in Emotion Regulation Scale (DERS).
As is so often the case with research into the effectiveness of psychological therapies, this service evaluation is replete with methodological problems, not least its reliance on self-report as a way of assessing outcomes, including: the unreliability of introspection and memory; participant reactivity (i.e. responses or behaviour changing because participants know they are being observed); response bias (e.g. answering in socially desirable ways); and demand characteristics (e.g. picking up cues about what the study is looking for) (see Rust & Golombok, 1999).
More importantly perhaps, the paper also seemed to gloss over the fact that there was a 62.5% drop-out rate: 24 service users across three CMHTs were accepted onto DBTi-S groups; however, only NINE of them completed the six-month programme. Interestingly, their qualitative data included the feedback that 'three patients expressed a preference for smaller group numbers as it provided more opportunities to learn and practice the skills'. This seemed to be reported as something positive, and yet it was said by only three of the nine people who completed the programme (so 12.5% of the original cohort). With the groups running across three CMHTs, this must have meant that, on average, each group had just three people in attendance by the end.
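For transparency, here is the arithmetic behind those figures, as a minimal sketch (Python, purely illustrative; the even split of completers across the three CMHTs is an assumption, since the paper does not report per-group numbers):

```python
# Back-of-the-envelope check of the figures reported in the paper.
accepted = 24          # service users accepted across three CMHTs
completed = 9          # completed the six-month programme
groups = 3             # one group per CMHT (even split assumed)
preferred_smaller = 3  # completers who asked for smaller groups

drop_out_rate = (accepted - completed) / accepted
print(f"Drop-out rate: {drop_out_rate:.1%}")                 # 62.5%

share_of_cohort = preferred_smaller / accepted
print(f"Preferred smaller groups: {share_of_cohort:.1%}")    # 12.5% of original cohort

avg_attendance = completed / groups
print(f"Average completers per group: {avg_attendance:.0f}") # 3
```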
It’s maybe also worth noting that the groups were delivered by at least two trained facilitators: Assistant Psychologists, CMHT Keyworkers and Associate Psychological Practitioners who had done the DBT Essentials training, that is, a two-day introductory workshop.
They conclude that ‘Overall, the continuation of this intervention is likely to prove beneficial to its participants and the Trust.’
On the basis of the limited outcomes and the drop-out rate, we ought to be surprised by this conclusion. But it isn’t surprising. This is happening all over the UK, and it is precisely one of the reasons we wrote our books, Team Of One and Outsight.
Is this really the best that clinical psychology has to offer?