

Volume 24, Issue 5, June 2013

 

In an interview with Therapy Today, Michael Barkham outlines the PRaCTICED study, which BACP has commissioned from the University of Sheffield to evaluate the effectiveness of Counselling for Depression (CfD) in comparison with CBT in primary care settings. The randomised controlled trial will recruit 550 participants with moderate to severe depression from routine referrals to the Sheffield IAPT service. A parallel study will use the same sample to research what the practitioners bring to the therapeutic encounter that results in better outcomes, regardless of the modality they use.

Counselling for Depression vs CBT
by Michael Barkham

    Professor Michael Barkham and his research team at the Centre for Psychological Services Research, University of Sheffield, have been awarded funding (subject to contract) by the BACP Research Foundation’s Scientific Committee to conduct a randomised controlled trial (RCT) to test the effectiveness of Counselling for Depression (CfD) in primary care (see BACP News, Therapy Today, May 2013). Here Michael Barkham explains the focus, scope and methodology of the trial and how its findings may benefit counselling and its clients.

    First, what is your interest, personally and professionally, in conducting this trial?
    My interest in this trial is at many levels. At a personal level, I have always valued the discipline and profession of counselling. I carried out my doctoral research on the concept of accurate empathy and my interest in the core conditions has never dwindled. As a researcher, I have had an abiding commitment to ensuring a level playing field between contrasting theoretical interventions – that is, securing a situation whereby we can generate a fair test between such interventions, be they, for example, counselling, cognitive behavioural therapy or psychodynamic therapy.

    In addition, I have an abiding commitment to securing a broad and robust evidence base – hence I have a particular commitment to combining the methodological strengths of RCTs with those of practice-based studies. When I considered tendering for this research work, I saw the potential for bringing together all these interests.

    What is the trial’s significance for the world of counselling, and also for you?
    For me, professionally, I see this as a methodological challenge to bring the worlds of the RCT and practice-based studies together so that practitioners in everyday services see that they can contribute to the evidence base in a meaningful way that has direct implications for their client work.

    For the world of counselling, I think this trial is hugely important. We – whether we are researchers, practitioners, managers, commissioners or, most importantly, clients – need robust evidence for the effectiveness (or otherwise) of bona fide interventions. We live in a climate where there is increasing scrutiny of our interventions and NICE has a huge influence on decisions about which psychological interventions are recommended and which are not. Researchers need to deliver evidence that meets the stringent criteria employed by NICE. The evidence also has to fit the needs of the stepped care model that is driving much of service development.

    So, I think little would be achieved by, for example, trying to show that CfD was superior to no intervention or self-help. Equally, little would be added by restricting the client group to those with mild or moderately severe depression. The challenge is to test the effectiveness of CfD with clients presenting with moderate and severe depression and to test whether it is non-inferior to cognitive behavioural therapy (CBT), which can be viewed as the benchmark.

    The trial will provide answers to these questions. In addition, analyses will help us understand which factors may lead some people to achieve better outcomes. This information will yield an evidence-based procedure for deciding which intervention might be better suited to a particular person.

    Can you outline briefly your trial’s methodology? What are you attempting to establish and how will you do this?
    There are many components to the trial, so let me introduce the title and then unpack it. The acronym for the trial is PRaCTICED, which stands for Pragmatic Randomised Controlled Trial assessing the non-Inferiority of Counselling and its Effectiveness for Depression.

    The primary aim of the trial is to obtain evidence on the effectiveness of CfD. It is called a pragmatic trial as it is being carried out in the routine setting of the Sheffield IAPT service, drawing on people who meet a specific threshold for moderate or severe depression and are stepped up to counselling via its psychological wellbeing practitioners (PWPs). The comparator treatment will be CBT, also delivered in the Sheffield IAPT service. Clients selected for the trial will be randomly allocated to receive either CfD or CBT. The trial is termed a non-inferiority trial as the hypothesis is that the outcomes of clients receiving CfD will not be significantly inferior to the outcomes of clients receiving CBT.
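
    For illustration only, the short sketch below shows one common way of generating a balanced 1:1 allocation sequence (permuted blocks) in Python; it is a generic example, not the randomisation procedure specified in the trial protocol.

```python
import random

def block_randomise(n_blocks, block_size=4, arms=("CfD", "CBT"), seed=None):
    """Permuted-block allocation: each block contains an equal number of each
    arm, so the two groups stay balanced as clients are recruited over time.
    This is a generic illustration, not the PRaCTICED randomisation procedure."""
    rng = random.Random(seed)
    allocation = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))  # e.g. two CfD and two CBT slots
        rng.shuffle(block)                              # random order within the block
        allocation.extend(block)
    return allocation

# Example: allocation sequence for the first 12 recruited clients.
print(block_randomise(n_blocks=3, seed=1))
```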

    The trial requires a total of 550 people – 275 for each intervention. The number is high because the trial needs to be adequately powered to detect what may be a small difference between these interventions. Achieving this number is possible because the Sheffield IAPT service sees a huge number of clients each year.
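
    For readers who want a feel for where a figure of this order comes from, the sketch below works through a generic sample-size calculation for a non-inferiority comparison of two means; the margin, standard deviation, power and drop-out figures are illustrative assumptions and are not taken from the trial protocol.

```python
from scipy.stats import norm

def noninferiority_n_per_arm(margin, sd, alpha=0.025, power=0.90,
                             true_diff=0.0, dropout=0.20):
    """Approximate per-arm sample size for a non-inferiority comparison of two
    means (e.g. change in a depression score), assuming equal variances.
    All default values here are illustrative, not the trial's actual parameters."""
    z_alpha = norm.ppf(1 - alpha)   # one-sided test at level alpha
    z_beta = norm.ppf(power)        # power to declare non-inferiority
    n = 2 * ((z_alpha + z_beta) * sd / (margin - true_diff)) ** 2
    return int(round(n / (1 - dropout)))  # inflate for expected drop-out

# Purely illustrative: a 2-point margin on the outcome scale, a standard
# deviation of 6, 90% power and 20% drop-out give roughly 240 per arm;
# different (realistic) assumptions push the figure towards the trial's 275.
print(noninferiority_n_per_arm(margin=2.0, sd=6.0))
```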

    But it also provides us with another exciting component to the study. In most trials many people decline or are not selected for the trial, and we have no way of knowing what their outcomes are from the treatment they subsequently receive. This has fuelled the view that study samples are not representative of people seen in routine services. But, because PRaCTICED is based in a routine service, anyone who is excluded or declines to join the trial will receive treatment in the usual IAPT service anyway. So there will be data on their outcomes too, and we will be able to compare our trial findings with the outcomes of those who declined to join the trial, those who were excluded, and the IAPT service as a whole.

    Clients will be assessed to ensure that they meet the required threshold of depression. I am aware that there is a debate about the value of diagnosis but the bottom line is that, unless we have this component and can meet specific criteria regarding diagnosis, then the trial – whatever the outcomes – will not be considered by NICE.

    The interventions will be delivered by IAPT practitioners who have satisfied the IAPT criteria for either high-intensity CBT or CfD training. Training for CfD practitioners and supervisors is taking place and the trial will start once these practitioners – or a sufficient number of them – are ready. As you might gather, there is a huge amount of work to be done up-front in setting up the trial. As with many such undertakings, good preparation is the key to success.

    The funding is for three years and we have designed the study to fit within this timeframe. We have calculated the throughput of clients using earlier data from the Sheffield IAPT service.

    There is a growing body of opinion that RCTs do not capture the breadth and complexity of the outcomes of counselling/psychotherapy and that NICE, for example, should recognise the validity of qualitative trials and case studies. What are your views on this and how will this RCT augment the evidence base for counselling as an effective intervention for depression?
    RCTs denote one particular design for securing evidence. What is debatable is the weight placed on this specific approach at the expense of other methods.

    As many people know, I have argued long and hard for a complementary paradigm of practice-based evidence. But what this study offers us is the opportunity to combine trials and practice-based approaches by embedding it in a practice-based setting. I am on record as saying that we cannot build a robust knowledge base for the psychological therapies on trials alone – nor, in fact, on any single methodological approach alone. So we absolutely need trials methodology as part of a wider knowledge base. My aspiration would be that we move away from a sense of hierarchies of evidence towards, for want of a better expression, a landscape of evidence that considers the whole picture – the overall weight of evidence: one that values all forms of evidence, recognising that each approach has its own strengths but also its own vulnerabilities.

    Similarly, there seems to be a growing view, particularly among person-centred and psychoanalytic counsellors/therapists, that CBT has become ubiquitous and is being used with clients for whom a long-term, less behavioural approach would be more suitable. Do you have views on this? Why compare CfD with CBT at all as a measure of its effectiveness?
    Having trained as a clinical psychologist, I did further training in CBT and in psychodynamic-interpersonal therapy. So, from a practice viewpoint, I know the value of a range of interventions. But from a research perspective, I also know that there is robust evidence for the efficacy and effectiveness of CBT. So CBT is the benchmark against which other interventions need to be tested.

    Practitioners may have their own views about the appropriateness or effectiveness of particular therapies, but endorsement of specific therapies at a national policy level will only be based on scientific evidence – and that is why we have selected CBT as the comparator. I think it is important that we offer clients a choice of bona fide therapies; no single brand of therapy will suit all clients. But we need to show that what we are offering is effective.

    What about the outcome measures? Clients will be randomly allocated to either CBT or CfD. Given that the two approaches are quite different and work in different ways, how are they comparable? Are there any ethical considerations in the random allocation?
    To provide evidence to NICE of the value of any intervention for depression, there needs to be a bona fide depression measure. RCTs require a single primary outcome measure and we have chosen the PHQ-9, which is mandatory in IAPT services and focuses on depression, as our primary index of clients’ improvement. We will also use the Beck Depression Inventory as a secondary outcome measure, along with the CORE-OM, which taps a broader constellation of items, including aspects of functioning (eg social and close relationships). In addition, clients will complete the standard IAPT measures, including the PHQ-9 and GAD-7 weekly.
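
    For readers unfamiliar with the primary measure, the short sketch below shows how a PHQ-9 total is formed and how totals are conventionally banded for severity; the cut-offs are the standard published ones, not thresholds specific to this trial.

```python
def phq9_total(item_scores):
    """PHQ-9 total: nine items, each rated 0-3, giving a range of 0-27."""
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def phq9_band(total):
    """Conventional severity bands for the PHQ-9 total score."""
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    if total <= 19:
        return "moderately severe"
    return "severe"

# Example: item ratings summing to 12 fall in the 'moderate' band.
print(phq9_band(phq9_total([2, 2, 1, 2, 1, 2, 1, 1, 0])))
```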

    In terms of any ethical concerns about the randomisation, clients have to agree to be randomised – that is, they understand that they stand an equal chance of receiving either intervention. If clients have a strong preference for one intervention over the other – and I stress the word ‘strong’ – then they will not be entered into the trial.

    The trial puts a particular emphasis on ‘equipoise’ and on ‘non-inferiority of outcomes’. Could you unpack the significance of these terms in relation to what you hope to learn from it?
    The term equipoise is crucial in carrying out a trial. It refers to being in a state of balance. In other words, we are not favouring one treatment over the other. We are putting as much emphasis on CBT and its delivery as we are on CfD. We want the CBT that is delivered to be the best possible – and we also want this for the CfD. So this equipoise is an absolute position. It is not ethical to run a trial from a position of non-equipoise.

    Non-inferiority is easiest to explain by briefly describing the three main types of trial: superiority trials, equivalence trials and non-inferiority trials. Most trials have aimed to determine the superiority of one intervention over another condition – an obvious example would be a wait-list condition. By contrast, an equivalence trial would predict no difference between treatments. However, the number of clients required for such trials is huge because you are trying to detect no difference with confidence. This was simply not practical.

    If the comparison being made is between two bona fide treatments – in this instance, CfD and CBT – and the prediction is that CfD will not be meaningfully inferior to CBT, then we can frame this as a non-inferiority trial. So, it may be that the outcomes for CfD do not quite match those of CBT but that the difference is sufficiently small that, for practical purposes, a practitioner would be unlikely to tell the difference. In this situation CfD would be deemed not inferior to CBT within a predetermined margin. The margin is operationalised in terms of points on the PHQ-9, which is our primary outcome measure.

    Importantly, we have set that margin a priori – that is, before running the trial. It is not a matter of getting the results and then debating whether the difference is meaningful or not. Part of the requirement of a trial is that the protocol sets out all these criteria ahead of analysing the data.
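
    As a rough illustration of that decision rule, the sketch below checks whether the upper confidence limit for the difference in mean PHQ-9 improvement falls within a pre-specified margin; the margin and the summary figures are invented for illustration and do not come from the trial protocol.

```python
from scipy.stats import norm

def noninferiority_check(mean_impr_cfd, mean_impr_cbt, se_diff, margin, alpha=0.025):
    """One-sided non-inferiority check on the difference in mean improvement.
    CfD is declared non-inferior if the upper confidence limit of the amount by
    which CBT outperforms CfD stays below the pre-specified margin.
    All inputs are illustrative; the trial protocol defines the real analysis."""
    diff = mean_impr_cbt - mean_impr_cfd            # how much better CBT did on average
    upper = diff + norm.ppf(1 - alpha) * se_diff    # upper one-sided confidence limit
    return upper < margin, diff, upper

# Invented summary statistics: mean PHQ-9 improvement of 7.1 (CfD) vs 7.6 (CBT),
# a standard error of 0.6 for the difference, and a 2-point margin.
non_inferior, diff, upper = noninferiority_check(7.1, 7.6, 0.6, margin=2.0)
print(f"observed difference {diff:.1f}, upper limit {upper:.2f}, non-inferior: {non_inferior}")
```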

    Readers may like to know more about you and your team – your areas of interest and expertise.
    I have been involved in research on the psychological therapies since 1985 – the Sheffield Psychotherapy Projects – and then in promoting routine outcome measurement – the development of CORE. This has led me to promote practice-based evidence as a paradigm for practitioners, which has then led me to my current focus on therapist and service effects – that is, understanding the natural phenomenon of variability across practitioners and services. I think these issues are as important as focusing on specific interventions.

    The team of investigators is one short of a rugby team in numbers – there are 14 of us. There is a core Sheffield team – me and Dave Saxon (project manager/statistics), Gillian Hardy (clinical psychology/process research), Steve Kellett (clinical psychology/CBT), Glenn Waller (clinical psychology/evidence-based treatments), John Brazier (health economist), and Sue Shaw (service user). We also have Mike Bradburn, who is our expert from the university’s Clinical Trials Research Unit. From Sheffield Health and Social Care NHS Foundation Trust, we have Simon Bennett (service manager). Further afield, the co-investigators are Dr Lynne Gabriel (York St John) and Professors Pete Bower (Manchester), Robert Elliott (Strathclyde), Michael King (UCL) and Steve Pilling (UCL). There is also funding for a full-time research assistant and a PhD student, who will assist with the trial. There are also other people at Sheffield currently working on other projects and they will become involved in the trial at later stages.

    It should also be said that, even in these early stages, BACP members have been incredibly helpful and supportive, particularly in relation to training issues, which is very encouraging and for which I am very grateful.

    You are publishing a regular newsletter reporting on the trial’s progress. Is this usual in such trials?
    The newsletter is specifically for the practitioners in the Sheffield IAPT service – the counsellors, cognitive behavioural therapists, and the psychological wellbeing practitioners. We feel it is important that we give the practitioners as much information as possible about the trial and its ongoing progress; we are dependent on them. So we are sending out a newsletter every month or thereabouts. At this stage it is only for them, as it is a good way of communicating directly with them. Once the trial gets going, we will give brief updates on both the BACP and University of Sheffield websites, which will be accessible to everyone.

    And finally, there is an ancillary study, unconnected with the RCT, that will be using the same sample groups and data. Could you describe that for us and how you think it may augment knowledge and practice?
    A parallel research study is being carried out by Jo-Ann Pereira under my and Steve Kellett’s supervision. This focuses on components that the practitioners bring to their practice, be it counselling or CBT. So it focuses on practitioners as people rather than the interventions they are delivering. It seems to me that, if we are to understand and enhance psychological therapies, we need to pay as much attention to practitioners as we do to interventions. Practitioners are our greatest resource and it seems appropriate to place them at the heart of research activity. Data collection for this study will be completed before practitioners begin the trial, in order to keep the study and trial distinct, but the data from the study will provide rich and complementary material to that from the trial.

    Thank you, and good luck!


    Michael Barkham is Professor of Clinical Psychology and Director of the Centre for Psychological Services Research at the University of Sheffield. Previously he was Professor of Clinical and Counselling Psychology and Director of the Psychological Therapies Research Centre at the University of Leeds. He is a Fellow of the British Psychological Society, a past Joint Editor of the British Journal of Clinical Psychology, and a past Vice President of the Society for Psychotherapy Research (UK Chapter).
