It was the third appointment and Cathy Keough’s client – a 28-year-old man with moderate depression – was not getting better. He certainly seemed better – less anxious than his first visit, not so meandering with his answers, sleeping more and eating better. But Ms. Keough, a social worker and the director of counselling initiatives at the Calgary Counselling Centre, had the evidence in a graph in her hands: He was as depressed as when he had started therapy. Before every session, the young man filled out a standardized 45-item questionnaire on an iPad in the waiting room, to track how well he was doing from one week to the next. The survey covers a wide range of subjects, including sleep, mood, alcohol and drug use, and stress at work. Clients score themselves on statements such as “I have no interest in things” or “I am concerned about family troubles.” The system then flags worrisome trends. Now, Ms. Keough had the data in front of her: The therapy wasn’t working; his scores weren’t improving. “What am I missing?” she wondered.
Therapists, as it happens, often miss things – a reality readily admitted to in last year’s How and Why Are Some Therapists Better Than Others?, a collection of essays by North American experts that considers how therapists might get better at their jobs. Like all human beings, they can be tricked by bias or lose perspective, despite their training. They get bored. They have distracted days. Another troublesome research finding: They tend to overestimate how well their patients are doing, and miss the ones who are getting worse. Louis Castonguay, a Canadian psychologist at Penn State University, who co-edited the book, says, “It’s humbling. But we aren’t as good as we like to think we are.”
There is, however, a way to boost the odds, a relatively easy test to help diagnose when therapy is going badly. Known as routine measurement or feedback-informed therapy, it requires asking patients to fill out a survey – such as the one Ms. Keough used with her patient – on how they are doing, session by session. The approach is akin to the mental-health version of lab work, and more than a decade of clinical trials suggest that this approach can improve therapy and flag early when it’s going sideways.
We are socialized and trained to believe that our experience with a client will tell us what we need to know. This is a real culture change.
— Robbie Babins-Wagner, chief executive officer of the Calgary clinic
In England – where a groundbreaking $630-million publicly funded psychotherapy program was launched in 2008 – these surveys are used at every clinic in the country, for every session. Meanwhile, in Canada, measures to track progress in therapy have been slowly gaining ground, with some clinics such as the Calgary centre adopting the approach as far back as 2004. Measuring progress will also be part of new programs in Quebec, which announced plans last December to spend $35-million a year to expand public coverage of therapy, and in Ontario, which is piloting a small-scale version of the British program.
“The adoption of vital-sign metrics is what pulled medicine out of the dark ages two centuries ago,” says David Ross, the recently retired manager and national clinic co-ordinator of mental health centres for Veterans Affairs Canada, which has introduced a system of routine measurement similar to the one used by the Calgary clinic. “It’s about time we did the same with mental health.”
Advocates such as Dr. Ross compare the questionnaires to clinical tests used in other areas of medicine, and say their wider adoption could transform therapy. If you were being treated for diabetes, the thinking goes, would you keep seeing a doctor who didn’t test your insulin? Or a cardiologist who never consulted a heartbeat monitor? Why is treating mental illness any different?
Except that treating mental illness is different. Many of the more than seven million Canadians who will experience mental illness this year will have to pay for the help they need – spending many months and hundreds, even thousands, of dollars on therapy, the treatment most people prefer to medication. Meanwhile, mental illness devastates families and workers, takes billions out of the economy, and leads to a larger, long-term burden on the health-care system. Depression and anxiety are the most common reasons why people miss work – the cost of these in terms of lost gross domestic product is $50-billion annually in Canada – and getting the right treatment early is important to recovery. Having an untreated mental illness also makes managing other chronic health problems more complicated, and costly. In fact, a 2017 Canadian study found that providing publicly funded therapy to those who need it would more than pay for itself.
So it’s vital to make sure that therapy gets the best results possible. But most therapists, especially in private practice, work with little scrutiny or supervision, seeing who they want and offering treatment as they choose. It’s not that different in public clinics, which can often say how many people walk through their doors, but not how many get better. Routine measurement, advocates say, is not only a way to improve care for patients, but will also make the system more accountable.
That idea makes many therapists uncomfortable. They say the surveys will take too much time and are too intrusive, that they’re a paint-by-numbers approach to a delicate art. They don’t need an instrument to help patients, these critics say, they are the instrument.
“We are socialized and trained to believe that our experience with a client will tell us what we need to know,” says Robbie Babins-Wagner, chief executive officer of the Calgary clinic. “This is a real culture change.”
What happens next could wrench open a long-closed door, and transform the practice of therapy.
At a Monday lunch at the Royal Ottawa Mental Health Centre, David Clark is offering statistics to a packed room of hospital staff. Dr. Clark, an Oxford psychology professor, is credited with creating the therapy program in England. He recently co-authored a paper in the prestigious medical journal The Lancet, making the case for other countries and clinics to adopt “a similar approach to data collection and reporting.” Now, he is here in Canada to explain how the program works; the following day, he is off to meet with the Ontario government.
Since 2008, the British government has created more than 60 centres across England, offering cognitive behavioural therapy to citizens with depression or anxiety. The system now sees about 950,000 adults a year, and 60 per cent receive a course of therapy. (This is still, Dr. Clark notes, only 16 per cent of the people who are believed to actually need treatment – the goal is to reach 25 per cent by 2021, as well as to expand the types of therapies offered.)
Speaking to an attentive crowd at the Royal, Dr. Clark clicks through tables and charts, filled with data as recent as last December. Nationally, among those clients who come for at least two sessions, 51 per cent recover, another 16 per cent improve, and 6 per cent get worse. He points to a graph tracking how recovery rates have risen steadily over the past eight years. Based on the data collected, he says, the system now knows that recovery is less likely if people wait longer than six weeks for their first appointment, or quit therapy too early – so they’ve made those issues priorities.
He knows all this because session-by-session outcomes are collected for 98 per cent of people who receive therapy in the system. Every month, each centre is required to submit detailed statistics to a national database, and their results are published four times a year.
Dr. Clark is a pragmatist – to make the case for publicly funded therapy, he went to one of the most high-profile economists in Britain, Richard Layard, at the London School of Economics. Together, they did the math to show how the cost would pay for itself in reduced physical health-care expenses and savings when people went back to work. Dr. Clark and his team also promised that the system would measure its results, volunteering to be more publicly scrutinized than the rest of the health-care system is. This appeased the bean counters. But ultimately, he says, measuring outcomes would also improve the therapy itself.
Providing therapists – and clients – with session-by-session progress measurements has been found in research to improve results, because it catches sooner when therapy isn’t working, which can prevent people from dropping out. In clinical trials, for example, routinely measuring progress has been found to double positive outcomes for clients who weren’t improving in therapy, and to cut by more than half the number of people who were getting worse.
Almost all therapists think they can predict negative outcomes, and they are horrible at it. They can hardly identify a single case when asked to do so.
— Michael Lambert, the American psychologist who designed the survey currently being used at the Calgary clinic
“If you don’t have the measure, you don’t have the fine-grained analysis of what is changing,” Dr. Clark explained in an interview with The Globe and Mail at the Royal. “You could get it by asking lots of questions, but then you would spend most of your therapy session just getting that information and less time dealing with it.”
That doesn’t mean progress monitoring works in every case. A 2016 clinical trial found that client feedback surveys didn’t help – and may have hurt – severely ill patients arriving through emergency rooms for treatment; researchers hypothesized that since these patients often waited a long time to get help while their symptoms worsened, asking them to reflect on their own condition may have been harmful. A 2017 clinical trial also found that client feedback produced the same recovery rates as a non-feedback control group for people being treated for eating disorders in group sessions.
Even experts who argue that standardized progress monitoring should become regular practice admit that, like any medical test, it has limitations. The right measure needs to be used, for example, and therapists need to be trained to read the results and incorporate them into therapy.
But, while advocates acknowledge the limitations, they see progress monitoring as a chance to improve results, and make the system more accountable to patients. Various studies suggest that roughly 50 per cent of people finish therapy with little improvement, and about 8 per cent may leave therapy even worse off than before. (If that sounds low, it’s about the same as antidepressants, although therapy’s benefits have been found to last longer than drugs once treatment stops, and with significantly fewer side effects.) Still, those statistics haven’t improved much in several decades.
Therapists, however, have a more rose-coloured view of their own skills: in a 2012 U.S. survey, psychologists estimated, on average, that 85 per cent of patients improved in their care. This optimism extended to their professional stature as well: one quarter ranked themselves in the top 10 per cent when compared with their peers, and none admitted to being below average.
“Almost all therapists think they can predict negative outcomes, and they are horrible at it,” says Michael Lambert, the American psychologist who designed the survey currently being used at the Calgary clinic. “They can hardly identify a single case when asked to do so.”
Obviously, some clients would eventually recover without feedback measures. The question, experts say, is which ones? Overestimating success rates is an all-too-human bias – psychologists themselves call it the “better-than-average effect” and it’s hardly unique to them.
But an oncologist can rely on a CT scan to show when chemo isn’t working. Many private therapists, says Tony Rousmaniere, a University of Washington psychologist and researcher, are going on memory and note-taking.
In his book Deliberate Practice for Psychotherapists, Dr. Rousmaniere describes how those faulty tools led to his own therapeutic blind spots, and makes the case for more objective measures. During his clinical training, he recalls feeling as if he was flailing in the dark with too many clients, and making excuses for the ones who dropped out. He was devastated when a single mom with a young son died of an overdose while he was struggling to help her in therapy. But experience, he came to see, was not the fix: his clinical supervisors were relying on his own biased interpretations to give advice. He needed better information to both catch and learn from his mistakes, and to get real help from colleagues, so he began to deliberately collect it: videotaping sessions, following up with patients after they had stopped therapy, and documenting client feedback. While researchers are still trying to discern why some therapists – so-called “super shrinks” – are so much more successful than their peers, many experts believe that one reason is that they often use these tools.
There are different surveys, of varying lengths and for different illnesses, that therapists can use for measurements, some with more evidence behind them than others. Dr. Lambert’s is a 45-question checklist that covers areas such as alcohol use, sleep patterns, and social and workplace relationships. The Calgary clinic began using it with paper and pencil, but has since switched to iPads, so therapists now receive automatic alerts that flag specific issues. They also use a growing database of client responses to predict whether treatment is on track. Dr. Babins-Wagner says the survey is not meant to replace a therapist’s judgment, but to help guide it.
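How such an alert might work is simple to illustrate. The sketch below is a hypothetical Python example, not the clinic’s actual software: the cut-off and change thresholds, the alert wording and the function name flag_client are invented for illustration. It flags a client whose scores, from intake to the most recent session, show no reliable improvement.

```python
# Hypothetical illustration only: a toy version of a session-by-session
# "on track / off track" flag. Higher scores mean more distress, and the
# thresholds below are assumptions, not the clinic's validated values.
from typing import List

CLINICAL_CUTOFF = 63   # assumed score separating clinical from non-clinical range
RELIABLE_CHANGE = 14   # assumed minimum change that counts as real rather than noise


def flag_client(scores: List[int]) -> str:
    """Classify a client's trajectory from intake to the most recent session."""
    if len(scores) < 2:
        return "insufficient data"

    change = scores[-1] - scores[0]  # positive = more distress, negative = improvement

    if change >= RELIABLE_CHANGE:
        return "alert: reliably worse, review the treatment plan"
    if change <= -RELIABLE_CHANGE and scores[-1] < CLINICAL_CUTOFF:
        return "recovered: reliable improvement, now below the clinical range"
    if change <= -RELIABLE_CHANGE:
        return "improved: reliable change, still in the clinical range"
    return "no reliable change: therapy may be off track"


# Example: three sessions with essentially flat scores get flagged for review.
print(flag_client([78, 76, 77]))  # -> no reliable change: therapy may be off track
```

In this toy example, a flat trajectory like the one Ms. Keough saw at that third appointment would land in the final category and prompt a review.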
Efforts to use data to assess therapeutic results – and even predict outcomes – go back as far as 2005. A study from that year, published in the Journal of Clinical Psychology, followed 332 clients and found that therapists overestimated how many people recovered and accurately predicted only one who would decline in treatment; the computer algorithm overestimated how many clients would decline, but accurately predicted 20 of the 26 cases that actually got worse. Not perfect, but better than guessing.
Given the cracks in clinical judgement, David Ross says every therapist should be able to clearly explain how they will know if a client is on the path to recovery, or that their therapy is working. “If they can’t,” he advises, “head for the hills.”
At that third appointment, in Calgary, Ms. Keough showed her young male client his scores, and put the question to him: What are we missing here? Clients not seeing any benefit from therapy often just stop coming – they blame themselves or they are reluctant to confront their therapist with a complaint.
Ms. Keough worried that if she didn’t get it right soon, her client would lose hope and drop out – as research suggests often happens. He was struggling at work, so his job was also at risk.
She suggested to him that therapy was not tackling the true cause of his depression, and he admitted something new: he was convinced that his co-workers didn’t like him. “He didn’t realize how lonely he was,” she recalls. They began to work on the “art of friendship,” on how to start a conversation, and how to handle rejection.
Adding progress measures was a big change to how she worked, Ms. Keough says, but the data has become essential to her job. “I felt when I was guessing, I could fill in the blanks any way I wanted. I could make it the clients’ problem – they weren’t ready, they weren’t capable, they were distracted. I never really made it my fault, my responsibility.”
Measurement can contradict a therapist’s first impression. Ms. Keough recalls the case of a couple who came to see her. The husband, who was ruminating about suicide following a drunk-driving charge, was her client. But when his wife also filled out the survey, the results showed she was even more depressed than her spouse. Ms. Keough was able to offer her help from the first session. Without measurement, she says, she would have overlooked the wife’s silent suffering.
For reticent clients, routine measurement can be a way to highlight problems without having to do it face to face with the therapist, or to see for themselves when they are making progress. “It was great for my self-esteem,” says Jennifer Carter. While on stress leave from her public service job in Ottawa, she found a therapist who regularly surveyed how she was doing, and showed her the results on a chart. “When I made progress, I could see it, and for me that’s huge, to see on paper that there was legit evidence that I was getting better.”
Monitoring outcomes also prevents a therapist from pursuing areas that aren’t relevant to the client, Ms. Keough says. A client, for example, might mention a past sexual assault, but not feel the need to delve further into it. “Not everyone who has suffered trauma is damaged, but when a therapist hears that, they think, ‘I have to get in there and help you.’ You end up creating problems. It happens all the time,” says Ms. Keough.
Feedback may also flag when patients need more intensive help, such as medication, or when they don’t need therapy at all. Every clinic has what are called “fat files” – people who have been coming for years without progressing, as if the therapist were, as one put it, “an expensive friend.” But therapy is a clinical treatment, not a chat over coffee. Certainly, a publicly funded system wouldn’t want to pay for that.
Advocates of progress monitoring say it’s patient-focused care. Rather than capping sessions at an arbitrary number – as often happens when people are paying with limited insurance plans, or accessing some public programs – progress monitoring provides evidence for therapy as needed, including longer-term care for complex cases. And it doesn’t limit the kind of therapy used, or how it is delivered, allowing better tracking, for instance, of remote clients receiving online treatment.
But not everyone in the field is so enthusiastic. In 2004, when the Calgary clinic first announced the decision to use progress monitoring, 40 per cent of the therapists quit. A couple more left when, in 2008, it was made mandatory for every appointment, as long as a client was willing (and few refuse, insists Dr. Babins-Wagner). The Calgary clinic is not the only place to see this reaction: In 2012, when British Columbia’s Responsible and Problem Gambling Program began requiring routine measurement for its counselling services, many of its contract psychologists also quit.
Medicine is, by nature, a conservative field – as Dr. Lambert likes to point out, it took 200 years after the thermometer was invented for doctors to begin using it regularly. Progress monitoring is a decades-old concept, and yet in a 2014 study of 1,668 registered Canadian psychologists, only 14 per cent reported, even occasionally, measuring their clients’ progress. Two-thirds said they were unfamiliar with the approach.
No tool is perfect. And no tool covers everything. But a good tool is better than no tool at all.
— Robbie Babins-Wagner, chief executive officer of the Calgary clinic
Kathy Offet-Gartner, a veteran psychologist at Calgary’s Mount Royal University, is plenty familiar with progress monitoring surveys, and she’s not a fan. If she were in private practice, she says, she would not use the surveys. But as a therapist working at the university clinic, she is required to use one that’s been designed for university students, at every third appointment.
She says therapists at the clinic have found that the surveys are often annoying to their clients, or misunderstood by the students who fill them out on a computer in the office. They can also distract from the real reason why clients are coming to therapy, by requiring therapists to discuss issues that aren’t relevant to them on that day, she says. At most, Dr. Offet-Gartner estimates, she has found the survey useful for two out of 10 clients. “I believe a person’s words and actions are far more reliable than an instrument that they lie on, they don’t understand, or they are impatient so they click the same answer over and over again,” she says.
Dr. Offet-Gartner gives the example of one student she saw for the first time who scored as having low distress on the measurement. “And yet, they were in here for an hour and 15 minutes, and cried for 30 of them. They were not in good shape, but no question captured it.” In this case, her client was having a problem with a friendship, but the survey only asks about intimate relationships and family. While she agrees the surveys “can serve a purpose,” particularly with novice clinicians, she suggests that the measurements try to squeeze clients into a standardized box and may prevent therapists from doing the harder work of listening, or asking questions to dig deeper with clients.
She rejects the idea that she’s missing important information without the surveys – if she is seeing clients regularly, she says, she will know if they are getting worse.
“I can ask all the same questions within the confines of the relationship I have, in a language I know the person understands,” she says. “I have been doing this for a very, very long time. And I trust my judgment and my client’s judgment. They will tell me.”
As it happens, an analysis of outcome data has already produced some counterintuitive findings, many of them detailed in Dr. Castonguay’s book. This work suggests that junior therapists often perform as well as – or better than – their supervisors, one theory being that they are more likely to question their approach and seek second opinions, and less likely to be biased by their previous cases. (And yet, Dr. Babins-Wagner says, people looking for a therapist are often advised to prioritize experience.) The research has also challenged an assumption still taught to psychology students – that people naturally get worse in therapy before they get better. “This is old thinking, and it is simply not true most of the time,” says Dr. Babins-Wagner. The outcome-based research suggests that, like Newton’s first law of motion, many patients who get worse in the first few sessions tend to keep getting worse, she says. For instance, a 2015 study of post-traumatic stress treatment for Sept. 11 first responders found that those who deteriorated in the first sessions had poorer results at the end of therapy – prompting the study’s authors to suggest therapists should measure symptoms early and often to catch declines quickly.
Everyone who comes to the clinic is now encouraged to fill out a survey on an iPad, and the scores are then charted over time, revealing trouble spots and improvements. Clients are also asked, at the end of the session, to fill out a second survey assessing how well they feel the appointment went, including whether they felt respected by the therapist, whether they covered the right topics, and whether the treatment plan is clear.
Using the data, Dr. Babins-Wagner and her staff can set goals and fill gaps in knowledge, and the measurements are consulted when therapists team up to brainstorm on difficult cases. The measurement results do not factor into performance reviews.
“No tool is perfect,” Dr. Babins-Wagner admits. “And no tool covers everything. But a good tool is better than no tool at all.”
This summer, the Canadian Psychological Association is set to release the results of a study on progress monitoring. The report is expected to recommend that monitoring both outcomes and client progress become a standardized approach in therapy, as well as a required part of accredited training programs. While the recommendations would still need to be approved by the board of the CPA, Giorgio Tasca, the University of Ottawa psychologist who chaired the task force, said he believes that progress monitoring should be required in the public system – with the condition that adequate funding is provided to give therapists the skills to use it. “I hope governments will take the lead on this,” Dr. Tasca says.
Veterans Affairs also uses routine measurements similar to the Calgary clinic’s at 11 post-operational stress centres in the country. The system cost the department about $50,000 to implement, says former manager Dr. Ross, although the bulk of that budget was spent on a private server required to hold the department-wide data, which would then be used to track overall recovery rates and improve services. The Calgary clinic, which treats nearly 8,500 people a year, spends about $15,000 annually to train new staff and students in routine measures. The clinic also built a $150,000 computer-based system that manages accounts and patient records and includes outcome measures. (Dr. Lambert says that an individual psychologist seeing 25 patients a week would pay about $250 a year for a licence to use the survey, plus the cost of an iPad or computer for patient use.)
There’s a strong argument for routine measurement to become a prerequisite of public coverage for therapy, says Martin Drapeau, a professor at McGill and the editor-in-chief of the journal Canadian Psychology – not only to improve treatment itself, but to provide governments with important information to guide budgets and highlight best practices. “You could say, if you want this to be reimbursed, to be part of the system, this [is] one of the conditions: you are going to track outcomes,” Dr. Drapeau says. “And if people aren’t comfortable with it, they can continue with their private practice.”
To become more accountable, Dr. Rousmaniere, the University of Washington psychologist and researcher, also believes that clinicians should make their overall results public. On websites such as psychologytoday.com, people looking for therapists can find out their education, licence, training and treatment type – everything, he writes, “except the most important thing: how effective that therapist actually is.” Dr. Rousmaniere isn’t just floating the idea: detailed annual outcomes of his own private practice are posted on his website, and he often discusses them with his clients. “The public has the right to know health data,” he said in an interview. “My clients should be able to get a rough idea of my performance as part of deciding whether to work with me.”
As for Ms. Keough’s young patient, “once we got on track, it did not take him very long to get better.” His scores steadily improved, and seeing this himself, says Ms. Keough, bolstered his confidence. She saw him nine more times. By then he was no longer clinically depressed, and he stopped therapy. Four months later, he called Ms. Keough to say he’d been on a date.