A few days after the death of Anthony Bourdain last summer, a suicide researcher at the Royal Mental Health Centre in Ottawa downloaded four years of public tweets by @Bourdain onto the computer in his office.
Zachary Kaminsky was certainly not the only one parsing the stream of Mr. Bourdain’s personal tweets for clues to why the famed chef, author and television travel personality had taken his own life. But he had a unique tool – a computer algorithm designed to identify and sort tweets by characteristics such as hopelessness or loneliness, to reveal a pattern only a machine could find.
In simple terms, the algorithm, still in the early trial stage, charts the mood of the person tweeting in real time, with the goal of using that information to peer into the future. The idea being tested by Dr. Kaminsky, and researchers around the world, is whether artificial intelligence can predict suicide risk in time to intervene. Had Anthony Bourdain, even unconsciously, left clues of the tragedy to come?
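For readers curious what "charting mood in real time" might look like in practice, here is a minimal sketch – not Dr. Kaminsky’s actual system, and with a made-up scoring step standing in for whatever model assigns each day of tweets a number for a trait such as hopelessness. It simply smooths those daily scores with a rolling average and flags when the line climbs above a chosen baseline.

    # Illustrative only: a toy mood-tracking chart, not Dr. Kaminsky's algorithm.
    # Assumes each day's tweets have already been given a numeric score
    # (for example, for hopelessness or loneliness) by some upstream model.
    from datetime import date
    from statistics import mean

    BASELINE = 0.6       # hypothetical threshold for concerning ideation
    WINDOW_DAYS = 7      # smooth day-to-day noise with a rolling average

    def rolling_mood(daily_scores):
        """daily_scores: list of (date, score) pairs, oldest first.
        Returns (date, rolling average, above-baseline flag) triples."""
        out = []
        for i, (day, _) in enumerate(daily_scores):
            window = [s for _, s in daily_scores[max(0, i - WINDOW_DAYS + 1): i + 1]]
            avg = mean(window)
            out.append((day, avg, avg > BASELINE))
        return out

    # Example with invented numbers: a mood line that drifts steadily upward.
    history = [(date(2018, 5, d), 0.3 + 0.02 * d) for d in range(1, 29)]
    for day, avg, flagged in rolling_mood(history):
        marker = "  <-- above baseline" if flagged else ""
        print(day, round(avg, 2), marker)

The real research question, of course, is how to produce those daily scores and where to set the baseline; the chart itself is the easy part.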
Suicide is a rare event – in Canada, the annual rate is 11 deaths per 100,000 people. Still, every day, 10 Canadians die by suicide, and in recent years those numbers appear to be rising slightly. In many cases, there are signs of trouble brewing. The majority of victims suffer from a mental illness. They have usually sought help from a doctor or visited an emergency room. Many have families trying desperately to get the right help. They exist in the system, as data points, medical results, appointments and paperwork. Marshal all that information with a clever algorithm, epigeneticists such as Dr. Kaminsky propose, and lives might be saved.
According to a 2016 study published in the journal Psychological Bulletin, which analyzed more than 350 studies going back 50 years, no single identified risk factor, or a standard set of factors, predicted suicidal thoughts or behaviours better than random chance. Even depression, one of the most significant risk factors, is a poor predictor on its own, the study found – most people get better, and the rate of suicide among those with depression is roughly the same as in the general population.
To truly predict suicide, the authors argued, would require the consideration of hundreds of risk factors, a daunting task for a clinician. Even then, this would only settle the “who” part of the prediction. The “when” would likely require thousands of data points for a single patient – a calculation only possible for a properly trained machine. As Joseph Franklin, a psychologist at Florida State University and the lead author of the study, says, it’s the difference between knowing that it’s hurricane season and accurately forecasting that one will strike in two weeks, leaving enough time to prevent a disaster.
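The argument that many weak factors beat any single one can be made concrete with a toy simulation. The sketch below is not from the study; it generates synthetic data in which each of 200 factors is only faintly related to a rare outcome, then shows that a model combining all of them discriminates better than the single best factor does.

    # Synthetic illustration of the argument, not the study's actual analysis:
    # many individually weak risk factors, combined, beat any single factor.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_people, n_factors = 20000, 200
    X = rng.normal(size=(n_people, n_factors))
    # Each factor nudges risk only slightly; the outcome is rare, as suicide is.
    logit = X @ rng.normal(scale=0.05, size=n_factors) - 4.0
    y = rng.random(n_people) < 1 / (1 + np.exp(-logit))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    single_best = max(roc_auc_score(y_te, X_te[:, j]) for j in range(n_factors))
    combined = roc_auc_score(
        y_te, LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    )
    print(f"best single factor AUC: {single_best:.2f}")  # close to chance (0.5)
    print(f"combined model AUC:     {combined:.2f}")     # noticeably higher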
For Suzanne and Raymond Rousson, time ran out on January 22, 2017, when their 27-year-old son, Sylvain, was pronounced dead following a suicide attempt. The health-care system, his mother believes, missed its chance to save him.
A few days before Christmas, her phone rang. “Please come, Mom, and rescue me,” Sylvain begged. He was worried he was about to harm himself. His parents called 911 and raced to the emergency department of the Ottawa Hospital. “Now,” they thought, “he will get the help he needs.”
But two hours later, he walked out to the waiting room, released. The Roussons still don’t know what he told the doctors, or what they asked him. “All we know is that they talked to him for a bit, and sent him home,” Ms. Rousson says. If the doctors had asked, his parents would have said that Sylvain’s depression had been getting worse for months, that he’d stopped working as a plumber, lost weight and refused to eat, except for the chocolate mini eggs his mom bought in bulk. But they never got a chance. Sylvain was gone four weeks later.
“My son,” Ms. Rousson says, “should still be alive.”
The situation is complicated – hospital emergency departments are busy, especially near the holidays. Sylvain was an adult who required no guardian in the room; perhaps he spun a tale of wellness. But it’s clear, Dr. Kaminsky says, that overtaxed emergency-department physicians and family doctors – the first point of contact for many patients with mental illness – need reliable, real-time tools to better assess risk. Ideally, he says, these tools would identify people such as Sylvain in the system well before they show up in emergency. “What if these tests can redraw the front lines,” he suggests, “and get people to the services they need before they are in crisis? I think this is where we need to go.”
Even before Dr. Kaminsky began exploring the role of algorithms in identifying suicide risk, he was working on another piece of the puzzle. He and a team of researchers at Johns Hopkins University identified a connection between changes to a gene called SKA2 and a higher suicide risk. Those changes could be found in biomarkers in the blood, raising the prospect that a blood test could identify risk, as well as track over time whether treatments such as medication were working.
After a study outlining these findings was published in 2014 in the American Journal of Psychiatry, Dr. Kaminsky received e-mails and phone calls from family members grieving a loved one’s suicide, who offered to send hair or blood for analysis. In one, a worried mother listed family members who had taken their lives – a grandfather, an uncle and, most recently, one of her sons. Would Dr. Kaminsky test the blood of the son she had left?
“We couldn’t do that,” he says. The research on the biomarker was too preliminary, and it wouldn’t have been ethical. But that e-mail, in particular, stayed with him.
What other clues could research uncover?
The line graph that Dr. Kaminsky’s computer created for Anthony Bourdain tells a clear story. Over 250 days, according to the algorithm, his mood goes up and down, spiking in a couple of spots above the baseline score for suicidal thoughts – what experts call “ideation” – but always dropping out of danger a few days later. Then, around 20 days before his death, the line begins to rise steeply, and it keeps rising, as if climbing a skyscraper, until the fateful day of his suicide. In those final weeks of tweets, Mr. Bourdain defends the women making allegations against Harvey Weinstein, mocks the royal wedding and grumbles about criticism of a Newfoundland segment on his show. There’s nothing particularly remarkable. But according to Dr. Kaminsky’s algorithm, tragedy was brewing.
This doesn’t mean the algorithm works every time – these are preliminary tests, Dr. Kaminsky says. The AI program uses a machine-learning approach designed to improve as it’s trained with more examples. It recognizes subtle patterns in words related to mood – not only phrases such as “I feel lonely,” but also “my friends are ignoring me.” The more phrases, the better it works.
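To give a rough idea of what such a text model looks like under the hood – this is a generic sketch, not the team’s actual classifier, and the labelled phrases below are invented stand-ins – a standard approach is to turn each tweet into word and phrase features and train a classifier on examples labelled for the mood of interest.

    # Generic sketch of a phrase-based mood classifier; the labelled examples
    # below are invented stand-ins, not real training data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "I feel lonely tonight", "my friends are ignoring me",
        "nothing matters anymore", "what a great day with family",
        "excited for the trip next week", "dinner with old friends was wonderful",
    ]
    train_labels = [1, 1, 1, 0, 0, 0]  # 1 = loneliness/hopelessness cues, 0 = neutral

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # single words and two-word phrases
        LogisticRegression(),
    )
    model.fit(train_texts, train_labels)

    # Score new tweets between 0 and 1; higher suggests more mood-related language.
    for tweet in ["everyone forgot about me again", "can't wait for the weekend"]:
        print(tweet, "->", round(model.predict_proba([tweet])[0, 1], 2))

With only a handful of examples the scores are crude; the point of the training effort Dr. Kaminsky describes is precisely to feed the model enough labelled phrases that the subtle patterns emerge.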
In Mr. Bourdain’s case, the computer had hundreds of tweets to analyze. While using social media has the advantage of immediacy, it also assumes the breadcrumbs have been left in the first place.
Still, the idea of using personal data to better identify health risks has inspired a new round of research, both public and corporate. If computers can anticipate our interest in a new television, surely all that free-flowing information can be corralled for more altruistic purposes?
Facebook has been using artificial intelligence since November, 2017, to locate phrases that may be signs of distress – for instance, one user asking another, “Are you okay?” In response, it can send pop-up ads about suicide hotlines, or highlight ways people can respond when they are worried about someone, prompting them to ask certain questions, report the incident to Facebook or call for help themselves. The approach is meant to provide support, not predict individual behaviour, explains Kevin Chan, the head of public policy at Facebook Canada. But in extreme circumstances where harm appears imminent, Mr. Chan says, Facebook moderators have contacted emergency services, though he declined to say how often this has happened.
Other data trails may prove even more useful than social media. Florida State University’s Dr. Franklin was part of a team that created an algorithm that used anonymized electronic patient records, including details such as appointments, prescriptions and emergency-room visits, for two million patients in Tennessee to predict suicide attempts. According to a 2017 paper published in Clinical Psychological Science, the algorithm was more than 80-per-cent accurate at predicting whether someone would attempt suicide within two years, and 92-per-cent accurate at predicting an attempt within one week.
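The paper reports those accuracy figures; the code below is only a schematic of the general technique – a machine-learning classifier trained on de-identified health-record features – with every feature name and all the data invented for illustration, not drawn from the published model.

    # Schematic of record-based risk modelling, not the published model:
    # the features and data below are synthetic placeholders for illustration.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 5000
    records = pd.DataFrame({
        "er_visits_past_year": rng.poisson(1.0, n),
        "missed_appointments": rng.poisson(0.5, n),
        "psych_prescriptions": rng.poisson(0.8, n),
        "prior_self_harm_code": rng.integers(0, 2, n),
    })
    # Fake outcome loosely tied to the features, just so the example runs end to end.
    risk = 0.02 + 0.03 * records["prior_self_harm_code"] + 0.01 * records["er_visits_past_year"]
    records["attempt_within_two_years"] = rng.random(n) < risk

    X = records.drop(columns="attempt_within_two_years")
    y = records["attempt_within_two_years"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("AUC on held-out records:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 2))

In the real study the challenge was scale and validation – millions of records, careful checks of how the model performed on patients it had never seen – rather than the modelling code itself.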
But one tool is not likely to be enough. Most experts say predicting suicide – especially down to a precise time frame – will require multiple data sources, such as patient records, clinician questionnaires, social-media activity and blood tests. Dr. Franklin suggests that level of precision is still a decade away.
This leaves some time, then, to resolve concerns about privacy, how such information would be used, particularly by employers or insurance companies, and whether it would be available to worried family members watching over a loved one. Dr. Kaminsky envisions machine learning being used by front-line health-care workers and therapists to tailor treatment. Should mental-health algorithms become the domain of social media companies, with their reams of personal data?
“Are we getting into Minority Report territory?” asks Juveria Zaheer, a clinician scientist who studies suicide at Toronto’s Centre for Addiction and Mental Health, referring to the Tom Cruise movie in which crimes are predicted before they happen, with a false prediction forcing the hero on the run. “We still live in a world where mental health can be quite stigmatizing. Picking people out and saying, ‘You are at risk of suicide based on X and Y,’ what is the effect of that?”
The algorithms aren’t perfect – for instance, to avoid missing anyone, they err on the side of caution by sometimes clumping the wrong people into the suicide-risk group. Even so, as the researchers involved point out, they are still significantly more accurate than average clinical assessments by humans.
But Alexander Niculescu, a psychiatrist at the Indiana University School of Medicine, argues that this may be because clinicians are relying too much on gut feelings and not enough on objective information they can easily collect during patient appointments.
Therapy guided by data-driven patient feedback – using standard questionnaires and checklists – is also a growing trend among psychologists. In Dr. Niculescu’s work, a 22-question suicide checklist – which never uses the word suicide – has been found in trials to reach accuracy in the 80-per-cent range for psychiatric patients, an emergency-room population and members of the U.S. National Guard.
What’s more, Dr. Niculescu’s lab has discovered additional biomarkers beyond those tied to SKA2, allowing for the assessment of suicide risk using blood tests. In a small-scale study, when the app-based checklist was combined with a blood test, the accuracy rates were even higher.
This research has limitations, but Dr. Niculescu points out that machine-learning experiments, while an exciting new avenue, have potential weaknesses of their own – what is predictive for the data set used to train the computer, he suggests, doesn’t necessarily work as well for other populations. Machines also need a lot of data to get predictions right in the general population, such as the massive amounts accessible to Facebook or Google.
Still, even Ottawa is now investing in machine learning. The Public Health Agency of Canada recently put out a contract call for a company to use social media and machine learning to detect patterns in suicidal behaviour among Canadians. The project could explore whether certain events put people at risk, or what factors may increase hopelessness in different demographic groups. The data collected would be anonymous, taken only from public social-media postings, and could potentially be used to design suicide-prevention campaigns and programs, a public-health spokesperson said.
But aside from his skepticism that a machine will ever be able to perfectly predict an event as complex as suicide, Mark Sinyor, a psychiatrist at Sunnybrook Health Sciences Centre in Toronto, questions whether the goal misses a larger point: Everybody thinking about suicide is in distress and needs treatment. The key is to identify and reduce risk factors based on the individual patient. It is important for people to understand, he says, that “suicide is preventable and there is hope.”
A tool is only helpful, he says, if clinicians are still able to question its results against the actual person sitting before them. And even then, a tool is only useful if the health-care system has the resources to provide good care for the patients in question – not only when they arrive in crisis, but, ideally, early enough to prevent them from reaching that crisis in the first place.
Ultimately, as with other areas of medicine, the search is on for a personalized treatment for suicidal urges, one designed to address the unique circumstances, social context and biological factors that lead someone down that dark path.
Last year, Dr. Kaminsky received an update from the mother who had so desperately sought a blood test for her son. Her family had lost another loved one to suicide. She wanted to know: Was Dr. Kaminsky any closer in his work?
“Science moves slowly,” he says with a sigh.
One way or another, it will likely fall to a machine to speed it up.
If you are having thoughts of suicide, call Kids Help Phone at 1-800-668-6868 or Crisis Service Canada at 1-833-456-4566, or visit crisisservicescanada.ca.