The patient sits in a worn, upholstered armchair. She’s been having a tough go lately—doubting herself at work, wondering whether her friends truly like her, spending increasing amounts of time in bed, taking fewer showers, and eating less.
The thoughts and feelings are familiar to her: she was diagnosed with a mental illness in her teens and has been hospitalized twice since. She knows it’s time to start talking to someone again, so here she is, in the armchair.
“How are you doing today?”
She inhales, sighs, and types: “OK, I guess,” into her phone.
Though this scene is imagined, ones like it, in which a patient engages with artificial intelligence trained to simulate a therapist, are emerging as a form of treatment. While not yet widespread in North America, smartphone apps that use AI to treat mental health problems appear viable and offer distinct advantages for dealing with one of the world’s leading health issues.
The nascent technology for this is already available in several forms, such as chatbots that replace therapists and give patients tools for developing healthy coping mechanisms. Woebot, for example, is a Facebook-integrated bot whose AI is versed in cognitive behavioral therapy—a widely researched approach that is used in lieu of, or in conjunction with, talk therapy to treat depression, anxiety, and a host of other mental illnesses.
Clinical research psychologist Dr. Alison Darcy developed the AI-powered chatbot with a team of psychologists and AI experts. As she explained in a 2017 interview, the project was developed, at least in part, as an effort to increase access to treatment for those suffering mental health issues.
With Woebot, the user and chatbot exchange messages, which allows the AI to learn about the person and tailor conversations accordingly. Drawing on its grounding in cognitive behavioral therapy, the bot then offers therapeutic tools deemed appropriate for that user. Because this technology is integrated with Facebook Messenger—a platform with 1.3 billion monthly users and not bound by medical privacy rules—Darcy’s bot opens the door to mental health treatment for hundreds of millions of people who might not otherwise gain access due to lack of income, insurance, or time, or out of fear of stigma. And because there’s no real-life human interaction, Darcy says her innovation is not meant to replace traditional therapy but rather to supplement it.
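To make that exchange loop concrete, here is a minimal, hypothetical sketch in Python. It is not Woebot’s actual code: the keyword matcher, the pick_tool function, and the canned CBT-style exercises are illustrative assumptions only.

```python
# Hypothetical sketch of a CBT-flavored chatbot loop (not Woebot's implementation).
# The keywords and exercises below are illustrative assumptions.

CBT_TOOLS = {
    "anxious": "Try a thought record: write the worry down, then list evidence for and against it.",
    "sad": "Behavioral activation: pick one small, pleasant activity and schedule it for today.",
    "worthless": "Look for the cognitive distortion: is this all-or-nothing thinking or labeling?",
}

def pick_tool(message: str) -> str:
    """Return a CBT-style exercise matching a keyword in the user's message, if any."""
    lowered = message.lower()
    for keyword, tool in CBT_TOOLS.items():
        if keyword in lowered:
            return tool
    return "Tell me more about how that feels."

def chat() -> None:
    """Run a simple message-exchange loop, keeping a record of what the user has said."""
    history = []  # a real system would use this history to tailor later replies
    print("Bot: How are you doing today?")
    while True:
        message = input("You: ").strip()
        if message.lower() in {"quit", "bye"}:
            print("Bot: Take care of yourself.")
            break
        history.append(message)
        print(f"Bot: {pick_tool(message)}")

if __name__ == "__main__":
    chat()
```

The point of the sketch is only the shape of the interaction: message in, context recorded, tool suggested.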
Chatbots like Woebot actively engage users, but there are more passive forms of AI mental health therapy as well. These include Companion and mind.me, apps that can be installed on a phone or smartwatch. Left to work in the background, their AI collects data 24 hours a day, without any direct input from the user.
Companion was developed in conjunction with the U.S. Department of Veterans Affairs. Its design “listens” to the user’s speech, noting the number of words spoken and the energy and affect in the voice. The app also “watches” for behavioral indicators, including the time, rate, and duration of a person’s engagement with their device. Based on the understanding that early intervention can be life-saving for those with mental health issues, Companion was originally designed to flag known signs of mental illness in veterans and to share that data with the individual and his or her health care managers.
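The passive approach can be pictured with a small, hypothetical Python sketch: a daily engagement feature is compared against the user’s own recent baseline, and a sharp drop is flagged for follow-up. The flag_change function, the spoken-word-count feature, and the two-standard-deviation threshold are assumptions made for the example, not Companion’s actual method.

```python
# Illustrative sketch (not Companion's code): flag a day when a passive signal
# falls far below the user's personal baseline.

from statistics import mean, stdev

def flag_change(daily_word_counts: list[int], today: int, threshold: float = 2.0) -> bool:
    """Flag today's spoken-word count if it falls well below the user's own baseline."""
    if len(daily_word_counts) < 7:  # need at least a week of history to form a baseline
        return False
    baseline = mean(daily_word_counts)
    spread = stdev(daily_word_counts)
    return spread > 0 and (baseline - today) / spread > threshold

# Example: a fairly steady talker who suddenly goes quiet.
history = [5200, 4800, 5100, 4900, 5300, 5000, 4700]
print(flag_change(history, today=1200))  # True -> worth surfacing to a care manager
```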
Other apps, such as Ginger, take a two-pronged approach to treatment by supplementing AI with human clinicians in the form of licensed therapists and certified psychologists, available to chat when necessary. Ginger uses AI—analyzing data gleaned through surveys and from app use—to help its clinical staff fine-tune therapies according to the needs of each individual client. Some examples of treatments include emotional health coaching, mindfulness, cognitive behavioral therapy, and talk therapy with a trained therapist.
But, like many emerging technologies, these innovations are imperfect. Privacy and mental health experts worry about the potentially deadly consequences of divulging deeply sensitive information online and wonder about the overall effectiveness of treatment.
“It’s a recipe for disaster,” said Ann Cavoukian, who spent three terms as Ontario’s privacy commissioner and is now the distinguished expert-in-residence leading the Privacy by Design Centre of Excellence at Ryerson University in Toronto. “I say that as a psychologist,” she explained in an interview. “The feeling of constantly being watched or monitored is the last thing you want.”
In an article in The New Yorker, Nick Romeo argues that there is little “good data” on the efficacy of AI therapy because it is such a recent development. This view is echoed by NPR Massachusetts in its report on a 300-person study of app-based therapy conducted at Brigham and Women’s Hospital. The director of the hospital’s Behavioral Informatics and eHealth Program, psychologist David Ahern, points out that “There are tens of thousands of apps, but very few have an evidence base that supports their claims of effectiveness.”
Nevertheless, these applications and others like them offer unprecedented—and sorely needed—solutions to the overall lack of access to mental health care. According to national statistics in both Canada and the United States, each year one in five people experience a mental health problem or illness. Canada’s Centre for Addiction and Mental Health puts the economic burden of mental health—including the cost of health care, lost productivity, and reductions in quality of life—at an estimated $51 billion annually. In the United States, the National Alliance on Mental Illness estimates the country loses $193.2 billion each year in earnings as a result of inadequate treatment.
Those staggering statistics are even more alarming when stacked up against the number of people who don’t receive treatment at all. In both countries, at least half of all adults experiencing mental illnesses go untreated. That is to say, they don’t receive or take medication or have any form of counseling. For some people, this might be a choice born out of a fear of what others—family, employers, colleagues, friends, and even doctors—might think. But in many places around the globe, seeking medical help for mental illness is simply a pipe dream.
The Potential User Population Is Enormous
In 2014, 45 percent of the world’s population lived in a country with less than one psychiatrist available per 100,000 people, according to the World Health Organization. The same report found that, worldwide, there were 7.7 nurses working in mental health for every 100,000 people. From a global perspective, access to treatment is so scarce, it could easily be considered a luxury.
Which is why AI and the apps it supports seem so promising. This technology can help overcome the problem of access while simultaneously mitigating stigma. Machines are not thought to be judgmental in the same way a human might be. Charlotte Stix, the AI policy officer and research associate at the Leverhulme Centre for the Future of Intelligence, in England, points out that offering people a way to find help without fear of judgment is meaningful in breaking down that particular barrier.
However, as is often the case with budding technologies, shimmering hopes can be tinged with possible drawbacks; in this case, it’s the possibility that society never overcomes the stigma of mental health. “A potential downside,” says Stix, “could be that instead of eventually receiving expert human support, patients stay with purely algorithmic solutions, and society pretends the problem is solved without actually dealing with the core issue at hand.”
Similarly, any benefits gleaned from the increase in access could be undermined by overreliance, questionable technology, and doubtful effectiveness of diagnosis and treatment. “As with any app, there will be those on the market that do not adhere to a certain standard and ought not to be used under any circumstance, particularly by someone in crisis,” Stix says. “There is a plethora of health care apps, fertility-monitoring apps, and so on, on the market with starkly varying quality.” In other words, she argues, there is already a divide between useful and potentially harmful health care apps, and there’s no reason to believe that this won’t apply to mental health care apps, as well.
Clearly, mental health care workers, like all humans, might also range in capability and potential; however, these practitioners receive training and are bound to certain codes, laws, and standards that can be enforced. By contrast, machines and apps that are meant to help people suffering mental health issues are not yet regulated.
Some of these technologies are peer reviewed, notes Glen Coppersmith, the founder and CEO of Qntfy, a company that analyzes personal data in the hope of improving the scientific understanding of human behavior and psychological well-being.
“That’s a low bar, but that’s already been done,” he says. “There’s a balance to be struck between innovation and finding better solutions to these problems.…But right now, I don’t think there’s enough information for [the government] to adequately write regulations for this.”
Researchers Confront Positives and Negatives
The benefit-drawback dichotomy is evident all over this emerging technology. Analyzing personal data through AI can bring objectivity to a historically subjective field—much as the thermometer did for body temperature, the x-ray for bone health, and MRIs for tissue damage. As IBM researcher Guillermo Cecchi notes, “Psychiatry lacks the objective clinical tests routinely used in other specializations.” This is why he and his colleagues used AI to develop a program that analyzes natural speech to predict the onset of psychosis in young people at risk.
“Novel computerized methods to characterize complex behaviors such as speech could be used to identify and predict psychiatric illness in individuals,” Cecchi’s team notes in an article published in Nature. IBM’s technology was, in fact, so successful that it outperformed traditional clinical assessments.
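One way to picture the kind of speech feature the researchers describe is a coherence score between consecutive sentences. The toy Python sketch below uses simple bag-of-words cosine similarity as a stand-in; the published work relied on more sophisticated semantic models, so the cosine and coherence functions here are purely illustrative, not IBM’s method.

```python
# Toy illustration of sentence-to-sentence "coherence" in speech.
# Real systems use richer semantic models; this is a bag-of-words stand-in.

from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def coherence(sentences: list[str]) -> float:
    """Average similarity of each sentence to the one before it; lower scores mean more disjointed speech."""
    vectors = [Counter(s.lower().split()) for s in sentences]
    pairs = list(zip(vectors, vectors[1:]))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

print(coherence(["I went to the store today", "The store was busy today"]))     # relatively high
print(coherence(["I went to the store today", "Purple engines dream loudly"]))  # near zero
```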
The efficacy of AI for diagnosis was also highlighted in Romeo’s New Yorker article. In it, Stanford University psychiatry professor David Spiegel said that, in time, an AI machine could, unlike a human, have perfect recall of every past interaction with a given patient, combining any number of otherwise disconnected criteria to form a diagnosis, “potentially [coming] up with a much more specific delineation of a problem.”
On the flip side, having all of a patient’s highly sensitive and personal data online threatens privacy. This problem concerns Stix. “If you choose to discuss your personal and mental health issues through an app, it can be unclear to what degree this information, and your sensitive data, is stored and used at a later point—and for what,” Stix says. “You may have signed a terms-and-conditions agreement before using the app, but these can be unclear and, particularly for people in a vulnerable position, might not suffice.”
Privacy questions loom large over people’s online activity, in general. For this specific technology, the critical questions relate to where each user’s data goes, how it’s used, and who owns it, said Cavoukian. “We have to be so careful with AI, because AI has amazing potential—there’s no question,” she says, “but people often talk about the potential for discrimination, for tyranny.”
Artificially intelligent technologies are built on algorithms trained with large data sets. But if those data sets are biased in certain ways—for instance, says Cavoukian, if they only take into consideration certain parts of the population—there can be dramatic and devastating implications.
“I always tell people, be aware of the unintended consequences. You don’t know where [your data] is going to end up,” she says. “If it’s going to end up in the hands of your employer or your insurer, it can come back to bite you. And you have no idea how that can play out.”
The Health Benefit Offsets the Privacy Risk
Years ago, Coppersmith and colleagues at Qntfy scraped data from publicly available posts on social media. But today, so many people are “donating” their data that scraping is no longer necessary. At Qntfy, each person who volunteers their data decides which accounts can be accessed—Facebook, Twitter, Reddit, Fitbit, or Runkeeper, for instance, and sometimes all of them.
“The privacy consciousness of the general public has forced everyone to be super upfront,” says Coppersmith. For Qntfy’s part, the user agreement is less than a page long and sets out in bolded words what each person is authorizing the company to do with their data. The privacy policy available online is comprehensive and stipulates that any data passed on to a third party is for research purposes or product improvement and is nonidentifiable.
While this particular company takes steps to protect the sensitive and personal information of individuals, there’s no immunity from data breaches. “It’s a legitimate concern,” Coppersmith says, when asked about the possibility of his company’s data sets being compromised. “But it’s no different from breaches at Facebook, Amazon, or anything else. If you’re still posting to social media, if you’re still doing online banking, you’ve made a choice—a risk-reward trade-off.”
In this case, as Coppersmith says, the trade-off is between the risk of a data breach that could lead to your personal data falling into the hands of unknown players and the benefit of having a better picture of your mental health. “If we’re able to give your clinician superpowers to better understand you,” asks Coppersmith, “is that worth the risk of perhaps your data being compromised at a certain point in time?”
But looking at the issue in an either-or, win-or-lose manner—assigning a conflict to it—is a false choice, says Cavoukian. There are ways to reduce the risk of personal data being reidentified to less than 0.05 percent. “Damn good odds,” she adds, explaining that with the right safeguards, a person would be more likely to be hit by lightning than to have their data reidentified.
Having that protocol and, most importantly, designing privacy “as a default” into technological developments—ensuring the information an individual provides is strictly used for the intended purposes—is imperative, according to Cavoukian. Until companies make it abundantly clear they will only use data for the agreed purposes, individuals will have to take responsibility for their own privacy. “We can do both. We have to do both,” Cavoukian asserts. “I want lives saved and privacy protected.”
While AI continues to make strides into almost all aspects of a human’s daily life, there remain many questions about its validity, biases, and effectiveness. There is perhaps no area more private or vulnerable than an individual’s mental health, and it remains to be seen whether letting an intelligent machine into that space will help or hinder. But, says Coppersmith, expect that involvement to intensify.
“I would bet the amount of influence that AI is going to have over mental health is going to increase,” he says. “But I would be shocked if it ever totally replaced that human connection.” Because in its most basic sense, as Coppersmith observes, mental health is shaped and affected through interactions with the world—with humans.
Amy Minsky has worked as a journalist in Canada for more than a decade. Until recently, Minsky was based in Ottawa, covering politics and policy on Parliament Hill. This article is the third in our series on the accelerating impact of AI, and was first developed for the pilot issue of Mai magazine (see maimedia.org).