
AI, neuroscience, and data are fueling personalized mental health care

New technologies integrate mobile device data and brain scans to deliver individualized treatment




  • Psychologists are using a patient’s brain scans plus data from phones and wearables to determine the best intervention before beginning treatment—bypassing trial-and-error and improving outcomes.

  • During the course of therapy, AI tools can analyze vast amounts of patient data from apps that track sleep and movement, for example. These analyses help therapists and patients identify patterns, provide more timely guidance, and steer therapy decisions.

  • Generative AI chatbots like Therabot deliver personalized mental health support when symptoms spike, offering scalable care amid provider shortages.



In the past several years, researchers have started pioneering strategies that utilize personal data from phones, watches, and fitness trackers as well as health records and brain scans to more accurately select the most effective treatment for individuals—essentially bypassing the trial-and-error phase. This data can include metrics about everything from sleep and social connections to brain circuitry patterns and suicidality. Psychologists are also starting to explore how AI could use this personal data—shared only with the individual’s permission—to help people identify patterns that might otherwise go unnoticed. These discoveries can in turn help clinicians and individuals pinpoint the ideal evidence-based solution for an individual experiencing anything from panic attacks and insomnia to depression and anxiety.



Historically, psychologists have relied on patients’ self-reported symptoms and history to diagnose mental health conditions, and this is often followed by a potentially lengthy period to determine which treatment—if any—improves the symptoms. This approach has delayed relief for many people with a variety of mental illnesses, and those with limited access to providers were often at even greater risk of negative outcomes.

“These new solutions combine the promise of precision treatment with the power of personalized care through AI,” said Zachary Cohen, PhD, director of the Personalized Treatment Lab and an assistant professor of psychology at the University of Arizona. “This has the potential to bring scalable, evidence-based, just-in-time treatment to individuals throughout the nation and world.”



Patients could use these innovations in collaboration with clinicians, and, for those who struggle to access care, these tools could provide tailored treatment through their devices. With more than 50% of psychologists reporting that they did not have openings for new patients in a recent APA Practitioner Pulse Survey, researchers leading these efforts feel a sense of urgency to design tools that are both safe and effective to serve people who lack access to care. “The reality is that we cannot train enough clinicians to meet the needs that exist today,” said Cohen. “We have to explore things that can help fill the gap, and failure to do so is irresponsible.”



The power of passive sensor data

Although smart devices are constantly collecting a wide variety of data about users such as movement, talk time, and heart rate, this information is not typically leveraged to help people address mental health conditions. Now researchers are using large language models (LLMs) to synthesize multiple data streams and identify potential behavioral health concerns. “Historically, therapists have been limited to what a patient remembered on a particular day,” said Margaret Morris, PhD, a clinical psychologist and an affiliate associate professor in the Information School at the University of Washington. “Now we are exploring how to bring outside life into treatment to help patients understand patterns in their lives and the choices they have.”



Morris recently collaborated on a study in which ChatGPT analyzed sleep, steps, incoming and outgoing calls and texts, distance traveled, time spent at home, and more to highlight clinically relevant insights for mental health professionals (Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 8, No. 2, 2024). The researchers expected that clinicians would be interested in using this information to establish a working diagnosis before meeting with a new patient. Instead, clinicians in the study expressed a desire to use the tool in collaboration with patients to discover patterns in behavior, integrate those insights with the concerns that brought patients to therapy, and inform treatment.
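As a rough illustration of how such multistream sensor data might be packaged for an LLM, the sketch below formats a daily passive-sensing log into a clinician-facing prompt. The field names, example values, and prompt wording are all invented for illustration; they are not drawn from the study.

```python
from dataclasses import dataclass

@dataclass
class DaySummary:
    """One day of passive-sensing metrics (illustrative fields only)."""
    date: str
    sleep_hours: float
    steps: int
    calls: int          # incoming + outgoing
    hours_at_home: float

def build_prompt(days: list[DaySummary]) -> str:
    """Format a passive-sensing log into a prompt that asks the model for
    clinically relevant patterns, not diagnoses."""
    rows = "\n".join(
        f"{d.date}: sleep={d.sleep_hours}h, steps={d.steps}, "
        f"calls={d.calls}, home={d.hours_at_home}h"
        for d in days
    )
    return (
        "You are assisting a licensed clinician. From the passive-sensing "
        "log below, describe notable behavioral patterns (sleep, activity, "
        "social contact, time at home) as discussion points for therapy. "
        "Do not diagnose.\n\n" + rows
    )

week = [
    DaySummary("2024-05-06", 5.1, 900, 0, 22.5),
    DaySummary("2024-05-07", 7.8, 6400, 5, 11.0),
]
prompt = build_prompt(week)
```

The prompt would then be sent to an LLM; the key design point, consistent with the clinicians' preference described above, is framing the output as conversation starters for the patient and therapist rather than as a working diagnosis.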



For example, with this approach, an individual struggling with anxiety could see that insomnia and anxiety peaked on the days they did not leave home. “The therapist could invite the patient to see the downsides of avoidance as a coping strategy, specifically how staying home might remove opportunities for contact and exercise that could potentially alleviate anxiety,” said Morris. They could explore small shifts to the patient’s routines, perhaps starting the day at a café or with a walk. In later sessions, they could talk about how the experiments went, what feelings came up upon leaving the home, and note any changes in anxiety, sleep, and other metrics.



Sensor data are also being used to detect when someone has been a target of discrimination, which can increase risk of suicidality, substance use, poorer academic performance, and other negative outcomes (Cheref, S., et al., Suicide and Life-Threatening Behavior, Vol. 49, No. 3, 2019; Desalu, J. M., et al., Addiction, Vol. 114, No. 6, 2019). Researchers at the University of Washington found that after unfair treatment, students spend more time off campus and less time indoors on campus, which reflects a pattern of social withdrawal from campus life. They are also more active in the evening and later at night. The data also suggested that problematic phone use is more common after experiencing discrimination, such as longer screen times in the afternoon. “Psychologists are helping us understand how to convey the detection to students and what the interventions might look like,” said Anind Dey, PhD, a professor in the Information School at the University of Washington and author of the upcoming study.



Treatment just in time

The latest advancements in AI technology are not only tailoring interventions based on personal sensor data but pinpointing the ideal time to provide support. Cohen and his collaborators are applying for funding for a study that predicts who is at risk of depression using data about heart rate, physical activity, sleep, mood, and more. If the data suggest that the risk is high, the individuals would have access to a preventative digital therapy supported by a chatbot specifically designed by psychologists and psychiatrists.
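A minimal sketch of the just-in-time idea: score recent sensor features for depression risk, and offer the digital intervention when the score crosses a threshold. The weights and threshold here are entirely invented for illustration; a real study would fit them from data.

```python
def depression_risk_score(resting_hr: float, sleep_hours: float,
                          daily_steps: int, mood_rating: int) -> float:
    """Toy linear risk score in [0, 1]. Higher resting heart rate, less
    sleep, less activity, and lower self-rated mood (1-10) raise the
    estimate. Coefficients are illustrative, not fitted."""
    score = 0.0
    score += max(0.0, resting_hr - 70) * 0.02
    score += max(0.0, 7.0 - sleep_hours) * 0.15
    score += max(0.0, (5000 - daily_steps) / 5000) * 0.3
    score += (10 - mood_rating) * 0.05
    return min(score, 1.0)

def should_offer_intervention(score: float, threshold: float = 0.6) -> bool:
    """Trigger the chatbot-supported digital therapy when risk is high."""
    return score >= threshold
```

In practice the predictive model would be far richer than a hand-weighted sum, but the control flow, continuous monitoring feeding a threshold that gates a timely intervention, is the core of the just-in-time design.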

A team at Dartmouth started developing this chatbot, known as Therabot, in 2019, and recently published results of the first clinical trial of a fully generative AI chatbot. The study found that the software led to significant improvements in symptoms for people with major depressive disorder, generalized anxiety disorder, or at high risk for an eating disorder (Heinz, M. V., et al., NEJM AI, Vol. 2, No. 4, 2025). “Our current diagnostic models paint a picture that there is a constant flow of symptoms, but symptoms can change rapidly day to day or within a day,” said Nicholas Jacobson, PhD, a clinical psychologist and an associate professor of biomedical data science, psychiatry, and computer science at Dartmouth College. “If you can monitor and predict ebbs and flows in symptoms, then you can deliver digital interventions at the right time.”

While digital therapeutics (DTx) that deliver evidence-based solutions via text are not new, studies have found that users often lose interest in these tools because the technology is not as personalized and engaging as a generative AI application (Nwosu, A., et al., Frontiers in Psychiatry, Vol. 13, 2022). Chatbots such as Therabot have the potential to improve the effectiveness of digital therapeutics by integrating human-like conversation into these tools. And unlike a general LLM tool like ChatGPT, Therabot—which was monitored by a mental health clinician—is evidence based and is less likely to simply agree with whatever the user says, a tendency implicated in some recent tragic cases.

For example, if someone with anxiety tells Therabot about feeling nervous and overwhelmed, Therabot could validate that these emotions are common and invite the user to describe a specific triggering situation that brings up these feelings. If the individual mentions fear of being judged during an upcoming presentation, Therabot could introduce a graded exposure exercise that would help the user gradually confront the feared presentation: “Let’s start by visualizing the situation for 30 seconds then gradually increase the duration or detail.” Therabot could then encourage the individual to pause, identify the thoughts driving this fear, and ask if there is evidence that they’ll be judged. It could then guide the person through cognitive restructuring, followed by a brief rehearsal of the presentation.



Therabot was trained on most major clinical problems, including serious mental illnesses, though it would likely be limited in cases where psychotherapy tends to be less effective, such as schizophrenia, said Jacobson. The results of the study of Therabot users showed that people diagnosed with depression experienced a 51% average decrease in symptoms after using the tool for 8 weeks, and people with generalized anxiety disorder experienced a 31% reduction in symptoms. Participants at risk for eating disorders showed a 19% average reduction in concerns about body image and weight. These results are comparable to cognitive therapy outcomes with outpatient providers. The findings also suggest that people felt a collaborative bond with Therabot. “The therapeutic alliance was high and neared what norms look like for the outpatient setting,” said Jacobson, senior author of the Therabot study.



His team is involved in a new national AI Institute funded by a $20 million grant from the National Science Foundation. The researchers plan to implement AI into devices and wearable sensors that can provide users with personalized assessment and intervention. In the first year, they will focus on interventions for major depressive disorder. In the second year, they’ll explore how physiological, environmental, and neural connectivity data can help prevent relapse and support long-term recovery for people with substance use disorders.



Peeking into the brain to personalize treatment

While generative AI primarily uses text-based LLMs, the latest large multimodal models (LMMs) combine text, images, and audio to understand data in a more comprehensive way. Sensor data could eventually be combined with data from brain scans and other health records to personalize treatment selection for an individual.

Smartphone data, processed and analyzed using an LMM, could help someone become aware that their severe sleep problems, slow information processing, and difficulty making decisions could be predictive of changes in brain circuits associated with specific mental health conditions. Researchers at Stanford University have studied fMRI data from thousands of patients with depression and identified at least six “biotypes” of depression, each linked to distinct patterns of dysfunction in brain circuitry.
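One simple way a model could assign a patient to a biotype is nearest-centroid matching over circuit-activity features. The feature names and centroid values below are invented for illustration; the Stanford work derives its biotypes from fMRI, not from this scheme.

```python
import math

# Hypothetical circuit-activity centroids (z-scored activity levels).
# Negative cognitive_control models the reduced activity that
# characterizes the cognitive biotype described in the text.
BIOTYPES = {
    "cognitive": {"cognitive_control": -1.2, "reward": 0.0},
    "reward":    {"cognitive_control": 0.0,  "reward": -1.1},
}

def assign_biotype(profile: dict[str, float]) -> str:
    """Assign the patient to the biotype with the nearest centroid
    (Euclidean distance over shared features)."""
    def dist(centroid: dict[str, float]) -> float:
        return math.sqrt(sum((profile[k] - centroid[k]) ** 2
                             for k in centroid))
    return min(BIOTYPES, key=lambda name: dist(BIOTYPES[name]))
```

Under this sketch, a patient with markedly reduced cognitive-control activity would land in the cognitive biotype, which is the group the text describes routing toward its matched treatment pathway.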

The LMM could then help identify individuals with the cognitive biotype, a form of depression characterized by reduced activity in the cognitive control circuit. Generative AI could be trained to help guide these individuals to the most effective treatment pathway and connect them with the right providers. “Currently we expect patients to figure out on their own whether they should go to a primary-care doctor, psychiatrist, psychologist, or other mental health specialist,” said Leanne Williams, PhD, a professor of psychiatry and behavioral sciences at Stanford University and director of Stanford Medicine’s Center for Precision Mental Health and Wellness. “This is an exciting chance to transform outcomes by helping people achieve remission earlier, which ultimately means saving lives.”



In a new study, she found that precision biotyping, or using each person’s unique brain profile to identify who met criteria for a cognitive biotype, led to significantly higher rates of remission compared with usual care with conventional antidepressants (Nature Mental Health, in press). People with the cognitive biotype received a drug used to treat hypertension that boosts activity specifically in the cognitive circuits in the brain. In the study, 86% of participants achieved remission, meaning their symptoms returned to the healthy range. Williams’s team is now testing a treatment for a different circuit biotype using pramipexole, a drug used for restless leg syndrome and Parkinson’s disease that boosts activity specifically in the reward circuits in the brain.

Although personalized data could usher in an era in which people can access tailored treatment from clinicians, chatbots, or a combination of the two, more research is needed to evaluate whether the enthusiasm is supported by clinical results. “AI tools can be used to increase access to personalized support, but there are significant ethical and safety questions with this type of precision mental health that we must continue addressing,” said Kate Bentley, PhD, an assistant professor of psychology at Harvard Medical School and director of the Suicide Prevention Research Program at the Massachusetts General Hospital Center for Precision Psychiatry.



In the Therabot study, the team had to be equipped to intervene immediately if a patient expressed acute safety concerns, such as suicidal ideation, or when the chatbot did not respond with best practices, Jacobson said. The U.S. Food and Drug Administration (FDA) regulates treatments such as antidepressants, and while some DTx such as DaylightRx and Rejoyn have already been cleared by the FDA, the study evidence requirements for these products can be inconsistent. DTx are being evaluated and regulated under frameworks originally designed for traditional medical devices rather than software-based applications. “If you think about mental health chatbots, that is even more complicated and dynamic,” said Cohen. “These tools really need oversight and regulation, but this will be incredibly challenging to do.”



Vaile Wright, PhD, senior director of APA’s Office of Health Care Innovation, was recently invited to provide witness testimony to the House of Representatives about the application of AI in mental health care. “We are not at a point where these chatbots can operate completely independently,” she said. “We are asking for federal regulations for using AI therapeutically.”

With appropriate regulation and research, AI has the potential to provide better access to better treatments for millions of people, and pioneers in the field urge psychologists to take a leading role in both creating this technology and educating patients about it. “Many chatbots are optimized to drive engagement, which is different than the goal of keeping someone safe or improving mental health,” Cohen said. “Psychologists are mandated to put the patient’s best interests first, and they need to be involved in the development and regulation of these products to maximize the chances of providing responsive, precise care that improves mental health.”


