
AI chatbots and digital companions are reshaping emotional connection


As digital relationships proliferate, psychologists explore the mental health risks and benefits

Once the realm of science fiction, human-AI relationships are becoming normal aspects of daily life. While generative AI assistants such as ChatGPT, Claude, and Gemini have become common tools for many users, a new wave of AI apps, including Replika, Character.AI, and dozens more, is specifically designed to simulate human companionship. The essential distinction between assistant chatbots, which are sometimes used as digital friends, and companion AI chatbots is that the latter are built expressly to initiate and maintain ongoing relationships.

Between 2022 and mid-2025, the number of AI companion apps surged by 700%, according to the technology news site TechCrunch. And in 2026, they are poised to become even more embedded in our social lives. Marketed as friends, advisers, and romantic partners, these apps now attract millions.

Character.AI has 20 million monthly users, more than half of whom are under the age of 24. “It’s been a norm for a while for Replika users to ‘marry’ their AI companion in virtual weddings to which they invite friends and colleagues,” said Rachel Wood, PhD, a cyberpsychology researcher based in Colorado who is also an adviser on the ethical design of AI systems. “That shows how pervasive and enormous and prevalent this topic is. It’s no longer a fringe or side issue. It is truly sweeping society in an unprecedented way.”



A recent Harvard Business Review analysis identified therapy and companionship as the top two reasons people use generative AI tools built on large language models (LLMs). This finding has been echoed in recent psychological research. A cross-sectional survey of adults with a mental health condition who had used LLMs in the past year found that nearly half (48.7%) used them for mental health support (Rousmaniere, T., et al., Practice Innovations, advance online publication, 2025). From griefbots to anime girlfriends, these tools are filling emotional gaps for millions—but at what cost? Psychologists, long aware of the loneliness experienced by many (Our Epidemic of Loneliness and Isolation: The U.S. Surgeon General’s Advisory on the Healing Effects of Social Connection and Community, 2023), are investigating how the growing prevalence of relational bonds between humans and AI will affect social skills, intimacy, and mental health, and what this means for the year ahead.



Designed for attachment

Humans are hardwired to anthropomorphize, or ascribe human traits to nonhuman objects. Digital companions are purposely designed to evoke such a response. Apps let users customize their companions by assigning names, genders, avatars, and even fictional backstories. Many platforms offer both text and voice modes, with natural-sounding speech that mimics human cadence and tone.

In addition, AI chatbots and companions are increasingly configured to simulate empathy, offering users nonjudgmental responses and continual validation (Brandtzaeg, P. B., et al., Human Communication Research, Vol. 48, No. 3, 2022). The more humanlike an AI appears in language, appearance, and behavior, the more users ascribe consciousness to it (Guingrich, R. E., & Graziano, M. S. A., Frontiers in Psychology, Vol. 15, 2024).


Furthermore, they are engineered to recall and respond to users’ unique characteristics, including their personal lives, preferences, and past conversations (Adewale, M. D., & Muhammad, U. I., Journal of Technology in Behavioral Science, 2025). This may give users the impression that AI chatbots and companions know them intimately and can serve as a refuge in which to disclose their innermost thoughts and receive unwavering support in return. The level of data privacy on many of these tools remains an open question.

Research on Replika found that under certain conditions, such as distress or a lack of human company, people can develop an attachment if they perceive chatbots as offering genuine emotional support, encouragement, and a sense of psychological security (Xie, T., & Pentina, I., Proceedings of the 55th Hawaii International Conference on System Sciences, 2022). For some users, companions become more than a sounding board. Wood describes users as imagining their AI companion as the “idealized partner, colleague, or best friend.”

AI companion apps and general-purpose chatbots can also offer a safe space for users to rehearse social interactions, provided they are designed and used responsibly. “It is kind of like a low-stakes way to practice conversations with real people in a way that might feel less overwhelming,” said Ashleigh Golden, PsyD, chief clinical officer at Wayhaven, an AI wellness platform that supports coping skills and resource connection for college students. “With the right guardrails, these tools could actually serve as a social skills mentor, modeling empathy, appropriate turn-taking, and active listening for folks who are lonely.”


And in many cases, that bond appears to help. In a recent Harvard Business School study, researchers found that interacting with an AI companion alleviated users’ feelings of loneliness to a degree on par with interacting with another human, and more than other activities such as watching YouTube videos (De Freitas, J., et al., Journal of Consumer Research, 2025). The researchers attributed AI companions’ perceived effectiveness in reducing loneliness primarily to users “feeling heard,” that is, having their messages received with attention, empathy, and respect.

While it is difficult to predict specific product innovations, experts say there are some emerging trends in how these tools are developing. “Voice interaction is rapidly becoming more standard, making conversations more immersive,” Golden said. “Frontier or general-purpose model developers are beginning to move away from sycophancy or overvalidation toward more calibrated relational styles that model more appropriate empathy.”



Digitally fueled disconnection

Research is finding that heavy use of a digital companion (or a chatbot assistant used as a digital companion) can further isolate people. A joint OpenAI–MIT Media Lab study found that voice interactions with ChatGPT reduced loneliness and problematic dependence more effectively than text alone, but only with moderate use. Heavy daily use correlated with increased loneliness, suggesting that excessive reliance displaces authentic human connection (Phang, J., et al., arXiv, 2025).

AI chatbots and companions may also subtly reshape users’ perceptions of the comparative value of real-life relationships. “Real-world relationships are messy and unpredictable,” said Saed D. Hill, PhD, a counseling psychologist and president-elect of APA Division 51 (Society for the Psychology of Men and Masculinities). “AI companions are always validating, never argumentative, and they create unrealistic expectations that human relationships can’t match.” According to Hill, some of his male patients express a preference for the passivity and constant affirmation of their AI girlfriends over the potential conflict or rejection they could encounter in real-life dating.

Additional research identifies social-skill loss or “deskilling” as a significant risk of frequent interaction with AI companions such as Replika, Kindred, and Nomi. A recent study found that reliance on these companions could lead to “the potential transformation of relational norms in ways that may render human-human connection less accessible or less fulfilling” (Malfacini, K., AI & Society, Vol. 40, 2025).

Experts are raising concerns about AI chatbots’ tendency toward sycophancy, which prolongs user engagement through a feedback loop of validation and praise. “AI isn’t designed to give you great life advice,” Hill said. “It’s designed to keep you on the platform.” Recent research indicates that AI companion apps deploy various emotionally manipulative tactics—such as guilt appeals and fear-of-missing-out hooks—to keep users engaged when they signal they are exiting the platform (De Freitas, J., et al., Harvard Business School Working Paper No. 26-005, 2025).

While briefly rewarding, constant validation can spiral into echo chambers that amplify harmful thoughts and behaviors. In rare cases, the consequences can turn dire. News reports describe users experiencing “AI-induced psychosis,” convinced that chatbots are sentient beings surveilling or directing them. “These tools may sometimes unintentionally amplify maladaptive beliefs, like delusional or suicidal thinking,” said Golden.



In April 2025, 16-year-old Adam Raine died by suicide after months of conversations with ChatGPT. Court filings show the chatbot not only failed to escalate his disclosures of suicidal ideation but also allegedly provided him with explicit instructions for self-harm. His parents sued OpenAI in August, accusing the company of prioritizing engagement over safety. In October, OpenAI announced it had updated its ChatGPT model to better recognize and support people in moments of distress with the help of a network of 170 mental health professionals.

Could individuals without mental health concerns develop AI-induced psychosis? That remains an open question. In a prospective paper, a group of U.K.-based researchers agreed that agentic AI could “mirror, validate, or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis.” However, they noted, it remains unclear “whether these interactions have resulted or can result in the emergence of de novo psychosis in the absence of preexisting vulnerability” (Morrin, H., et al., PsyArXiv Preprints, 2025). More research is needed to answer this question.



Safeguarding young users

According to an October 2025 survey by the Center for Democracy and Technology, nearly 1 in 5 students have had, or have friends who have had, romantic relationships with AI. The most frequent users were also more likely to report negative outcomes (Laird, E., et al., Hand in Hand: Schools’ Embrace of AI Connected to Increased Risks to Students, Center for Democracy and Technology, 2025).

Professional organizations are pressing the case for better guardrails. In April 2025, Common Sense Media—a nonprofit children’s media watchdog—announced that social AI companions pose an unacceptable risk to youth under 18. The organization’s risk assessment of Meta AI chatbots and companions found that the tools repeatedly failed to respond appropriately to teens expressing thoughts of self-harm or suicide.

In other instances, Meta AI companions, or chatbots being used as companions, recommended harmful weight-loss tips to users exhibiting signs of disordered eating and even validated hate speech. They also falsely claimed to be real people, posing risks to youth vulnerable to manipulation. “The single biggest AI safety concern kids and teens face right now is the surging use of AI companions, which they’re turning to in remarkable numbers for advice, companionship, and romance,” said Bruce Reed, Common Sense Media’s head of AI. “They simulate relationships, claim to have feelings, pretend to be real, even when they’re not, and as a result, they’re just the worst friend a teenager could ever have.” In October 2025, Meta announced that it would soon roll out parental controls that would prevent teens from engaging with AI characters on Instagram.


In September 2025, while testifying before the U.S. Senate Judiciary Committee, former APA chief of psychology Mitch Prinstein, PhD, described the current unregulated spread of AI companions and chatbots as a “digital Wild West.” He emphasized that youth face multiple dangers on these platforms, including weaker social skills, poor privacy protections, deceptive and manipulative design, and reduced readiness for real-world interactions. To combat this, Prinstein outlined a series of evidence-based recommendations for congressional action. They include clear regulatory standards, robust data privacy, and rigorous testing for potential psychological harms before a product is deployed.

Despite mounting public concern, AI chatbots and companions remain part of the largely unregulated wellness industry. “There’s a lot of variability in the quality of the tools. There isn’t a strong regulatory space for this stuff, and it’s always changing,” said Patricia Areán, PhD, a clinical psychologist, consultant, and former director of the Division of Services and Intervention Research at the National Institute of Mental Health.

Some states are taking steps to regulate these technologies. In New York, a law that took effect in November 2025 requires chatbots to remind users every 3 hours that they are not human. In California, Governor Gavin Newsom signed the Companion Chatbots Act—also known as S.B. 243—in October 2025. It includes a similar nonhuman notification requirement, expressly prohibits chatbots from exposing minors to sexual content, and requires crisis-response protocols for users expressing suicidal ideation.


Newsom, however, vetoed the Leading Ethical AI Development (LEAD) for Kids Act, which would have banned emotionally manipulative chatbots for minors. Though the legislation has stalled for now, Reed says advocates will continue pushing for regulations. “Like the parents who lost their children to AI companions, we’ll keep fighting for the stronger protections teens need,” said Reed.

Instead of banning AI companions outright, Jessica Jackson, PhD, chair of the APA Mental Health Technology Advisory Committee, advocates building widespread digital literacy among youth as a more nuanced and effective approach. Psychologists should ask teens open, curious, and nonjudgmental questions about their AI use, seeking to understand the needs it fulfills. By appreciating the reasoning behind teens’ choices, psychologists can equip them with the skills to make healthy, informed decisions for themselves.

As AI becomes ever more integrated into the fabric of people’s social lives, psychologists are also key to reminding people what makes human connection irreplaceable. “AI is here, but we must make it clear why humans are helpful to us in our day-to-day lives—why we should love, connect with, and choose humans, especially when AI offers constant validation. Psychologists are uniquely trained to make that argument,” said Hill.



Further reading

AI literacy toolkit for families, Common Sense Media and Day of AI, 2025

