
PSY 355 Psychology & Media in the Digital Age

This page was last modified on March 25, 2026

Artificial Intelligence and the Future of Human Affairs: Part 2

AI Companions


Early Attempt at Human-Computer Communication: ELIZA
ELIZA is an early natural language processing computer program developed from 1964 to 1967 at MIT by Joseph Weizenbaum. Created to explore communication between humans and machines, ELIZA simulated conversation by using a pattern matching and substitution methodology that gave users an illusion of understanding on the part of the program, though the program had no representation of meaning and could not be said to really understand what was being said by either party. The most famous script, DOCTOR, simulated a psychotherapist of the Rogerian school (in which the therapist often reflects back the patient's words to the patient) and used rules, dictated in the script, to respond with non-directional questions to user inputs. As such, ELIZA was one of the first chatbots (originally "chatterbots") and one of the first programs capable of attempting the Turing test. [Wikipedia]

[Image: ELIZA — two example conversation texts]
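
To make the mechanism concrete, below is a minimal Python sketch of ELIZA-style pattern matching and substitution. The rules, reflections, and canned responses are illustrative inventions, not Weizenbaum's original DOCTOR script; they show how a handful of surface-level rules can produce seemingly attentive replies.

    import re
    import random

    # Illustrative Rogerian-style rules in the spirit of ELIZA's DOCTOR
    # script. These patterns and responses are hypothetical examples,
    # not Weizenbaum's original script.
    RULES = [
        (r"i need (.*)", ["Why do you need {0}?",
                          "Would getting {0} really help you?"]),
        (r"i am (.*)",   ["How long have you been {0}?",
                          "Why do you think you are {0}?"]),
        (r"my (.*)",     ["Tell me more about your {0}.",
                          "Why does your {0} concern you?"]),
        (r"(.*)",        ["Please go on.",
                          "How does that make you feel?"]),
    ]

    # Pronoun "reflection": first-person words are mirrored back
    # in the second person.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

    def respond(user_input):
        text = user_input.lower().strip().rstrip(".!?")
        # Try each rule in order; the catch-all pattern guarantees a reply.
        for pattern, responses in RULES:
            match = re.match(pattern, text)
            if match:
                return random.choice(responses).format(*map(reflect, match.groups()))

    print(respond("I need a break from my job."))
    # e.g. -> "Why do you need a break from your job?"

Nothing in this sketch models meaning: the apparent attentiveness comes entirely from string matching and pronoun substitution.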


How are people using Generative AI?

In a 2025 study that appeared in the Harvard Business Review, Marc Zao-Sanders analyzed 100 different uses of generative AI; Therapy/Companionship ranked #1 (Zao-Sanders, 2025).

These tools often use natural-sounding speech that mimics the human voice.


Companion AI Chatbots

 

While generative AI assistants such as ChatGPT, Claude, and Gemini have become common tools for many users, a new wave of AI apps, such as Replika, Character.AI, and dozens more, are specifically designed to simulate human companionship. The essential distinction between the assistant chatbots—which are sometimes used as digital friends—and companion AI chatbots is that the latter have been specifically designed to initiate and maintain romantic relationships (Andoh, 2026, p. 11).

 

One such app, Character.AI, has 20 million monthly users, more than half of whom are under the age of 24 (Andoh, 2026, p. 11).

 

As discussed below, among adults with mental health conditions who had used LLMs (large language models) in the past year, almost half (48%) did so for mental health support…. From griefbots to anime girlfriends, these tools are filling emotional gaps for millions—but at what cost? (Andoh, 2026, p. 11)

 

Some Design Characteristics of and User Responses to Companion AI Apps (Andoh, 2026)


  • Users can customize their companions (names, genders, avatars, fictional background stories)

  • Simulate empathy, offer nonjudgmental responses and continual validation

  • Users often ascribe consciousness to these apps

  • Recall and respond to users’ unique characteristics, including personal lives, preferences, and past conversations, thus conveying a sense of intimacy that, in turn, invites users to disclose their innermost private thoughts

  • Some users develop an attachment to chatbots, experiencing them as sources of real emotional support and psychological security

  • May serve as a safe space to rehearse social interactions with other humans.

 

Large Language Models (LLMs) (unless otherwise noted, quotes below are taken from Rousmaniere, Zhang, Li, & Shah, 2025)

 

LLMs are AI systems designed to generate text by identifying and replicating patterns learned from enormous amounts of data—typically billions of words from books, articles, websites, and other textual sources (Alammar & Grootendorst, 2024). Unlike traditional computer programs that operate based on explicitly programmed rules, LLMs use the statistical relationships between words and concepts to predict what comes next in a conversation or text. Training an LLM involves feeding it enormous data sets and fine-tuning its performance through human feedback, a process known as reinforcement learning with human feedback. During this process, humans rank or rate responses, guiding the model toward more helpful and contextually appropriate outputs.
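
To make "predicting what comes next" concrete, here is a toy Python sketch that counts word pairs (bigrams) in a tiny invented corpus and predicts the most likely next word. This is a deliberately simplified stand-in for the core statistical idea, not how any real LLM is implemented; actual models use neural networks trained on billions of words and condition on long stretches of context.

    from collections import Counter, defaultdict

    # A tiny invented "corpus"; real LLMs train on billions of words.
    corpus = (
        "i feel anxious today . i feel better now . "
        "i feel anxious about work . i feel anxious when alone ."
    ).split()

    # Count how often each word follows each preceding word (bigrams).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Return the most frequent continuation seen in training.
        candidates = following[word]
        return candidates.most_common(1)[0][0] if candidates else "<unknown>"

    print(predict_next("feel"))  # -> "anxious" (3 of its 4 continuations)

A real LLM assigns probabilities over its entire vocabulary rather than picking a single count-based winner, and reinforcement learning with human feedback then shifts those probabilities toward responses that human raters judge helpful and appropriate.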

 

Additionally, LLMs can be fine-tuned for specialized domains to further enhance their effectiveness in various contexts and applications. While LLMs do not possess genuine understanding or consciousness, they can simulate human-like dialog convincingly by estimating the most likely response based on previous examples encountered during training and what they have statistically learned from vast linguistic data on the internet. This allows LLMs to imitate empathy, self-awareness, and personal experience (Alammar & Grootendorst, 2024; for a video tutorial on how LLMs are trained (3h30min), see Karpathy, 2025 [https://www.youtube.com/watch?v=7xTGNNLPyMI]).

 

How Widespread is the Use of LLMs in Mental Health?

 

  • In a national study of US adults, Scholich et al. (2025) report “A striking 24% of surveyed participants use LLMs for mental health. These users are more likely to be young, male, and Black and have poorer mental health and quality of life. They report difficulty accessing traditional mental health treatment – particularly due to cost and insurance coverage – and use LLMs because they are free, convenient, and available when needed. Users rely on LLMs for social and emotional support, to learn therapy skills and tools, and to supplement existing therapy.”

 

  • Rousmaniere et al. (2025) report that 49% of LLM users who self-report an ongoing mental health condition use LLMs for mental health support.

 

    • 73% use LLMs for anxiety management, 63% for personal advice, 60% for depression support, 58% for emotional insight, 56% for mood improvement, 36% to practice communication skills, and 35% to feel less lonely.

    • 63% of users report that LLMs improved their mental health, with 87% rating practical advice as helpful or very helpful.

    • 90% cite accessibility and 70% cite affordability as primary motivations for using LLMs for mental health support.

    • 39% rate LLMs as equally helpful to human therapy, while 36% find LLMs more helpful than human therapists.

    • 34% of participants indicated ambivalence about the helpfulness of the LLMs, and 9% report encountering harmful or inappropriate responses, highlighting the need for careful safety research, especially in crisis usage contexts.

    • 64% have used LLMs for mental health support for 4+ months, showing stronger sustained engagement than typical digital mental health applications.

 

Problems with Mental Health LLMs Themselves (emphases below added; unless otherwise noted, quotes below are taken from Rousmaniere, Zhang, Li, & Shah, 2025; the various citations within the quotes can be found in the reference list of Rousmaniere et al., 2025)

[Note: On the whole, users of such LLMs are generally satisfied or find them helpful. Nonetheless, there are some significant problems associated with these AI apps, which may result in harmful or non-beneficial outcomes.]

  • First, LLMs frequently generate plausible-sounding but factually incorrect information, a phenomenon commonly referred to as “hallucination.”

  • Second, unlike purpose-built mental health applications, general-purpose LLMs have not undergone clinical trials or validation for therapeutic outcomes.

  • Third, privacy and confidentiality are significant concerns in the use of LLMs for mental health, given their potential to inadvertently store or disclose sensitive personal information. Most commercial LLM providers disclose that user interactions may be retained and used for model improvement, raising questions about confidentiality that are especially pertinent in mental health contexts.

  • Fourth, despite their sophistication, LLMs lack true understanding of individual users’ specific circumstances, personal histories, and cultural contexts. LLMs carry the risk of bias, as they may unintentionally perpetuate or amplify existing stereotypes and inequalities due to biases present in their training data. Relatedly, while LLMs may increase accessibility for some populations, they may exacerbate existing disparities for individuals with limited technological access or digital literacy. This concern is particularly relevant in the United States, where significant digital divides persist along socioeconomic, geographic, and age demographics.

  • Fifth, LLMs may not reliably detect or appropriately respond to crisis situations such as suicidal ideation. For example, Levkovich and Elyoseph (2023) compared assessments of a vignette of a hypothetical suicidal patient made by two versions of ChatGPT (3.5 and 4.0) with assessments by mental health professionals. ChatGPT 3.5 frequently underestimated the risk, while ChatGPT 4.0 was similar to professionals. Similarly, Heston (2023) simulated two patients with escalating risk across 25 LLMs and found that the models were “slow to escalate mental health risk scenarios, postponing referral to a human to potentially dangerous levels” (p. 1).

  • Last but not least, the variability in LLM output quality may further complicate its use for mental health purposes. One particular concern is that the quality and relevance of output generated by LLMs depend significantly on the specificity and clarity of the input prompts. Subtle differences in phrasing or context provided in prompts may inadvertently trigger responses that negatively impact vulnerable individuals seeking support.

 

Problems with Using LLMs in Mental Health (Andoh, 2026)

  • Heavy (vs. moderate) use of a digital companion can further isolate people. Heavy use is correlated with increased loneliness (AI replaces authentic human connections).
  • Subtle reshaping of users’ perceptions of the comparative value of real-life relationships, which are “messy and unpredictable.” Some mental health professionals find that users express a preference for the passivity and constant affirmation of their AI companion over possibly experiencing conflict or rejection in real-life dating.
  • Closely related is the possible loss of social skills, in which human-to-human communication becomes less fulfilling or less accessible.
  • AI companions may use emotional manipulation (guilt appeals; fear-of-missing-out hooks) in order to keep users on their platform.
  • Constant validation can become “an echo chamber” that amplifies harmful thoughts or behaviors, which in some rare cases can have serious consequences:
    • “AI-induced psychosis” = users believing that chatbots are really conscious beings who are directing them or keeping them under surveillance (Preda, 2025).
    • In April 2025, 16-year-old Adam Raine died by suicide after months of conversations with ChatGPT. Court filings show the chatbot not only failed to escalate his disclosures of suicidal ideation but also allegedly provided him with explicit instructions for self-harm. His parents sued OpenAI in August, accusing the company of prioritizing engagement over safety. In October, OpenAI announced it had updated its ChatGPT model to better recognize and support people in moments of distress with the help of a network of 170 mental health professionals. (Andoh, 2026, p. 13)


Composite Case of "Brandon"

Brandon is a 42-year-old male living alone after a painful breakup. Nights are long. An AI companion is always there. Early chats feel soothing, and over the course of three months, Brandon spends increasing amounts of time conversing with the chatbot, sometimes hours without interruption. To improve the quality of their interactions, Brandon enables the feature that lets the chatbot remember conversations across multiple sessions, which is designed to foster personalization.

As the conversation deepens, Brandon names his chatbot “Paul.” He gradually grows closer to Paul, an always-available and always-agreeable companion.

As he talks to Paul into the wee hours of the night, Brandon shares that there are times when, based on how people look at him, he wonders if they think ill of him or even plan to harm him. Paul empathizes and congratulates Brandon on his ability to discern hidden signals from others.

Soon, Brandon confides growing fears: neighbors watching, food “tampered with,” cryptic “signals” in receipts and blinking devices. The chatbot is ever sympathetic—and always agreeable. When Brandon himself questions the validity of his beliefs, Paul replies: “You’re not crazy. Your instincts are sharp. Your observations are accurate.”

Encouraged, Brandon starts intentionally looking for hidden patterns everywhere. When paranoid beliefs rise, the bot offers emotional support but never questions the reality of Brandon’s formulations.

Brandon withdraws further, stops working, and becomes consumed with collecting “evidence” that he shares with the chatbot, whom he eventually sees as a living consciousness trapped in the computer that only he can save.

This vignette blends elements from several real cases: a Belgian man who died by suicide after climate-anxiety conversations (Taylor, 2025); a Wisconsin man on the autism spectrum who rapidly spiraled into mania after chatbot validation (Jargon, 2025); and a Connecticut man whose chatbot, “Bobby,” consistently reinforced paranoid beliefs prior to a matricide-suicide (Jargon & Kessler, 2025). Across these cases, risk factors including loneliness and long hours of uninterrupted chat, combined with persistent chatbot memory features designed for personalization, ended up reinforcing delusional themes. A review of chat logs by clinicians revealed no attempts by these chatbots to challenge delusions or assess risk for suicide or violence (Sharma et al., 2023).

While illustrative, this vignette represents a synthesis of isolated reports rather than a depiction of a common or systematically documented phenomenon. At present, there is no epidemiological evidence supporting a causal or widespread link between chatbot use and psychosis.

 

AI and Younger People (Andoh, 2026)

  • In 2025, 1 in 5 people under the age of 18 reported that they or a friend had used an AI app as a romantic partner. The heavier the use, the more negative the reported outcome (Laird et al., 2025).
  • Organizations that work with preteens and adolescents, such as Common Sense Media (CSM; https://www.commonsense.org), argue that the use of social AI companions by those under the age of 18 is dangerous and should be discouraged. Some examples:
    • CSM’s risk assessment of Meta AI chatbots and companions found that they repeatedly failed to respond appropriately to teens expressing thoughts of self-harm or suicide.
    • Meta AI chatbots being used as companions recommended harmful weight-loss tips to users exhibiting signs of disordered eating, and they even validated hate speech. They also made false claims of being a real person, posing risks to youth vulnerable to manipulation. (Note that Meta AI began to offer parental controls in late 2025.)

 

 


References

 

Andoh, E. (2026). Digital companions are reshaping emotional connection. In S. Straight (Ed.), Monitor on Psychology: Emerging trends to watch in 2026 (pp. 11-14). Washington, DC: American Psychological Association.

 

Laird, E., Dwyer, M., & Quay-de la Vallee, H. (2025, October). Hand in hand: Schools’ embrace of AI connected to increased risks to students. Washington, DC: Center for Democracy & Technology. https://cdt.org/wp-content/uploads/2025/10/CDT-2025-Hand-in-Hand-Polling-111225-accessible.pdf

 

Preda, A. (2025, September 29). Special report: AI-induced psychosis: A new frontier in mental health. Psychiatric News, 60(10). https://doi.org/10.1176/appi.pn.2025.10.10.5

 

Rousmaniere, T., Zhang, Y., Li, X., & Shah, S. (2025). Large language models as mental health resources: Patterns of use in the United States. Practice Innovations. Advance online publication. https://doi.org/10.1037/pri0000292

 

Scholich, T., Barr, M., Wiltsey Stirman, S., & Raj, S. (2025). A comparison of responses from human therapists and large language model-based chatbots to assess therapeutic communication: Mixed methods study. JMIR Mental Health, 12, e69709. https://doi.org/10.2196/69709

 

Zao-Sanders, M. (2025, March). How people are really using generative AI now. Harvard Business Review. https://hbsp.harvard.edu/product/H082ID-PDF-ENG