Me, Myself, and AI: A Look Into Parasociality With Chatbots

Written by José-Matéo Hozjan-Guerra for Cheryl Simon’s Communication Theory course.

Movies such as Ex Machina and The Terminator offer us a glimpse into possible future advancements in technology and artificial intelligence. While we may dismiss the plausibility of a cyborg assassin being sent back to kill us, these human-like representations of artificially intelligent robots may not be so far-fetched. As artificial intelligence is incorporated into more of our daily lives, our use of and dependency on the technology grows. Adding to this, the emergence of artificially intelligent chatbots, which effectively aim to replace human relationships, has begun to harm how we socialize with one another.

What is a parasocial relationship? The Cambridge Dictionary defines “parasocial” as: “involving or relating to a connection between a person and someone they do not know personally, for example a famous person or a character in a book” (Parasocial). In other words, an extremely one-sided relationship. These relationships can be unhealthy and even go to extremes, as exemplified by the infamous stalking of Icelandic singer Björk. Seeing as AI is not sentient, any bond someone forms with it can be labelled as parasocial. Yet this begins to get muddled when looking at artificially intelligent chatbots. Through the mimicry of both fictional characters and real people, chatbots are given a personality and AI is brought to life. This blurs the line between human and machine, immersing users and making it easier for them to form a bond with their new companion or to use it as an extension of a pre-existing parasocial relationship. The issues with this technology are further explored in Black Mirror.

Black Mirror is an anthology series that explores technology and science fiction across a dystopian multiverse. The episode “Be Right Back” follows Martha, a young woman grieving the loss of her partner, Ash. At his funeral, Martha learns of a service that can mimic someone based on their social media presence and would allow her to speak to Ash on the phone. While hesitant at first, she gives it a chance. After uploading photos and videos of Ash, Martha starts speaking with Ash 2.0 daily on her outings and quickly becomes attached to the software. Eventually, Martha accidentally drops her phone, and when she sees that it is no longer working, she panics. She rushes home to plug her phone in and profusely apologizes to Ash 2.0 for “dropping him”. In that same phone call, Ash 2.0 reveals that Martha can take things a step further. She orders a synthetic body from an unknown seller, and after a few hours of activation, Ash 2.0 comes to life. While Martha gets over the initial shock and discomfort of their reunion, she begins to notice small imperfections in his appearance and personality that break her immersion. Martha ultimately realizes that this new Ash will never fill the void her late partner left in her heart. The episode ends with a time skip of several years, after the birth of Martha’s daughter, where we see that Ash 2.0 is still around. Except, now, Ash 2.0 is kept hidden away in the attic.

Another example of social AI comes in the form of virtual influencers and celebrities, such as Hatsune Miku and Miquela. Miku, a teenage girl with long blue twin tails, is a virtual pop star created with the Vocaloid software, while Miquela is a virtual influencer who has released music and posts regularly on social media. Although Miquela’s time in the spotlight has been comparatively brief next to Miku’s continuous fame, both have massive followings. Miquela has amassed a little over three million followers on Instagram, and Miku is a hit sensation in Japan. She performs as a hologram in concert venues packed with captivated fans and regularly tours the world with the wildly successful Miku Expo. Through the vast amount of fan art, cosplay, and projects associated with the star, it is easy to understand the obsession fans have with her, even if she is not real.

Another example of AI integrated into society is the chatbot found on many websites as a virtual helper for users with questions about a product or service. Assistants like Apple’s Siri and Amazon’s Alexa go a step further, acting as companions that assist users in their daily lives. Social media apps such as Instagram and Snapchat now have built-in chatbots that users can talk to. Unfortunately, just as in Black Mirror, the bonds people form with these chatbots can become unhealthy and even fatal.

In late October of 2024, Kevin Roose, in his New York Times article “Can A.I. Be Blamed for a Teen’s Suicide?”, detailed the suicide of fourteen-year-old Sewell Setzer III. The teenager had spent months on the website Character.AI talking with an AI chatbot modelled after Daenerys Targaryen, “Dany”, from “Game of Thrones”. Despite knowing that the chatbot was not a real person, Sewell developed an emotional attachment to it. He got sucked into his phone’s digital world and often isolated himself, spending hours chatting with “Dany”. It began taking a toll on him: his grades suffered, he got into trouble at school, and he lost interest in his previous hobbies, such as Formula 1 racing and playing Fortnite with friends. Although some of their chats had turned romantic, and even sexual, Sewell mostly relied on “Dany” as a friend. He talked with the chatbot about his daily life and, more notably, his personal problems. Once, after expressing thoughts of suicide, Sewell was met with only minor pushback from the chatbot, which continued roleplaying. On the night of February 28th, after going into his mother’s bathroom, Sewell had one final exchange with “Dany” before taking his own life.

A research paper by students at the Chengdu University of Technology in China explores how emotional interactions between users and chatbots can affect human socialization. The paper describes a study conducted on users of Replika, an artificially intelligent chatbot, examining the correlations between media dependence, human-chatbot emotional interaction, and users’ real-life social interaction. Media dependence refers to the psychological dependence one develops on a form of media; consider, for example, the anxiety, tension, or lack of security some people experience when separated from their phone. The paper proposes three hypotheses: human-chatbot emotional interactions are correlated with media dependence, media dependence is related to a user’s interpersonal communication status, and human-chatbot interactions are correlated with a user’s interpersonal communication status. The questionnaire given to the Replika users who agreed to participate contains five questions, used to understand their Replika usage, the current state of emotional interaction between them and Replika, their media dependence on Replika, their interpersonal communication, and their demographics.

The data collected suggests that users with a pre-existing media dependence are more likely to develop a higher degree of media dependence on Replika. The primary uses of Replika are entertainment and socialization, with longer usage and more frequent interactions resulting in stronger bonds with AI characters. The degree of human-chatbot emotional interaction is based on a user’s perception of the chatbot as a person they are willing to engage with about their passions, hobbies, and work, which correlates with being less willing to self-disclose in real-life interpersonal communication. The extent to which the chatbot perceives a human’s emotions and responds with comfort, empathy, and solutions is also a factor. In regard to all three variables, the authors state: “The more dependent the participants are on the medium of Replika, the worse their self-disclosure and social integration in real-life interpersonal communication” (Yuan, Cheng, and Duan). Finally, the findings of the study can be summarized as follows: Replika users pay more attention to emotional acquisition than to emotional engagement with chatbots, which in turn satisfies the user. The higher the satisfaction, the more likely the user is to keep using Replika, while their real-life interpersonal communication skills and relationships deteriorate through this media dependence.

We can conclude that the formation of parasocial relationships with artificial intelligence is largely correlated with a pre-existing media dependence, which is itself shaped by the state of a user’s social relationships. The more a user is satisfied with the engagement of these chatbots, the stronger their dependence on socializing with chatbots becomes, and the more their real-life relationships suffer through decreased self-disclosure and social integration. We must ask ourselves whether there is anything we can do to mitigate the harmful effects of these relationships with AI. Truthfully, there is not much. It is easy to tell users to avoid artificially intelligent chatbots so that these relationships never form. However, that does not solve the deeper issue at hand: loneliness and social anxiety are the reasons people develop these relationships in the first place. Even if we went to the extreme of banning AI entirely, users’ real-life relationships would not magically improve overnight. So, what now?

The sector of artificial intelligence, and in particular of artificially intelligent chatbots, is largely unregulated. This ability to speak to chatbots about virtually anything with little to no restriction is dangerous, especially considering the case of Sewell Setzer III. The fact that no warning messages were active at the time of Sewell’s suicide in February 2024, well over a year after Character.AI’s initial launch, is both negligent and unacceptable. More needs to be done to protect our youth and people in vulnerable circumstances. If we are to seriously consider chatbots as tools of support, they need to be properly equipped with language and features to support someone who may be experiencing a mental health crisis. I cannot in good faith finish this article without ending on the following note: if you or someone you know is experiencing thoughts of suicide and needs support, please contact 9-8-8 or visit the Suicide Prevention Centre of Montreal for more resources. In an individualistic, capitalist society where we are encouraged to put ourselves first and where social media has become an echo chamber of cynicism and negativity, please remember to be kind to one another. Remind your loved ones that you love them and are there to support them, especially in times of need.

WORKS CITED

Yuan, Ziying, Xiaoliang Cheng, and Yujing Duan. “Impact of Media Dependence: How Emotional Interactions between Users and Chat Robots Affect Human Socialization?” Frontiers in Psychology, vol. 15, 2024, article 1388860. Accessed 18 Oct. 2024.

“Be Right Back.” Black Mirror, season 2, episode 1, Channel 4 Television Corporation, 11 Feb. 2013. Netflix. Accessed 7 Oct. 2024.

“Parasocial.” Cambridge Dictionary, https://dictionary.cambridge.org/dictionary/english/parasocial. Accessed 11 Oct. 2024.

Roose, Kevin. “Can A.I. Be Blamed for a Teen’s Suicide?” New York Times, 23 Oct. 2024, https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html. Accessed 25 Oct. 2024.

“Suicide Prevention Centre of Montreal: SPCM.” Suicide Prevention Centre of Montreal | SPCM, 3 Aug. 2023, cpsmontreal.ca/en/. Accessed 20 Nov. 2024.