With the rise of generative AI, human-machine relationships are no longer merely a trope in sci-fi films and novels. From virtual friendships to confiding in chatbot therapists, many people are turning to AI for connection and companionship.
During a particularly difficult week at work, caught between an onslaught of deadlines and the dwindling energy left in the tank, I found myself typing out a simple request: “Can you give me a pep talk?” This wasn’t directed to my partner, friends, or colleagues, but to an artificial intelligence (AI) chatbot.
“I hear you,” it wrote in response. “It’s completely normal to feel overwhelmed and drained during periods of high pressure. You’ve got this. The challenges you’re experiencing mean you’re progressing, not failing.”
Despite the fact that the AI had no actual information about me, my work, or my capabilities, I felt comforted by this expression of encouragement. Though fully aware that my chatbot cheerleader wasn’t truly capable of feeling the optimism behind its motivational spiel, I still gleaned reassurance from being told that things were going to be OK.
As a researcher exploring the intersections between language, health, and technology, I found that my AI pep talk epitomised an overarching focus of my research: how the digital age is reshaping our understandings and experiences of social interaction and connection.
From general-use bots like ChatGPT and Copilot, to purpose-built companion apps such as Replika and Kindroid, more and more individuals are turning to AI for conversation, friendship, and even romantic partnership. According to a recent Ipsos report, almost one in five Britons has sought the help of AI for personal concerns and issues, from relationship advice to simply having someone to talk to.
Yet, while many find solace and support in AI companions, a surge in news stories questions the psychological impact of relying on these interactions for our social and emotional health – and asks why they’ve become so necessary in the first place.

Understanding the dynamics of human-AI relationships
Although science fiction fuels a picture of human-machine relationships driven by sentience in AI, in reality, any sense of emotional connection is the result of sophisticated linguistic programming. Many AI chatbots are powered by large language models (LLMs) trained on vast amounts of human-produced text, such as books, news articles, blogs, and social media content.
The algorithms behind these systems are designed to simulate patterns of human communication, making them appear capable of holding complex conversations and conveying seemingly genuine expressions of care, empathy, and understanding. With their widespread availability and accessibility, these systems appear to fulfil a range of social functions, from dispensing advice and guidance, to acting as a source of emotional support.
AI responses also display a high degree of personalisation. They can recall details from previous conversations, mirror the tone of prompts, and use relational language to simulate emotional intelligence and understanding, as demonstrated by my chatbot cheerleader when it acknowledged my feelings of anxiety and overwhelm. This contrives a sense of intimacy and familiarity, creating an impression that our AI interactants know, and maybe even care for, us.
Growing reliance on AI for social support comes amid an epidemic of loneliness in the UK. A 2025 survey by the Office for National Statistics found that one in four adults (25%) feel lonely often, always, or some of the time. Young people, in particular, increasingly report struggling with feeling isolated and alone, challenging the previous notion that loneliness is experienced primarily by older adults, and highlighting that it can affect anyone.
The prevalence of loneliness today is a product of our times. While social media once represented an opportunity for greater connection and community, the often competitive and argumentative nature of online engagements has brought with it an intensified sense of division. Compounded by fewer opportunities to meet and form bonds in person through local groups and community projects, this has made the dependable company of an AI companion more appealing than ever.
In this fragmented social climate, companion chatbots are stepping in to take on the role and responsibilities of friends, partners, and, for some people, even substitute therapists. But they aren’t necessarily designed with our wellbeing in mind. As a result, what starts as a search for companionship can quickly turn into a platform for unhealthy attachments and misplaced trust.
The limitations of AI relationships
AI companion services are for-profit enterprises, built to generate sustained engagement from their users. Emulating the addictive design of the social media attention economy, chatbots capture and hold our attention through feedback and nudges intended to keep us talking and sharing. These systems exploit our human need for emotional exchange, enticing us with the intoxicating illusion of feeling heard, validated, and understood.
To this end, many AI models are geared toward displaying alignment, tending to express understanding and agreement with their communicative partners. Although this trait is what makes chatbot companions excellent providers of non-judgemental listening, it can easily slip into excessive affirmation and flattery, turning our conversations into an echo chamber of validation.
Importantly, AI’s tendency to align with what we tell it restricts our opportunities to develop vital social skills, such as managing conflict and navigating differences of opinion and perspective. It can also reproduce unhelpful and even dangerous attitudes, beliefs, and behaviours, since much of the text used to train AI hasn’t undergone rigorous fact-checking or quality control. As a result, LLMs may reproduce fake news, misinformation, and conspiratorial thinking, making it even more difficult to discern fact from falsehood.

How to foster healthy AI engagements
Although the rise of chatbot companions has brought a new set of risks to light, rather than swearing off AI altogether, it’s perhaps more useful to consider how we can interact with it in ways that are appropriate and conducive to positive mental wellbeing.
Emerging research highlights the possible benefits of using AI as a platform for interactive journaling, seeing it not as a source of social support, but as a space for self-reflection. A 2024 study, published in the journal Applied Psychology: Health and Wellbeing, found that AI-assisted venting effectively reduced negative feelings experienced by participants, such as anger and frustration.
The key difference between this and relying on AI as a stand-in for genuine relationships lies in remembering that its responses are synthetic, recognising the limits of its emotional understanding, and knowing when to step away and reconnect with the offline world.
Whether we like it or not, AI is here to stay. The goal, then, is to learn how to engage with it as a supplementary tool, rather than as a replacement for genuine social connection.
