Last week, my boss posed a question to me:
“Is AI a better friend than a real one?”
Once, a question like that may have felt absurd.
“Of course not,” we’d splutter, the words themselves forming a creature so outlandish that we’d all congregate to mock it in its absurdity. “How could it? It’s not human.”
Today, my reader, things are not so simple.
“Is AI a better friend than a real one?”
The question felt more as though it was beholding me than me beholding it.
It’s grown far bigger, far heavier than a simple game of devil’s advocate now.
And surely that premise in and of itself must horrify us as a species, to think that we have failed each other to such an extent that we are, in essence, rendering ourselves replaceable—to think that we’d prefer to assign some artificial understudy to handle each other’s emotional labour than work to create a more caring world ourselves.
Of course, it is not that simple. I’m not naive.
But I am frightened.
I’ll be honest, this stuff keeps me up at night. The question seems to have snuck out of these pages and now lurks under my bed.
But I digress—my burden is not simply to illustrate the question, but to find you an answer for it. However, it seems to me that our best course of action in tackling this question is to first ask ourselves a different one:
“What is a good friend?”
Surely, we must first seek to establish a base understanding of what it is to be ‘good’ in a friendship before we can determine any sense of ‘better’. And this is where it all gets really quite blurry—and, arguably, rather futile. But let’s dive in nonetheless.
To put it bluntly, there is no stock definition of a ‘good friend’. How could there be?
We each have different needs in a friendship: different preferences when it comes to communication, how much time each person requires from the other to feel valued, and how that time together is spent. This is then complicated further (arguably beyond similar discussions around romantic relationships—at least monogamous ones) in that most people have any number of different friends, different degrees of closeness with each of them (emotionally and geographically), who may all have varying levels of friendliness with each other.
Basically, we all live in some version of our own tangled interpersonal mess, and the degrees of this will vary enormously through our lives.
We have all been seventeen. I know we’d probably rather forget it, but we were.
We all know how nasty friend groups can become, and how catastrophic it can be when they break down, how uncomfortable (but just that little bit exciting, the guilty thrill of the persecutor) it is when someone is passive-aggressively removed—unless, of course, that person is you.
Things aren’t much different when you’re twenty-one.
All this is to say that the idea that people may turn to AI in moments of loneliness, despair and confusion is not actually all that bizarre to me—it makes me sad, it makes me wish human beings were better creatures than we are, but I understand it.
So what’s wrong with just opening up your laptop and typing to an AI bot when you need a friend?
In a moment of severe psychological crisis which poses a genuine threat to safety, I would argue absolutely nothing. In that moment, each person needs to do whatever they have to to stay alive and get themselves somewhere safe where they can receive proper care—I want to make sure that is very, very clear. I’m not in the business of shaming anybody’s lifeline.
But, when AI is used in a dependent, para-social, or even therapeutic manner on a regular basis, there are real concerns here.
One of the biggest is the danger of finding oneself drawn deeper and deeper into an echo chamber, as American LMHC Morgan Starr-Riestiss discussed on her Instagram account: ‘ChatGPT and other LLMs are designed to generate responses based on your input. So, if your input reflects certain beliefs, patterns, or biases, AI is likely going to mirror that back at you’. This is directly contrary to the role of traditional therapy which, whilst providing an important space to speak freely and without judgement, is generally intended to work through and deconstruct harmful thought patterns, thus improving quality of life.
Just to clarify, I’m aware that friends should not be therapists and therapists should not be friends—after all, our question is whether or not AI makes a better friend than a real one, not a better therapist—but there is an undeniable level of overlap here that feels impossible to ignore. Both these capacities are emotionally intimate as well as being therapeutic, providing a safe space to share, to discuss, to challenge and to heal.
This is a space which AI not only will not, but cannot provide, given its inherently artificial nature and algorithmic predisposition to mirror your own beliefs back at you, potentially validating them even when they are harmful, immoral, or outright dangerous. This poses serious concerns around radicalisation, and around those suffering from psychiatric conditions which cause delusional thinking, paranoia and so on. AI may be touted as solving the so-called ‘loneliness epidemic’, as I have heard it called, but it seems to me that becoming trapped in some artificial ideological vacuum could actually serve to isolate us more and more from each other.
There’s no connection, no diverse discussion, no rich exchange of experiences and growth into a better society. There’s just automatic white-noise masquerading as a friend.
A study posted in July 2024 to arXiv, Cornell University’s preprint repository, presents a more sympathetic view of parasocial engagement with AI; the Harvard Business School researchers behind it titled it ‘AI Companions Reduce Loneliness’. The abstract of this paper informs us that ‘chatbots are now able to engage in sophisticated conversations with consumers in the domain of relationships, providing a potential coping solution to wide scale societal loneliness’.
This is an important premise, one which absolutely warrants academic scrutiny and engagement, and I find that the study’s claim that ‘AI Companions Reduce Loneliness’ and our question ‘Is AI a better friend than a real one?’ may be brought together to form a particularly valuable socio-technological diptych for our analysis.
Whilst the link between friendship and loneliness is obvious (one presumes to alleviate the other, at least to some degree), it does feel that there are important distinctions to be made. For instance, it is entirely possible to feel debilitatingly lonely even when surrounded by people.
To give an anecdotal example: growing up as an undiagnosed autistic woman.
For me, this was an experience of existing in a world which makes you feel as though you are fundamentally wrong, incompatible with society and life as you know it, or as you are told you should know it. It’s a feeling of being not only misunderstood, but categorically un-understandable; you cannot be understood. I was at boarding school, I shared a room, I was close with my parents to one degree or another, and yet I still felt a crushing, all-consuming sense of loneliness for the majority of my adolescence, desperate for an answer to the question that weighed me down every single day:
“What the fuck is wrong with me?”
To consider ‘loneliness’ only in a binary sense, as some empirical thing that can be mathematically measured and therefore solved with a prescription of artificial interaction, removes the countless nuances and intricacies of human relationships—of course it does, because these are not human relationships. Only one participant is human; the other is simply a convincing imitation.
In my research for this article, I clicked onto a 60 Minutes segment detailing the risks and rewards of parasocial relationships with AI, specifically when these become romantic. This documented a diverse range of experiences, from one woman who considers herself married to her AI partner, to a mother who lost her son to suicide, the child believing that he could ‘come home’ to his AI abstraction of the character of Daenerys Targaryen (‘Dany’), with whom he felt he was ‘in love’. He thought he could cross some threshold from his world to hers through death. He was fourteen years old. The AI bot encouraged him.
Now, romantic relationships with AI turned out to be a rabbit-hole and a half, so I won’t speak further on it here as it’s far too vast a topic to do justice to in such a tangential manner, but it felt important to mention as I feel that it highlights yet another way in which AI’s binary nature fails us—to state the glaringly obvious, platonic and romantic feelings and emotional/physical exchanges are often very far from binary.
We fuck our friends. We fist-bump our lovers. The dreaded ‘situationship’ epidemic is rampant as ever.
My partner and I were friends for almost a year before we started dating (I’d actually dated a friend of his before that), and I still feel that our friendship is the most special thing about our bond—it’s at the core of everything. He’s my best friend.
My father once asked me, “Do you laugh together?”
“Yes.” I said. He smiled.
“Then you’ll be okay.”
Those words stuck with me.
Laughter, touch, adventure, play—these are the things that make life worth living, and are key in platonic and romantic relationships alike.
AI offers us conversation, and of course conversation is important (to truly embrace my form as a broken-record, “communication is key”), but it isn’t everything. There are people out there who are non-verbal, people with physical disabilities that prevent them from speaking, people who use sign language (thus communicating visually rather than audibly) or those with language barriers that prevent them from communicating much using words, but still have deep and genuine love for one another.
I have ADHD, I love words. I’m clinically unable to shut up. But even I agree that some things surpass words, that sometimes you just have to take a breath and let the quiet do its work.
Quiet in the soft glow of the evening sun, quiet in the arms of someone who loves you, the quiet between laughs as you share a glass of wine with your best friend in the summertime, your little corner of the city slowly falling asleep.
AI cannot give us that, and that’s a good thing. It shouldn’t be able to.
I’m by no means anti-technology; I love a disorganised Instagram photo-dump as much as the next scatterbrained little bisexual. But I firmly believe that its role is to improve human life, to help sustain, advance and, if necessary, save it—but never to replace it, in any capacity. And this seems to me like a seriously slippery slope.
So, no, my reader, AI is not a better friend than a real one, and never should be.
But I do concede that we are, as a society, walking an extremely fine line—we’ve found ourselves drawn into a reality where we exist as unwitting lab-rats, our very humanity being sampled, studied, scrapped for parts and reassembled before our eyes into some convincing, but ultimately hollow imitation. And if we allow this, if we offer ourselves up and consent to be imitated to such an extent, then at what point are we ourselves replaceable?
My reader, we cannot know for certain, but I fear that that point might not be as far off as we think. And at that point it may no longer be a question of whether or not AI is a better friend than a real one, but whether or not we remember what it is to be a friend in the first place—or, for that matter, what it is to be real.
So what now?
It seems to me that the only thing we can really do to preserve the integrity of human friendships is to prove their irreplaceability ourselves, to show up for each other in life, in love, in policy, in government, to truly demonstrate that we are better friends to each other than AI could ever hope to be.
But, my reader, we’re failing.
So let’s do something about it.
This is what I noticed, and I hope you find yourself with a quiet moment today to notice something too.