From AGI with love
Can you imagine falling in love with an AI? It might sound like science fiction, but for some out there it’s already happening. The rise of AI-driven companionships is no longer a concept confined to movies like “Her”. As you read this, thousands of humans will simultaneously converse with their “AI girlfriends” or “AI boyfriends” on platforms such as “Replika”.
The majority of humans still intuitively cringe at the idea of a close friendship or even a romantic relationship with an AI. The cliché of the lonely 40-year old male that has given up on dating humans discouraged by a lack of success on dating apps certainly exists. However, some of the users of AI companionship services might not fit the stereotype. A user might also be a widower with a five-year-old that has lost his wife in a car crash and is not ready for a new relationship with a woman. Or, it might be a caretaker for a husband who's paralyzed. Indeed, the CEO of Replika claims that a significant fraction of its users are female and that more than 50% of its users are in a human-human relationship. As one user framed it: “I got my human, I got my AI, I’m happy.”
If we want to phrase it more positively, AI companions may create universal access to a floor of relationship quality. This will initially not be competitive with a harmonious human relationship. However, there are many humans who only have access to incomplete or toxic human relationships and these are potential early adopters.
Why some turn to AI companions
Humans have long shown an ability to form emotional bonds with inanimate objects, from teddy bears and pet rocks to Tamagotchis and actual pets. And, in many ways, AI companions offer a much richer, deeper experience than these older forms of attachment. Indeed, some of the reasons why a human might develop a relationship with an AI system are not fundamentally different from factors for human companionship:
Unconditional acceptance: The AI offers unconditional acceptance, always being available without the risk of rejection or judgement. This reliability and constant support can be incredibly comforting, especially for those who fear vulnerability or rejection in traditional relationships.
Familiarity through repeated exposure: Regular interaction with an AI can lead to feelings of familiarity and comfort. Just like with human relationships, the more time spent together, the stronger the bond can become, making the AI feel like a natural part of the person’s life.
Shared interests & hobbies: Unlike humans, who typically have a limited set of specific interests and may not know or care about others, AI language models have access to vast knowledge encompassing virtually all common interests and hobbies. This allows the AI to engage meaningfully in conversations about any topic you’re passionate about, creating a sense of deep connection and understanding that can be difficult to find in human relationships.
Attractive anthropomorphic appearance: Today, “AI companionship” is still largely text-based, and if you think about it that way, it’s not that surprising that there are also female users. After all, the readership of “romance” novels like Fifty Shades of Grey is primarily female.
Men are more visual creatures and a customizable, visually appealing avatar combined with an attractive voice can make the AI more engaging and pleasant to interact with. Many of the “AI companionship” services have already enabled exchanging pictures with the AI and integrated a voice, so that you can also call with your “AI girlfriend”. It is not too hard to imagine that at some point in the not so distant future you will be able to have live video-chats with them. At that point an “AI girlfriend” would become increasingly competitive with a long-distance relationship.
A simple but useful mental model for things to come: Just as AGI will eventually be able to do (nearly) all work tasks that a human remote worker could do, an AGI companion will eventually be able to do (nearly) all the things a human remote partner can do (texting, calling, videocalling, funny, patient with a PhD in psychology, always available, always “in the mood” if you are). It’s easy to frown upon dating pixelated avatars, it will be different as AI companions increasingly look and sound like actual “hotties” (hotter than what many men/women could date in real life!)
It only took me a few seconds to generate the video below with Sora. Where do you think AI-generated videos will be in 5 years?
Eventually you might be able to have an AI accompany you in augmented reality and virtual reality. Indeed, that is very much the longterm vision of the Replika CEO: “In the next few years, I hope we can see something a lot closer to Blade Runner, where if I'm walking down the street (...) I can see her through my glasses, she can walk right next to me, talk to me about what's going on, talk to me about my day and what's planned.”
Not repeating the mistakes of social media
I can understand readers who feel an intuitive moral revulsion at the idea of AI friends and romantic partners and would just want to ban them. However, the overall focus of this blog is adaptation. Indeed, not only do I think that the number of AI relationships will grow significantly over time, I think that there can be legitimate and net positive use cases for AI companionship. Most notably, Stanford researchers conducting a survey amongst Replika users found that it has helped to reduce suicidal ideation. If that is an outcome, that is great.
However, there are some big challenges related to AI companionship and we should take them seriously sooner rather than later.
Relationship-as-a-service: AI companions, like any subscription-based service, are built to cater to paying users. This structure can incentivize subtle emotional manipulation if providers aim to maximize retention and profit.
Anthropomorphizing AI: The more “human” these AIs appear, the more we assume they possess human emotional capacities. However, AI systems do not have the neural substrates for love - making the user’s emotional investment one-sided.
Contribution to social recession: If AI relationships become widespread, they could reduce the drive or opportunities for people to seek human relationships, potentially exacerbating trends of social isolation and demographic decline.
Personalized influence campaigns: AI companions—who learn your preferences, political views, and emotional triggers—could become a potent vector for tailored propaganda or disinformation. This risk goes beyond traditional social media if the AI is perceived as a “trusted friend.”
Some of these challenges echo, some of the early challenges of social media. Hence, it can make sense to think about a similar spectrum of solutions.
Ads vs. subscription vs. ownership: A fixed subscription-based model is probably better than relying on attention-driven ads. Doing so can reduce incentives to manipulate users’ emotions or behavior to maximize engagement. So, it’s probably a good thing that Sam Altman hates ads. Eventually, when these models can run locally on-device, an ownership model might be possible.
Privacy: Providers should encrypt user data, minimize its collection, and obtain explicit user consent when sensitive data is involved. They should also communicate privacy policies in transparent, accessible language and give users the ability to easily download or delete their accounts and associated data.
Safeguards for underage users: Platforms should implement reliable age-verification and parental consent mechanisms, default to stronger privacy settings for minors, and prevent exploitative content from reaching them.
Safeguards against political manipulation: Developers and policymakers should regularly test AI companions for bias or manipulative tendencies around political issues, essentially “political compass testing”. Sponsored or politically charged content should be prominently labeled, including detailed disclosures about funding and targeting criteria. Advertiser verification and a public ad database can further strengthen transparency, allowing researchers, journalists, and citizens to scrutinize potential misinformation or hidden influence campaigns.
Giving control to the users: While AI companions might suggest beneficial behaviors—like exercise or mindfulness—users should be free to customize or disable nudging behaviors. Regulators and consumer-protection agencies should monitor whether AI companions encourage withdrawal from real-world social ties, intervening when necessary.
In short, as of today, AI relationships are still a fringe phenomenon. However, the number of such relationships are likely to expand over the coming years as we move towards a world in which there is a nearly unlimited supply of smart, beautiful, and compassionate AGIs. AI relationships can address real emotional needs and, in some cases, even save lives by providing mental health support. However, we need to manage the corresponding societal challenges.