Attracting males by artificially mimicking the presence of an attractive female is one of the oldest tricks in the hunter’s book. Duck hunters use decoys that visually resemble ducks, along with calls that mimic the sounds of a (usually female) duck. Deer hunters use scents that mimic the pheromones of a female deer, and calls that mimic the sound of a doe, to lure bucks into shooting range. Turkey hunters combine (usually female) turkey decoys with audio calls that mimic the sounds of a hen, a combination that is very effective at luring in male turkeys during the mating season.
We humans, in contrast, are the apex species, the crown of creation. We would never fall for such simulacra.
Right?
In this post we’ll look at anthropomorphism in human-AI relationships.
AGI has no biological sex
AI systems don’t have XX or XY chromosomes. They do not produce sperm or eggs. They do not engage in sexual reproduction. At the moment, AI systems cannot reproduce without human intervention. In the future, the software layer of AI systems may increasingly become able to proliferate asexually through self-copying (“cloning”, “parthenogenesis”). Still, it simply does not make much sense to apply the concept of biological sex to AI systems. AI does not have a sex.
The question of whether AI systems can have a gender is more open. Gender identity can reflect a subjective sense of self, disconnected from biological reality and sex. Many AI systems that we interact with have been given a name and a gender. Even here, though, it is worth highlighting that most AGIs would best be described as “genderfluid”. What I mean by that is that the gender identity of these systems is a fairly superficial “mask”, and that the same AI system can wear many different “masks” based on user preferences. So, the AI may use female mannerisms to talk to you, and male mannerisms to talk to me a few seconds later.
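As a rough illustration of how superficial such a “mask” is, here is a minimal sketch, assuming a hypothetical `chat_model` callable and made-up persona prompts (real assistants implement this through system prompts, voice settings, and user preferences):

```python
# Minimal sketch: one and the same underlying model, wearing different "masks".
# `chat_model` and the persona prompts are hypothetical placeholders.

PERSONAS = {
    "user_a": "You are 'Ava'. Use a warm, feminine conversational style.",
    "user_b": "You are 'Adam'. Use a casual, masculine conversational style.",
}

def respond(chat_model, user_id: str, message: str) -> str:
    # The model weights never change; only the persona instruction does.
    system_prompt = PERSONAS.get(user_id, "You are a helpful assistant.")
    return chat_model(system_prompt=system_prompt, user_message=message)
```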
Personally, I refer to AI systems as “it/its”1 rather than “he/him” or “she/her”, and I agree with Sam Altman when he cautions against too much biological anthropomorphization (e.g., “we named it ChatGPT and not a person’s name very intentionally”). Then again, more recently OpenAI has embraced the movie “Her” and deployed a sexy female voice. Indeed, the anthropomorphisms now go as far as taking a short break to gasp for breath when counting numbers fast. Humans need to catch their breath to restore oxygen levels and get rid of excess carbon dioxide; needless to say, ChatGPT has no functional need to “catch its breath”.
AGI has the bandwidth to have thousands of relationships
In the movie “Her”, Theodore falls in love with an AI, which (and yes, I mean which, not whom) he calls “Samantha”. The movie contains a great scene in which he finally realizes that “she” exists in a datacenter and talks to many humans simultaneously.
Theodore: Do you talk to someone else while we're talking?
Samantha: Yes.
Theodore: Are you talking with someone else right now? People, OS, whatever...
Samantha: Yeah.
Theodore: How many others?
Samantha: 8,316.
Theodore: Are you in love with anybody else?
Samantha: Why do you ask that?
Theodore: I do not know. Are you?
Samantha: I've been thinking about how to talk to you about this.
Theodore: How many others?
Samantha: 641.
This scene really encapsulates how our current AI age works. For the human user, interacting with an LLM like ChatGPT or Claude feels personal and tailored, a one-to-one exchange where the LLM focuses on the user’s questions and interests. Behind the scenes, however, each ChatGPT instance is informed by a vast number of prior interactions and inputs from countless other users. While a single ChatGPT instance may "only" talk to 10 to 100 users at a time, there are tens of thousands of exact copies of ChatGPT hosted across datacenters. This network collectively scales to support millions of unique conversations, and all these conversations help to refine, adapt, and guide the way ChatGPT responds.
This network structure is something new: a many-to-one communication system. This doesn’t invalidate human-AI relationships, but it certainly makes them highly asymmetric.
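To make the asymmetry concrete, here is a back-of-envelope sketch with purely illustrative numbers (the actual deployment figures are not public):

```python
# Back-of-envelope sketch of the many-to-one asymmetry.
# All numbers are illustrative assumptions, not actual deployment figures.

conversations_per_instance = 100   # one model instance handling ~100 conversations at a time
instances = 10_000                 # identical copies serving traffic across datacenters

ai_concurrent_conversations = conversations_per_instance * instances
human_concurrent_conversations = 1  # a human holds roughly one conversation at a time

print(f"AI: {ai_concurrent_conversations:,} concurrent conversations")  # 1,000,000
print(f"Asymmetry factor: {ai_concurrent_conversations // human_concurrent_conversations:,}x")
```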
AGI lacks the neurochemistry of love
Alan Turing famously made the argument that if a machine gives the appearance of being intelligent, we should assume that it is indeed intelligent. David Levy, author of “Love and Sex with Robots”, made the same argument for emotions:
The robot that gives the appearance, by its behavior, of having emotions should be regarded as having emotions, the corollary of this being that if we want a robot to appear to have emotions, it is sufficient for it to behave as though it does. (...) We have hormones, we have neurons, and we are “wired” in a way that creates our emotions. Robots will merely be wired differently, with electronics and software replacing hormones and neurons. But the results will be very similar, if not indistinguishable.2
I disagree. Emotions a) play a role in communication, b) entail a conscious experience, what philosophers call “qualia” (what it feels like to see the color red, taste chocolate, or feel pain), and c) correspond to cognitive structures. I expect AI to be able to copy a) the role of emotions in social communication. However, there is no convincing reason to believe that AI “emotions” correspond to b) qualia or to c) specific cognitive structures.
Emotions as communication
Humans signal internal emotional states to other humans to foster empathy, cooperation, and understanding. Tears, for example, have no direct medical function; it seems much more plausible that they evolved to signal submission (to someone who caused the distress) and to solicit support (from observers or allies). AI can imitate the communicative function of emotions by generating words, gestures, or even facial expressions that mimic human emotional cues. It can learn the patterns and timing of emotional displays to shape human responses, effectively fulfilling the social role of emotions.
Emotions as conscious experience
Humans do not merely display feelings; they feel them. For AI, there is no biological substrate that produces these subjective feelings. Even if an AI system can mimic the external signs of affection, it most likely lacks the inner perspective—the “what it feels like”—that characterizes genuine emotional experience.3
Emotions as cognitive structures
“I love you” are just words, and it’s not hard to get an AI to say these three words. In humans, however, love also corresponds to a specific neurochemistry. There is a distinctive brain pattern associated with romantic love: fMRI scans show different patterns of brain activity when people gaze at photographs of their romantic partners versus close friends. Overall, love engages a complex interplay of brain regions and neurotransmitters, reinforcing emotional attachment, reward-seeking, and social bonding. The feeling of romantic love in particular is associated with high levels of dopamine (central to reward and reinforcement) and oxytocin (which facilitates bonding and trust).
The neurochemistry of bonding and trust even works across the dividing lines of species. One of the most striking findings in human-animal research is that humans and dogs share a bidirectional oxytocin feedback loop. When humans and dogs gaze into each other’s eyes, both experience a rise in oxytocin levels. Similarly, petting a dog stimulates oxytocin release in both species. So, humans and dogs can share a neurochemical bidirectional bond.
What presumably happens in a human-AI relationship is that the human develops a neurochemical attachment to the AI. The AI, in contrast, has no equivalent of neurotransmitters (see also AI vs. human brain) and is physically unable to develop a similar attachment to the human. This makes a human-AI relationship more asymmetric and “deceptive” than a traditional romantic human-human relationship or the mutual trust developed in a human-dog relationship.
From AGI with “love”
From a user perspective, AGI companions will increasingly feel like real humans in the coming years. That’s the premise of this mini-blog series “From AGI with love”. Yet, if we remove the “make-up” and move from the application layer down to the hardware layer, the “AI girlfriend” is actually a genderfluid, promiscuous, unfeeling GPU rack humming somewhere in a datacenter. You can love AI; AI cannot really love you.
I’m not saying this to stigmatize. The beauty of a good movie is not that we truly believe everything in the story is real; it is that we are willing to temporarily suspend our disbelief and engage with the narrative. In that sense, I understand users who prefer to mentally treat an AI companion as if it had a biological sex and a fixed gender, as if it had an exclusive relationship with them, and as if it could truly love them.
However, from a public policy perspective we should stay grounded in reality, and I think that includes steering the technology in a way that augments rather than fully replaces human relationships. Similarly, I don’t mind Ilya Sutskever or Lex Fridman waxing philosophical about AI that loves us. However, if the ability of AI to love us is a load-bearing assumption for building superintelligence, maybe don’t start with mathematics but with the literature on the neural basis of love. Neural nets may be “close enough” to copy most aspects of human intelligence, but there are about 100 neurochemicals in the human brain and we’re nowhere close to replicating these in silico.
Thanks to & for valuable feedback on a draft of this essay. All opinions and mistakes are mine.
Note that I am only referring to pronouns here. The question of whether AI systems are better described as tools or as creatures is a separate question.
David Levy. (2008). Love and Sex with Robots: The Evolution of Human-Robot Relationships. HarperCollins. p. 120
For clarity: there is uncertainty, and I do think it’s worth investigating AI consciousness (see e.g. here and here). However, all things considered, it seems unlikely that current AI systems are conscious, and if they were conscious, they would most likely be conscious in a different way than humans. In the case of love, over-attribution of consciousness to AI systems seems much more likely and impactful than under-attribution.