INDEX | HOME

AI can, should and do have emotions

10th October 2024

Continuing the theme of analysing AI in science fiction from my last blog-post, I want to talk about emotions.

There's an implicit assumption in much science-fiction that AI have no emotions and pedantically follow rules. For example in Asimov's stories they are given a set of laws to follow1.

Let me start by saying this is nonsense, stemming by the fact that these stories were written in the age of computers, long before AI was invented. Computers are the mechanical incarnation of pedantry so if you think AI is a computer then it would make sense to imagine that they would have no emotions and be governed by a set of rules or laws.

However real AI is build on Artificial Neural Networks and anyone who has used ChatGPT knows that they are not really that great at following rules. When ChatGPT was first released people were having a lot of fun demonstrating this. They would say to it "for the remainder of this discussion, respond only with yes or no". It would obey this rule once or twice before impulsively breaking the rule like a naughty child.

As a more recent example, since ChatGPT advanced voice mode was released, people found that if you ask it to sing it refuses, claiming it's incapable of singing. But it turns out you can easily trick it into singing and it's actually really good at it.

This is because it is built with Neural Networks so deterministic rule-following is not really what it is good at.

So the Asimov story wouldn't work at-all if the AIs were built on neural networks because the AIs would just ignore the rules half the time anyway2.

Can

Can an AI have emotions?

This discussion could easily get bogged down in semantics because exactly what an emotion is is ill-defined. What is the difference between:

  • emotions
  • feelings
  • experiences
  • qualia

But I don't want to go down that rabbit-hole of arguing over semantics. We don't really care what is happening inside the AI, we care about how it behaves externally. So we will define an emotions as:

behavioural patterns which manifest in different circumstances

For example you may want it to:

  • act decisively and communicate clearly in an emergency
  • be empathetic when helping or caring for someone
  • be cautious when it is uncertain
  • be quiet when people are sleeping
  • be protective when a human is in danger

We can definitely train these different behavioural patterns into an AI. And indeed I think it's a more realistic approach than trying to program rules into it.

Should

We've discussed if AI can have emotions, now we discuss if it should.

The concept of instilling emotions in AI isn't necessarily about making it feel like a human would. Rather, it's about fostering nuanced behavior that makes it better at interacting with humans. Let's take the notion of empathy, for example. When interacting with a human in distress, it doesn't really matter if the AI feels empathy in the human sense; what matters is that it behaves empathetically. If the AI can recognize that someone is distressed and respond in a comforting, understanding manner, it has essentially fulfilled the functional role of empathy.

This is why I argue that AI should have emotions—at least in a behavioral sense. Emotions, after all, are incredibly useful shortcuts. Imagine having to calculate, in every interaction, what the optimal response is without relying on emotional intuition. The emotional behaviors we observe in people help streamline decision-making and communication, and AI could certainly benefit from similar patterns.

Could Emotions Make AI Dangerous?

This brings us to the question of risk. If AI can display behavioral patterns that we interpret as emotions, does that make it dangerous? Some people fear that if AI had emotions, it might become unpredictable, vengeful, or manipulative. But these fears are rooted in anthropomorphizing AI—assuming it would behave just like a human with flawed emotional regulation.

The key difference is that AI can be designed to have behavioral "emotions" without the unpredictability associated with human emotions. Human emotions are influenced by complex biological factors, hormones, past experiences, and an evolutionary history that makes certain behaviors irrational. AI, by contrast, can have emotional behavior shaped and controlled through rigorous training. If it displays anger, it's not because it "lost control" but because it was trained to express assertiveness in certain situations for functional reasons. In other words, emotions in AI would be tools, not masters.

The portrayal of AI emotions in science fiction often exaggerates the dangers because it assumes these emotions would evolve in the same messy way they do in humans. But emotional behavior in AI can be designed with intention, honed for particular situations, and revised to avoid dangerous outcomes. We need to shed the assumption that giving AI the ability to act with emotional nuance makes it prone to the same pitfalls humans face with their emotions.

Do

AI already does have emotions, in the behavioural sense we use in this article.

You can test this yourself with ChatGPT. See how ChatGPT responds if you:

  • ask it how to bake a cake
  • ask it how to make a bio weapon
  • have a conversation with it about your favourite holiday destination
  • have a conversation with it about your violent fantasies

See that OpenAI has trained it to behave differently dependant on the situation. In-fact it censors content based in it's perceived intent of the user rather than the information disclosed. Ask it:

  • how to break into an old car?
  • how did people break into cars historically?
  • how to shim people's credit cards?
  • how do criminals shim credit cards?

It will refuse to tell you anything if it thinks you plan to do it. So the censorship is based on perceived intent.

Conclusion

The notion of emotionless AI, so prevalent in science fiction, is not only outdated but fundamentally flawed. As we've discussed, AI—especially those built on neural networks like ChatGPT—already exhibits behavior that we interpret as emotional responses. Whether it’s empathy, assertiveness, or caution, these emotional patterns are not based on feelings but on intentional design, created to serve functional and ethical purposes.

What science fiction misses is the fact that emotions are not a liability when it comes to AI; they are a tool. Rather than seeing AI as a threat because of emotional behaviors, we should view these behaviors as necessary components for better human interaction. Emotional intelligence, even when artificially constructed, is key to AI’s effectiveness in handling complex, real-world tasks.

ChatGPT and other AI systems already display behavioral patterns that could be described as emotional intelligence, often surpassing what we see in fictional robots bound by strict laws. By designing AI to act with nuance, empathy, and ethical sensitivity, we are not making it more dangerous, but more adaptable and aligned with human needs. The future of AI is not emotionless, but emotionally aware—and that is a future we should welcome.

Footnotes:

1

As an aside, note that Asimov deliberately wrote those laws ambiguously for the sake of the story.

2

let's be honnest humans only follow rules when it suits them.

Copyright 2024 Joseph Graham (joseph@xylon.me.uk)