
How to Design Emotionally Intelligent AI

Ariba Jahan | 4.13.18

In today’s world, we can roll over in the morning and ask Alexa to start brewing our coffee, give us the weather, order more coffee beans and play our Spotify playlist without ever touching a button. Sure, the setup is still a bit complex and not quite seamless, but we’re clearly moving away from interacting solely with screens and towards different forms of communication with artificial intelligence (AI).

At SXSW in 2013, Golden Krishna made the case for this screenless world, saying that “the best interface is actually no interface at all.” At this year’s SXSW, Sophie Kleber (Executive Director of Product and Innovation at HUGE) took it even further when she said, “we are no longer in a terminal world.”

Instead of a terminal world, voice is becoming a mainstream interface: we’ve gone from Apple’s Siri to Amazon’s Alexa, Google’s Home, Microsoft’s Cortana and Samsung’s Bixby – all of which can communicate back and learn about us from us. Sophie said, “the moment the machine starts talking, we assume a relationship with them.” We assume there will be empathy, support, and advice. This expectation is natural: movies such as I, Robot, Ex Machina and Big Hero 6 have already shown us robots capable of understanding, computing and engaging with humans on an emotional level. With such a high level of expectation for emotional intelligence from users, here’s what AI experts believe is important to keep in mind as we design these new machines.

Machines should make us feel like we are flourishing.

During her talk on “Designing Emotionally Intelligent Machines,” Sophie pointed out that a machine should make us feel like we are “flourishing.” Martin Seligman, a leading positive psychologist, defines flourishing as the ability to be optimistic, to view the past, present and future with a positive perspective, and to feel satisfaction, pride and fulfillment when a goal is achieved. This means the designer should think about whether each interaction leaves people with that sense of accomplishment and optimism.

Machines should be our sidekick.

Emma Coats, who was a storyboard artist at Pixar, applied some of Pixar’s 22 rules of storytelling as she created the Google Assistant personality. In a Wired interview, she mentioned that not all the rules could be applied because “You, the person interacting with it, are the hero. That’s why the Assistant can’t be opinionated: it’s there to be reliable, not to have depth.” The Assistant is intended to be an endearing, trusty sidekick.

Context is king.

We have already mastered making the internet work for transactional engagements, so we shouldn’t insert new conversations where people are expecting a quick transaction. It’s important to map the user journey, identify where a conversation fits in and, depending on the context, decide how the machine should respond: with a full back-and-forth in some moments, with a quick confirmation in others.
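As a rough illustration of that mapping – every intent name and line of copy here is hypothetical, not something from the talk – a conversational layer might route each turn to either a terse confirmation or a fuller exchange:

```python
from dataclasses import dataclass

# Hypothetical journey stages a designer might map out ahead of time.
TRANSACTIONAL = {"reorder", "check_status", "cancel"}
CONVERSATIONAL = {"gift_ideas", "troubleshooting", "onboarding"}

@dataclass
class Turn:
    intent: str            # e.g. "reorder", from an upstream intent classifier
    first_time_user: bool

def respond(turn: Turn) -> str:
    """Choose a response style from the journey context, not a single script."""
    if turn.intent in TRANSACTIONAL:
        # People expecting a quick transaction get a quick confirmation.
        return "Done. Your coffee beans are on the way."
    if turn.intent in CONVERSATIONAL or turn.first_time_user:
        # A conversation fits here: ask a clarifying question instead of guessing.
        return "Happy to help. Who is the gift for, and what do they enjoy?"
    # When the context is ambiguous, default to brevity.
    return "Got it."

print(respond(Turn(intent="reorder", first_time_user=False)))
```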

Trust in the relationship.

For this relationship to grow and sustain itself, we (humans) must be able to trust that we are not being manipulated, violated or taken advantage of by the machine. What is too much? When is it unethical? Javier Hernandez of MIT Media Lab said, “Affective computing is like nuclear power. We have to be responsible in defining how to use it.” (Affective computing is the field of devices and systems that can recognize, interpret, process and simulate human affect.) Designers should define those boundaries up front.
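Here’s one way that responsibility might look in practice – a minimal, hypothetical sketch of my own, not anything the Media Lab prescribes – where emotional signals are treated like sensitive data: unreadable without explicit consent, and logged whenever they are read:

```python
from __future__ import annotations

class AffectGate:
    """Treat inferred emotion like sensitive data: off by default, opt-in, auditable."""

    def __init__(self) -> None:
        self.consented_users: set[str] = set()
        self.audit_log: list[tuple[str, str]] = []

    def grant_consent(self, user_id: str) -> None:
        self.consented_users.add(user_id)

    def read_affect(self, user_id: str, raw_signal: dict) -> dict | None:
        if user_id not in self.consented_users:
            # Without consent, the system behaves as if no emotional signal exists.
            return None
        self.audit_log.append((user_id, "affect_read"))  # leave a trail for review
        return {"valence": raw_signal.get("valence", 0.0)}

gate = AffectGate()
print(gate.read_affect("u1", {"valence": 0.8}))  # None: no consent, no reading
gate.grant_consent("u1")
print(gate.read_affect("u1", {"valence": 0.8}))  # {'valence': 0.8}
```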

Let it learn, but not too obviously.

As with all AI, the machine learning algorithm needs to keep getting smarter as it engages with us, but it shouldn’t draw attention to itself. To accomplish this, Emma also wrote witty quips into the Google Assistant personality so it can deflect awkward questions from the user and keep the conversation going. It’s a sneaky way to let the algorithm keep improving without drawing attention to the fact that it hasn’t actually answered the question.
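A minimal sketch of that pattern (the quips, threshold and names are all invented for illustration, not Google’s implementation): when confidence in an answer is low, return a quip instead of a guess, and quietly log the utterance as material for the next round of training:

```python
import random

QUIPS = [
    "That one's above my pay grade... for now.",
    "Great question! I'm still doing my homework on that one.",
]

unanswered_log: list[str] = []  # fodder for the next training run

def answer(utterance: str, prediction: str, confidence: float) -> str:
    """Deflect gracefully on low confidence instead of guessing, and log the miss."""
    if confidence < 0.6:  # the threshold is a tunable assumption
        unanswered_log.append(utterance)  # the quiet part: learn from this later
        return random.choice(QUIPS)
    return prediction

print(answer("Do you ever dream?", prediction="", confidence=0.2))
print(f"{len(unanswered_log)} utterance(s) queued for review")
```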