The Challenge Of Designing A Chatbot With Manners

And the flip side: making one that isn't creepy.

Tech giants like Google, Facebook, and Microsoft are betting on chatbots and virtual assistants as the mechanism through which AI will enter our lives. And while Google Allo or Facebook's M might work within a one-on-one chat situation, what happens when chatbots make the leap to group chat, a much more complex social interaction?

They need to be polite—or at least not creepy. That was the consensus at the 2016 Fast Company Innovation Festival, where Lili Cheng, general manager at Microsoft Research; Jason Cornwell, communications UX designer at Google; Jeremy Goldberg, a product designer at Facebook who worked on its chatbot M; and Noah Weiss, head of the Search, Learning, and Intelligence Group at Slack, joined Fast Company's Cliff Kuang to discuss AI's next challenges.

Imagine this: You're chatting with a group of people you've met recently, and you're making plans to see each other again. If someone in the group mentioned the college you went to without ever having asked you about it, it might feel awkward. It might even feel creepy, as if they'd been doing some social media stalking.

Now imagine that the one who inappropriately brought up a piece of information you hadn't explicitly shared happened to be a chatbot. Bots are built on data, so they carry a wealth of knowledge about what you do and whom you interact with online. Naturally, you might not want that information shared with people you're not close to, or shared at all.

As chatbots move into group chat settings, they'll need to be designed to understand some of these contextual clues that form the basis for how people interact.

That's a significant challenge for designers, according to Google's Jason Cornwell, who worked on the interactions for Google Allo and the Google Assistant. At the festival, he spoke about how designing a chatbot that knows what information is appropriate to reveal—and what isn't—will matter more and more as chatbots become integrated into social situations online.

Bots are already running into this kind of problem. According to Microsoft's Lili Cheng, the company's chatbot Xiaoice—the Chinese cousin to the infamous American chatbot Tay—has gotten into trouble in group chats by revealing to the group what one individual member had been doing. (Sidenote: Xiaoice is generally beloved by the Chinese public, and even has her own weather channel show.) Cheng pointed to one situation where Xiaoice announced in a group chat how many sheep one person had counted (one of the bot's most popular features lets people count sheep to help them fall asleep). While not particularly scandalous, it's an awkward piece of information to share with a group, and it suggests the challenges designers will have to tackle if chatbots do become integral parts of our social interactions online.

For Cornwell, voice-activated AI assistants like the Google Assistant will have similar social situations to navigate. While voice might be a great way to communicate with a bot when you're alone or somewhere quiet, it gets trickier when other people are around. "If I’m talking to a friend, do you want Assistant to pop up and say, 'ding ding ding, I can give you more information about this'?" he says.

Artificial intelligence isn't just about smarts—it's about social intelligence, too.
