What We Learned Building Our Own Chatbot

For the 2016 Innovation Festival, we teamed up with digital design agency Work & Co to create a new solution for our attendees’ questions.

Our annual Innovation Festival—a four-day event this fall with over 200 speakers, and events spanning more than 100 locations across New York City—requires an extreme amount of coordination. The success of the event is contingent on over-communicating with our nearly 6,000 attendees to ensure that they know the where, when, and what of the sessions on their agendas.

With all the talk of chatbots and consumer communications moving to messenger platforms, I thought the format could help ensure attendees get answers to logistical questions in real time.

To test the idea, we partnered with Work & Co, a digital agency led by designers Joe Stewart and Felipe Memoria, among others, to create the FCNY16 Bot on Facebook Messenger. Marcelo Eduardo, the agency’s technology partner, led the charge on our collaboration. “A chatbot felt like a really natural solution for an event that was spread across over 100 venues with hundreds of speakers,” he says. “It would be much more responsive and quicker to use than searching a website, and it could be a more engaging companion as compared to an app for answering common questions like when a session is and how to get there.”

We only had a month to build, test, and launch the bot, but we still came away with useful learnings and feedback from attendees. For me, one of the biggest revelations was the need to set expectations for users about how they can interact with the bot (i.e., users can’t ask three questions in one request and expect a succinct answer; even if asked of a real human, a request like that would still take a few steps to answer).

Here, Eduardo shares some of the Work & Co team’s technical and design challenges in building our chatbot, as well as lessons from their process:

Why did you decide to build the FCNY16 Bot on Facebook Messenger?
While chatbots can be applied across multiple channels, we decided to focus on building this chatbot for one primary platform. We didn’t want to ask people to download a new messaging app or create a new account, which quickly narrowed the choice to Facebook Messenger. Most people already have a Facebook account, and Messenger is already a popular platform for chatting that can be accessed on desktop, mobile web, and through the app. Facebook also offers robust documentation for developers and uses wit.ai under the hood, which provides some natural language capabilities.
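
To make that setup concrete, here is a minimal sketch of what a Messenger webhook of that era looks like, written in Python with Flask. The tokens are placeholders and the echo reply stands in for the NLP layer; this follows the 2016 Messenger Platform’s general shape, not Work & Co’s actual code.

```python
import requests
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = "fcny16-verify-token"      # placeholder
PAGE_ACCESS_TOKEN = "page-access-token"   # placeholder

@app.route("/webhook", methods=["GET"])
def verify():
    # Facebook calls this once to confirm you control the endpoint.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "invalid token", 403

@app.route("/webhook", methods=["POST"])
def receive():
    # Each POST may batch several messaging events.
    for entry in request.json.get("entry", []):
        for event in entry.get("messaging", []):
            if event.get("message", {}).get("text"):
                # The natural language layer would go here.
                reply(event["sender"]["id"],
                      "You said: " + event["message"]["text"])
    return "ok"

def reply(user_id, text):
    # Send API endpoint as it stood in 2016 (Graph API v2.6).
    requests.post(
        "https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={"recipient": {"id": user_id}, "message": {"text": text}},
    )
```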

What was the biggest technical challenge in building this bot?
Natural language processing is a challenge that technologists have been working on for decades. Have you ever tried to use Siri offline? It’s less dependable, because making Siri understand such a range of open-ended requests means pulling from databases so large they can’t be stored on the device.

For this chatbot, we started by building on top of the existing natural language platform offered by wit.ai, along with Elasticsearch. [Our] challenge was around defining what level of specificity would consistently give the right match for a response. The sessions were a unique challenge: with 130 sessions that often had similar keywords in their titles (for instance, four different sessions had the term “future of” in their titles), we had to match with enough fidelity so as not to pull up every session with a certain keyword, but it [also] couldn’t be so rigid that people would have to remember exact session titles in order to get a match.
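
A rough sketch of that first layer, assuming the sessions were indexed in Elasticsearch with a “title” field (the index name here is illustrative):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

def search_sessions(query_text):
    # A plain full-text match: any title sharing words with the query
    # scores a hit, which is exactly how a phrase like "future of"
    # ends up pulling back four different sessions.
    result = es.search(
        index="fcny16-sessions",
        body={"query": {"match": {"title": query_text}}},
    )
    return [hit["_source"]["title"] for hit in result["hits"]["hits"]]
```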

To get to that greater specificity, we set a threshold for a match in Elasticsearch: a certain share of the words in the query had to appear in a session title to count as a match. However, words like “the,” “and,” or “is” were also factoring into that match threshold, and those words could be part of how a question is phrased rather than the session itself, [so] we decided to add another layer of keyword matching on top of Elasticsearch, based on exact keyword phrases. These phrases were identified by [real] people–including feedback from the broader Work & Co team on how they might search–so that we were accommodating the way a human might actually recall a session [like the one we hosted, titled] “Work & Co on Timelessness: What Digital Designers Can Learn from Vignelli, the Eameses, and Mies van der Rohe.”
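
A simplified sketch of how those layers might stack: stop words are stripped so they don’t inflate the threshold, hand-curated phrases win outright, and the thresholded Elasticsearch query is the fallback. The stop words, phrases, and threshold here are invented for illustration; the real lists were curated by people.

```python
STOP_WORDS = {"the", "and", "is", "a", "an", "of", "when", "where"}

TIMELESSNESS = ("Work & Co on Timelessness: What Digital Designers Can "
                "Learn from Vignelli, the Eameses, and Mies van der Rohe")

# Exact phrases people actually use to recall a session.
KEYWORD_PHRASES = {
    "timelessness": TIMELESSNESS,
    "vignelli": TIMELESSNESS,
    "mies van der rohe": TIMELESSNESS,
}

def match_session(query, search_sessions):
    # Drop filler words so "when is the future of..." doesn't count
    # "when," "is," and "the" toward the match threshold.
    words = [w for w in query.lower().split() if w not in STOP_WORDS]
    cleaned = " ".join(words)

    # Layer 1: hand-curated exact phrases take priority.
    for phrase, session in KEYWORD_PHRASES.items():
        if phrase in cleaned:
            return session

    # Layer 2: fall back to the thresholded Elasticsearch search,
    # e.g. a match query with minimum_should_match of "60%".
    candidates = search_sessions(cleaned)
    return candidates[0] if candidates else None
```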

All of this combined gave us much greater accuracy in matching the session a user was actually looking for.

What was the most difficult user experience challenge?
It’s hard to predict all of the pathways a user might take into a conversation, or the tangents they might want to follow. We had to answer questions such as: How guided or conversational do we want the pathway to be? Should this bot be wide and shallow, or narrow and deep, in the information it could give?

We started by mapping out the most likely questions from users and then prioritizing them in a spreadsheet by conversational pattern: greetings, session info, abusive language, time of day, etc. By understanding the most likely flows, we realized that it was most important for the bot to be smart about session information, and about the speakers and locations for particular [workshops or talks].
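
For a sense of how a spreadsheet of conversational patterns translates into code, here is an illustrative intent router; the regexes and pattern names are stand-ins, not the festival’s real list.

```python
import re

# Conversational patterns from the prioritized spreadsheet, roughly
# in priority order. These expressions are illustrative stand-ins.
PATTERNS = [
    ("greeting",     re.compile(r"\b(hi|hello|hey)\b", re.I)),
    ("abuse",        re.compile(r"\b(stupid|useless)\b", re.I)),
    ("location",     re.compile(r"\b(where|location|address|venue)\b", re.I)),
    ("session_info", re.compile(r"\b(session|talk|workshop|speaker)\b", re.I)),
]

def classify(message):
    for intent, pattern in PATTERNS:
        if pattern.search(message):
            return intent
    return "session_search"  # default: try to match a session title
```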

This led us to design an experience that was more guided, but deeper around the sessions. When you ask for a session, it always gives you the option to get the full description, time and location, and speaker bios. But if you’re just looking for a location, we wanted to get you there as quickly as possible–returning a map card with the address in Google Maps.
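
As a sketch of that map card, here is one way to send a Messenger generic template whose button opens the address in Google Maps; the venue data and token are placeholders, and the Send API call mirrors the webhook sketch above.

```python
import urllib.parse
import requests

def send_map_card(user_id, venue, address, page_access_token):
    # A generic-template card: venue name, address, and a button
    # that opens the location in Google Maps.
    maps_url = "https://maps.google.com/?q=" + urllib.parse.quote(address)
    card = {
        "attachment": {
            "type": "template",
            "payload": {
                "template_type": "generic",
                "elements": [{
                    "title": venue,
                    "subtitle": address,
                    "buttons": [{
                        "type": "web_url",
                        "url": maps_url,
                        "title": "Open in Google Maps",
                    }],
                }],
            },
        }
    }
    requests.post(
        "https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": page_access_token},
        json={"recipient": {"id": user_id}, "message": card},
    )
```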

Secondary to that was answering other common questions about things like purchasing tickets and food options. We also layered in a degree of contextual awareness, such as recognizing the time of day and whether someone was returning within a single day or across multiple days of the festival.
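
An illustrative sketch of those contextual touches, using in-memory state for brevity (a real bot would persist this per user):

```python
from datetime import datetime

last_seen = {}  # user_id -> date of the user's previous message

def contextual_greeting(user_id, now=None):
    now = now or datetime.now()
    daypart = ("morning" if now.hour < 12
               else "afternoon" if now.hour < 18
               else "evening")
    # Repeat use within the same day reads differently than a user
    # coming back on a later day of the festival.
    same_day = last_seen.get(user_id) == now.date()
    last_seen[user_id] = now.date()
    prefix = "Welcome back! " if same_day else ""
    return f"{prefix}Good {daypart}! Which session can I help you find?"
```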

For an experimental project like this, how were you thinking about measuring its success? Did you care most about how many people used the bot? How long they interacted with it?
Defining success for a chatbot is also new territory. We chose to use Facebook Insights as well as integrate with DataDog. Because our bot was only live for the four days of the festival, we didn’t have a ton of opportunities to integrate insights from the chat logs to continue to improve the experience. But for more evergreen chatbots, this has to be the expectation. These products should be highly iterative and constantly evolve based on how people are using them. It’s no different from how Google Search continually monitors how people search to tweak its algorithm.
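
For flavor, here is roughly what that instrumentation could look like using the official DataDog Python client’s DogStatsD interface; the metric names are invented for this sketch.

```python
from datadog import initialize, statsd

initialize(statsd_host="localhost", statsd_port=8125)

def record_interaction(intent, answered):
    # Count every message by intent, and separately flag the ones the
    # bot couldn't answer; those are the cases to mine from chat logs.
    statsd.increment("fcny16bot.messages", tags=["intent:" + intent])
    if not answered:
        statsd.increment("fcny16bot.unanswered", tags=["intent:" + intent])
```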

Ultimately, we chose to define success as positive interactions with the bot: were we helping conference-goers as we intended to? An additional measure was whether we were able to get them the information they needed in the fewest interactions.

Are you working on other chatbot projects for clients? What’s the biggest thing you’re focused on in this area generally?
Companies are really interested in understanding how they might use a conversational interface, so we are currently working on a couple of chatbot projects. The platform opens up potential for meaningful interactions with customers that go far beyond marketing messages or pushing news updates. The goals we’re working toward include pushing personalized information, being predictive, and helping people complete tasks more quickly.

People are casually messaging all the time now–on SMS, Facebook Messenger, Slack, Snapchat. It’s a really natural behavior for consumers, so expectations are quickly moving toward that same kind of quick, informal interaction with brands. And since they can integrate with a wide range of APIs, chatbots can do everything from completing e-commerce transactions to securely verifying identities. It’s a great time for companies to think about what value they can provide through this type of interaction.
