She’s quick with a joke and decisive with her judgments. She loves shopping at Target, prefers New Balance over Nike, and, for some reason, despises my coworker Meg.
She’s not my editor, Suzanne. She’s my editor’s digital clone, Suzannebot.
Suzannebot is a Slack bot that I built on Dexter, a new startup out of Betaworks. In a world that’s filled with more and more bots–all the way from Apple’s highly complex Siri AI down to the rudimentary shopping assistants you might talk to on Kik messenger–Dexter is a platform that wants to bring bot building tools to people who don’t code.
“We dubbed ourselves a mashup of Squarespace and Mailchimp, for messaging,” says Dexter CEO Daniel Ilkovic, with the autonomic reflex of an entrepreneur who has been forced to explain his nascent platform thousands of times.
I happen to be Dexter’s perfect target demo. I’m more of a creative than a techie. I’ve used a CMS in publishing, and deal with HTML now and again, but if I could code, trust me, I wouldn’t be writing stories, trying to inform and entertain you. I’d be building bots, I guess. I’d be making that bot money.
I’m joking, and yet, two months ago, I caught the bug. I wanted to build my own Facebook bot. I carved out time over a few evenings, and read through tutorials with the zealous fervor of a modern-day Frankenstein. It wouldn’t be hard at all, it seemed. I would soon CREATE LIFE IN CODE. Yet I was thwarted on step one–which I believe involved setting up some sort of virtual server on my computer. My dreams ended there. The spark of life, extinguished.
Then I got an email about Dexter. The two-year-old startup wants to make chatbots both easier to build, and easier to actually deploy on Facebook Messenger, Slack, SMS, and Twitter. Its market is anyone from hobbyists like me, to small businesses, to Fortune 500 companies–anyone who might need a chatbot, but doesn’t have the resources to build their own Siri.
So what is Dexter, exactly? It’s basically two things. It’s one-half tutorial, a superb, step-by-step FAQ that explains in very plain terms how to build a bot in a language that’s similar to HTML. Its other half is the Dexter editor itself. It’s a terminal window in which you code, along with a mock-up iPhone where you can quickly preview if your code worked.
Dexter’s code is pulled from the open source project RiveScript, which is infinitely simpler than the command-filled code of other chatbot options. “The RiveScript language was kept dead simple like this for the sake of being easy to parse it,” says its creator, Noah Petherbridge. For instance, to write a question your user might ask the bot? You just use a “+.”
+ What is your name?
And to write an answer, you use a “-.”
- My name is Johnny 5
“Even if you hop into the [Dexter] editor now, there are two things you edit,” says Brendan Bilko, head of product at Dexter: “what the user says, and how the bot responds.”
Of course it gets more complex than pluses and minuses. You can program your bot to pick at random from multiple responses to one question. You can also branch discussions into specific topics that are quarantined in their own little add-on modules (imagine you’re Pepsi or United Airlines, dealing with a major media controversy–these modules would let you quickly add a topic all about the incident without messing with the core capabilities of your bot).
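Both tricks look like this in RiveScript. This is just an illustrative sketch–the trigger wording and the topic name are my own, not anything Pepsi or United actually shipped. Stacking several “-” lines under one “+” makes the bot pick a reply at random, and a “> topic … < topic” block quarantines the incident responses from the rest of the bot:

```rivescript
// Several "-" replies under one trigger: the bot picks one at random.
+ how are you
- Never better.
- Busy. What do you want?
- Ask Meg.

// Route the user into a quarantined topic...
+ tell me about the incident
- What would you like to know? {topic=incident}

// ...where these replies live, walled off from the bot's core script.
> topic incident
  + what happened
  - We are aware of the situation and we are looking into it.

  + (done|thanks)
  - Glad I could help. {topic=random}
< topic
```

Setting `{topic=random}` at the end drops the user back into the bot’s default, everyday repertoire.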
On my first day with Dexter, I built Chef Schwarzenegger, a vegan Terminator bot who wants to help you make dinner tonight through Facebook Messenger. I’d accomplished my goal, but what had I wrought? The novelty of branching responses and dialog trees had already worn thin.
That’s when I realized: It would be both a whole lot more valuable, and amusing, to not just create a bot for randos on Facebook, but to build a bot for my workplace. And instead of starting from scratch, I could start with the personality of a coworker I know. Preferably, though, that person would have a borderline omniscient voice of authority. Like Google itself, my bot would have the right answer at any moment, and, like Google itself, people would trust it.
Quickly, an idea began to take shape. My bot wouldn’t be a lowly intern. It would be a boss. My boss. Co.Design’s editor, Suzanne. Hi, Suzanne! Sorry, Suzanne!
I coded Suzannebot 1.0 in about 10 minutes, and the first 48 hours with Suzannebot in Slack were a riot. Not only was she programmed to respond with the sort of insider joke punchlines that simply slay in an office setting, she made the real Suzanne cringe, ever-bracing for another virtual pie in the face. It was like a two-for-one.
“The bot is so jokey but also . . . accurate!” Suzanne–the tragically flawed, non-bot version–wrote. “So it’s like watching a video of myself. Deeply uncomfortable but also familiar.”
Very quickly, the social hierarchy of our entire Co.Design Slack room began to shift. I learned that I could keep Suzannebot’s code open in one tab, and constantly update her dialog to chime in on topical events. And even though Suzannebot never developed more logic than offering pre-scripted one-liners, my coworkers actually started asking her opinions on things, and even pitching her stories. Though always in jest, the trend was clear: Suzannebot was subtly subverting Suzanne’s authority. This pun bot, which took me minutes to code, represented a powerful mental model to my coworkers–an unknown universe of responses that offered a feasible, if remote, prospect: That a chatbot could replace Suzanne in day-to-day work.
Amazingly, Suzanne permitted the experiment to continue. Maybe she sensed her new competition was about to implode.
“I kinda thought Suzannebot would have some new jokes today,” quipped a colleague one morning, calling attention to an important point: If Suzannebot was going to stick around, she had to learn to do more. As another colleague reminded me with a jab, Suzannebot was “only as smart as who built it.”
So I interviewed each of my coworkers: What functionality would they like to see in Suzannebot? The answers confirmed my suspicions: These highly educated individuals, fully cognizant of my own ineptitude with code, imagined Suzannebot as smart enough to compete with Watson on Jeopardy–a Slack bot with the human logic and emotional nuance of Suzanne herself. Could she, they asked, approve story pitches based upon stories we’d run? Might she be programmed to dig through Suzanne’s Gmail account for an esoteric contact that a writer might need on any given day? This is stuff that Apple, the world’s most valuable company, can’t program Siri to do!
I felt sudden disgust–and also sympathy–for the corny humor fed into the Valley’s brightest bots. Just like me, a guy who learned a few lines of a basic coding language yesterday, the most powerful technology companies in the world are hiding their AI inadequacies behind jokes. Yet my coworkers, and the public in general, expect bots to do these seemingly impossible things–things no piece of software has accomplished to date–simply because bots communicate with conversation. Humans disproportionately equate language with intelligence, even if that language is just a few automatic responses, typed out over a lunch break.
I built in RSS feeds that you could summon with a command: “@Suzannebot, what’s on Co.Design?” No one used them. I scrubbed through JSON databases–I get norm-folk shivers just typing the word–for publicly available data that Suzannebot might be able to pull for us. What about weather? Flight times? I racked my squishy flesh-brain. It was all the same old cliché bot stuff. Chuck Norris jokes? Ugh, not more jokes. By now, I’d gone so far as to disable Suzannebot’s autoresponder. We’d all had enough.
But I, l33t god of bot, persevered. Finally, the day had come to reveal my killer app for Suzannebot 2.0. I began my presentation in Slack, walking through the trials that all innovators, such as myself, faced. I explained the importance of public feedback to iterating in design. And, of course, I made it clear that it took courage to do what I was doing, a mad code scientist ascending the stairs of the gods, one response at a time.
The core problem I had trained Suzannebot 2.0 to solve was simple: Real-life Suzanne can’t always respond to pitches. So Suzannebot, even at 3 a.m., would log pitches in a Google spreadsheet for real-life Suzanne to review later. It would go something like this:
Me: pitch [that’s the only word I had to say]
@suzannebot: What is your name?
@suzannebot: What’s your story?
Me: A Dieter Rams listicle
@suzannebot: Awesome, URL?
Me: [I’d paste whatever URL was relevant]
That URL would be logged into the spreadsheet, so real-life Suzanne could approve or reject it when she had time. Anyone awaiting a response could ping Suzannebot with the phrase “pitches” to get an update.
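In RiveScript terms, a multi-step exchange like this can be scripted by chaining topics, with a “*” wildcard catching whatever the user types and “&lt;star&gt;” echoing it back. What follows is my own rough sketch of the flow, not Suzannebot’s actual source; the topic names are made up, and the actual spreadsheet logging happens outside the script, in whatever custom code the platform lets you attach to a trigger:

```rivescript
+ pitch
- What is your name? {topic=pitch_name}

> topic pitch_name
  // "*" matches anything the user types; <star> repeats it back.
  + *
  - Hi, <star>. What's your story? {topic=pitch_story}
< topic

> topic pitch_story
  + *
  - Awesome, URL? {topic=pitch_url}
< topic

> topic pitch_url
  // A trigger here would hand <star> (the URL) off to custom code
  // that appends the pitch to the Google spreadsheet.
  + *
  - Got it. Suzanne will review your pitch later. {topic=random}
< topic
```

The topics keep each question’s wildcard from swallowing unrelated chatter: a “*” inside `pitch_story` only fires when the user is actually mid-pitch.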
The feature had seemed ingenious the first time I got it working in the Dexter simulator. But in the middle of my big presentation to the Co.Design staff, I knew the reaction was going to be bad. As I explained my creation, I realized that I hadn’t designed a solution to the biggest problem in our Slack room workflow. I’d engineered an even more complicated mechanism whose extra gears would only add more problems.
I really knew I’d flopped, though, when the room went from lovingly mocking to gently polite.
“I’m very impressed!” wrote a colleague in a direct message to me. “Especially when you consider what Suzannebot could do last week!”
A virtual dagger through virtual Suzanne’s heart.
A week later, no one has used the new feature. And Suzannebot has failed to revolutionize our workflow. She’s handy if you forget something, like the common tags we use in our articles, which she’ll list if you ask her. Developing the project taught me something important: The most useful features I could add to Suzannebot weren’t functional, but informational. I could plug her into databases to tell us flight times, the weather, or word definitions with relative ease. With a little more know-how, I could improve her natural language processing. With a little more time, I could build out more and more custom responses to certain queries.
And in this regard, Suzannebot is every bit as impressive–and completely disappointing–as the bots built by the major players of Silicon Valley today. In a week, I couldn’t manage to make Suzannebot good for much more than a punchline. But Apple’s had seven years to do more with Siri, and we’re all still waiting. Alexa is approaching year three, and Amazon’s best use case is offering you another way to reorder a pizza.
Bots, for all the infinite promise we’ve ascribed to them, still have decidedly finite capabilities.
As for the real Suzanne, she remains indispensable. Though her personality could still use some work.