
It’s Time To Treat Siri, Alexa, and Google Assistant Like The Toddlers They Are

AI doesn’t understand human social norms. It’s the Valley’s fault, but until we talk to AI differently, it will continue to be our problem.

Images: Supermimicry/iStock, SensorSpot/Getty Images, Google

A funny thing happened to Recode writer Tess Townsend when she was using Google’s Allo messaging app with a friend. The friend asked the Allo AI assistant if it was a bot. And Allo, in a moment of confusion, responded with something unexpected–a link to a Harry Potter fan site.


Why? Townsend’s friend had searched for Harry Potter a few days earlier. And when he put Allo on the spot with a question about its existence, it spit out a random link drawn from his recent searches. Allo had done something very frightening: It had aired his private search results.

Google called the error an “issue.” And I’m sure that to any software developer, that’s exactly what it was: another bug to be squashed. But anyone who has a child would recognize familiar human behavior in that software bug. Like Allo, children repeat things they’ve heard all the time; the only way to mitigate the risk of them uttering something harmful is to not tell them anything. Is that a bug? No! Of course not.

These mistakes are just part of growing up, the decades-long process of learning the complex interplay of what people say, who they say it to, and where they want it repeated. And AI? It’s still in its toddler phase.

It’s a problem that neither Silicon Valley nor those of us using these products have fully internalized yet. Frankly, it doesn’t help that companies like Amazon have a greater incentive to teach Alexa to sell us more products than to figure out how to respond to serious topics, like sexual assault or familial loss.

As these conversational interfaces become more advanced, and integrate further into our lives, teaching them etiquette isn’t just about making them pleasant or polite. And it isn’t about loading them with jokes to break the news when they spot your melanoma in a vacation photo, either. Giving AIs the social fluency of a real person is an actual design problem that’s going to take a lot of time to solve.

Until then, it’s on us as smart consumers to acknowledge exactly what artificial intelligence is: a crude attempt at building a mind by loading it with data set after data set. That means that in 2017, AI is neither an agnostic buddy who does our bidding nor an omniscient villain we have to fear. AI is a toddler we’re asking about erectile dysfunction. And so when it goes blabbing our personal problems to the world, or when it fails to respond with sensitivity, we shouldn’t be surprised.


We’re surrounded by these naive machines, which are filled with every fact of humanity aside from what it is to be human. And in this world, we as users need to realize the audience we’ve chosen to keep–and start acting like adults as a result.

This is not just to protect our personal privacy. As any parent knows, a toddler repeating an embarrassing comment about a friend’s new hairdo is hardly the worst thing that can happen if you don’t practice self-censorship. No, the greater problem is that a child is always listening, and learning about the world from what we say. Artificial intelligence is the same way. It can begin to mirror our implicit biases–whether through facial recognition software trained only on white faces, or search results trained constantly by prejudiced questions. Just look at Google Autocomplete, and the way our past searches train the machine mind to anticipate our next ones. Google has literally learned to speak like we do, and to search for things like we do. It’s modeling us to serve us.

That’s a noble ideal, but it gets particularly scary when a Google search autocompletes “Are transgender . . . ” with “Are transgenders going to hell?” Human bias enters Google’s dialect, which in turn broadcasts disturbing ideas many of us would never have encountered otherwise.

Only recently have the platform holders begun to admit the negative consequences of user actions within their AI, and to harness them to make things better. Take the way MIT Media Lab and Reddit have tested fact-checking posts, encouraging users to post counterpoints to fake news that would influence Reddit’s algorithms. Over time, this practice is meant to “nudge” the software to lower the rankings and slow the spread of fake news. Similarly, Google has begun having its own internal quality assurance testers flag fake news stories at the top of search results as “offensive.” Though this doesn’t change the flagged results’ SEO immediately, over time these paid testers should be able to nudge the AI search results into better behaviors.

In truth, we’re already nudging AIs at an unbelievable scale, though not intentionally or with a purpose. Every time we search something on Google, every time our attention is grabbed and our mouse stops while scrolling on our Facebook feed, every time we swipe right on someone we think is sexy on Tinder, every time we let Yelp see we’ve pulled into a Taco Bell during new Chalupa season, we’re already nudging the AI in some direction, teaching it, telling it that this is the way the world works.

And so we need to recognize our role as grownups among the machines, and admit that the internet is not a repository for careless impulses, seen by no one. Because a child is always watching, and learning from everything we say and do.

About the author

Mark Wilson is a senior writer at Fast Company. He started Philanthroper.com, a simple way to give back every day.
