The biggest challenge with AI may be designing it.
That’s the implication of a study designed to last until 2116, called the "One Hundred Year Study on Artificial Intelligence." The Stanford-led project aims to report on the state of AI in our world every five years for the next century, drawing on a panel of two dozen experts—currently ranging from Julia Hirschberg, a pioneer of natural language processing, to Astro Teller, leader of Google’s "moonshot" division.
The first report, published online yesterday, reads a bit like a half-drawn map, a mix of observations, questions, and even warnings. The consensus? First of all, "the panel found no cause for concern that AI is an imminent threat to humankind," which, phew. Second of all, the decisions we make over the next 15 years will shape our relationship with AI for centuries. And many of those decisions will concern the design of the interfaces and interactions that will establish our trust—or mistrust—in AI. Here are three of the biggest takeaways for designers who work with AI.
Here is a truth: All of us will watch AI fail, over and over, in the next few decades. Some of those failures will be small, and others may be very large. The Stanford panel points out that it’s up to designers to create interfaces that explain to users why a product or machine screwed up.
We’re already seeing this warning play out through self-driving cars. For example, this summer Tesla faced skepticism about its Autopilot feature, which allowed drivers to cede control of their vehicles to the software. After a crash involving the feature killed one driver, some critics argued that the feature had been introduced too quickly, and with too little information—leaving many users unsure why the system didn’t behave the way they expected it to. As Cliff Kuang recently wrote on Co.Design, "the Silicon Valley mind-set of just dropping beta tests upon an unsuspecting populace might be not only naive, but also counterproductive. After all, our first impressions always color our willingness to try again."
If users are frustrated by an app or object that draws on AI, the report concludes, they'll be less likely to use it again. "Design strategies that enhance the ability of humans to understand AI systems and decisions (such as explicitly explaining those decisions), and to participate in their use, may help build trust and prevent drastic failures," the panel writes. So it's critical that engineers and designers create systems that communicate freely about how they work.
At the same time, machines that are too friendly represent a hazard for humans. As the report observes, anthropomorphism is everywhere in technology these days. Chatbots. Devices that respond to you conversationally. Even robotics that have "human" faces and expressions.
Human features have an amazing amount of power over us as users. This month, a study from roboticists at University College London compared how people reacted to two different robots. One was unemotional but competent. The other was extremely expressive, with an emotional face and voice, but was terrible at its job. It turned out that people interacting with both 'bots were far more forgiving of the emotional one, which hung its head and apologized for its mistakes. They even lied to it about its performance, saying they didn't want to "hurt its feelings."
When our belongings increasingly sound and act like our peers, we're more likely to trust them with personal information, too. So, as the panel points out, it will be up to designers to modulate that relationship—deciding what constitutes a manipulative or over-eager interface versus a simply friendly one. "At a basic level lies the question," they write. "Will humans continue to enjoy the prospect of solitude in a world permeated by apparently social agents 'living' in our houses, cars, offices, hospital rooms, and phones?"
The report's most troubling warning is about our own human flaws: AI can be incredibly biased, often in ways its creators don’t even understand. "This threatens to deepen existing social biases, and concentrate AI’s benefits unequally among different subgroups of society," the report warns. That could range from voice recognition systems that can’t understand people with accents, to credit approval software that’s biased against certain neighborhoods or races.
One recent example of bias in AI, pointed out by Kate Crawford in an op-ed titled "Artificial Intelligence’s White Guy Problem," is an AI system for predicting recidivism in prisoners. It was "twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk," Crawford writes.
AI could easily inherit the systemic racism, sexism, and ageism that plague our society today, and it’ll be up to the creators of these AI systems to engineer those biases out. The best way to do that, ironically, is the same as developing any other product: "with careful design, testing, and deployment."