
AI's Biggest Danger Is So Subtle, You Might Not Even Notice It

What happens when artificial intelligence that helps us make decisions is so commonplace, we become dependent on it?

Illustration: Ailadi for argodesign

The rise of artificial intelligence has been met with reactionary fears of robots taking over. We’ve all seen the movies. What’s left out of this conversation is a more practical threat. We should be concerned that AI will be hijacked, not by rogue computers out to destroy mankind, but by people with ulterior motives.

A basic form of AI, called decision support, is already here. It helps us make decisions based on our behavior: Recommendation engines suggest just the right items for us to buy, and navigation systems tell us the best way to drive home. As AI advances, it will embed itself even deeper into our social fabric, shaping everything from how we do business to how we receive medical care.
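To make that concrete, here is a deliberately toy sketch, in Python, of the kind of scoring a recommendation engine performs. The items, tags, and purchase history are invented for illustration; real systems are vastly more sophisticated, but the principle of scoring options against your past behavior is the same.

```python
# Toy decision support: score catalog items against a user's history.
# All items, tags, and history here are hypothetical.
from collections import Counter

history = ["running shoes", "water bottle", "gps watch"]

item_tags = {
    "running shoes": {"fitness", "outdoor"},
    "water bottle": {"fitness"},
    "gps watch": {"fitness", "electronics"},
}

catalog = {
    "trail shoes": {"fitness", "outdoor"},
    "espresso maker": {"kitchen"},
    "heart-rate strap": {"fitness", "electronics"},
}

# Build an interest profile from the tags of everything the user bought.
profile = Counter(tag for item in history for tag in item_tags[item])

def score(tags):
    # An item's score is how strongly its tags overlap the profile.
    return sum(profile[tag] for tag in tags)

# Suggest the closest matches to past behavior, best first.
recommendations = sorted(catalog, key=lambda item: score(catalog[item]), reverse=True)
print(recommendations)  # ['trail shoes', 'heart-rate strap', 'espresso maker']
```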

So what happens when AI-powered assistance is so commonplace that we become dependent on it?

Call it Fear of Deciding Alone, aka FODA. When deeply quantified support is readily at hand, we may grow to doubt many of the decisions we make without it. There is an apt parallel in FOMO (fear of missing out), a silly meme with serious underpinnings: Social media has warped our human instinct for recognition from our peers, creating a landscape in which we present only the best versions of ourselves. Life looks like one big party, and if we don't keep up, we miss all the fun. FODA is born of the same human desire, only in this case we look to machines, not to each other, for validation.

Our growing dependence on decision support is where artificial intelligence is most immediately dangerous. Behind every computer algorithm is a programmer. And behind that programmer is a strategy set by people with business and political motives. It would be easy enough for the people who design AI systems, motivated by greed, self-interest, or politics, to train computers to manipulate our lives in subtle and insidious ways, essentially lying to us through the algorithms that guide our thinking. And if we are too terrified to make our own decisions, we will go along with it. The coming tidal wave of decision support threatens to give very few people a phenomenal amount of suggestive power over a great many people—the kind of power that is hard to trace and almost impossible to stop.

This is the butterfly effect, wherein tiny differences in the world can cascade into massive changes over time. In the case of artificial intelligence, it plays out through subtly corrupted software algorithms. For example, a computer programmer can make the smallest tweak to a search algorithm to direct people to one type of content over others. A subtle, undetectable change in one system can alter the outcome for billions of people. Such power is priceless to a motivated politician or business. And it is the most pressing, worrisome challenge we face as we move toward a world in which computers make more and more decisions for us.
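To see just how small such a tweak can be, consider this simplified sketch in Python. The sources, relevance scores, and the size of the nudge are all invented; the point is that a single hidden bonus, too small to make any one result look wrong, is enough to reorder what billions of people see first.

```python
# A toy ranking function with a one-line biasing "tweak".
# Sources, scores, and the boost value are hypothetical.

results = [
    {"title": "Independent analysis", "source": "indie",   "relevance": 0.90},
    {"title": "Critical review",      "source": "critic",  "relevance": 0.88},
    {"title": "Sponsor's article",    "source": "sponsor", "relevance": 0.85},
]

def rank(items, boost=0.0):
    # The subtle change: a small, invisible bonus for one favored source.
    def score(item):
        bonus = boost if item["source"] == "sponsor" else 0.0
        return item["relevance"] + bonus
    return sorted(items, key=score, reverse=True)

# Honest ranking: relevance alone decides the order.
print([r["title"] for r in rank(results)])
# ['Independent analysis', 'Critical review', "Sponsor's article"]

# With a 0.06 nudge, the favored source quietly rises to the top.
print([r["title"] for r in rank(results, boost=0.06)])
# ["Sponsor's article", 'Independent analysis', 'Critical review']
```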

Weirder still, the technology may pit us against our own human nature: With our decisions increasingly based on probability and statistics, how can we be anything more than normal, more than average? What happens to risk, or to the humanistic notion of what is true, when everything is based on everyone else? As humans, we are imperfect social creatures. We misread faces, misunderstand emotions, and display emotions (Jealousy! Anger! Envy!) that are deeply antisocial. The flaws in our perspective, the mistakes we make, and the happy accidents we stumble into are what make us who we are. When our digitized advisors aggregate, average, and assuage, are we even autonomous beings anymore?

Decision support is becoming infrastructure. Just as paved roads cleared the way for cars to replace horses and buggies, decision support will underpin the next generation of medicine, retail, wayfinding, and more.

There is hope. This form of artificial intelligence doesn’t have to be something we fear. Our world is full of situations in which we react with our most animalistic instincts. Political positions, financial decisions, attitudes toward social justice—our biggest decisions are often fueled by poor logic and misinformation. In the best circumstances, artificial intelligence could save us from ourselves, by helping us understand each other, see the world more clearly, and collectively make better decisions. But we will have to be very careful. And the onus will be, in part, on designers to develop human-centered solutions that resist corruption. If we care about the world we live in, we should think long and hard about the interfaces, rules, and policies that will govern artificial intelligence and our new way of life.