Should Smart Gadgets Bully Us Into Making Better Choices?

Next-gen devices have the potential to help us live better. Should they force us to?

Setting aside, for the moment, whatever concerns you might have about being too connected, we can probably agree that smartphones, the devices that let us effortlessly snap photos and look up directions and check email on the go, are good things, generally speaking. They unlock new potential and multiply possibilities in all sorts of scenarios.

But what if those phones automatically locked us out for safety reasons when they sensed we were driving our cars? And what if those cars were smart machines that automatically chauffeured us along the most fuel-efficient routes to our destinations? Would we still consider those devices smart? Or just scary?

That’s the issue Evgeny Morozov grapples with over at the Wall Street Journal. Looking at a slew of prototypes and proposals for next-generation products, Morozov identifies an interesting trend. Where many gadgets harness cheap sensors and ubiquitous connectivity to help users make better choices about their lives, a subset of those devices seems to be actively pushing users toward those choices. The piece begins with the example of a prototype trash can that posts photos of its contents to Facebook, giving users a social incentive not to dump recyclables in with the waste. Smart? Scary? Morozov thinks the latter:

    The most worrisome smart-technology projects start from the assumption that designers know precisely how we should behave, so the only problem is finding the right incentive. A truly smart trash bin, by contrast, would make us reflect on our recycling habits and contribute to conscious deliberation--say, by letting us benchmark our usual recycling behavior against other people in our demographic, instead of trying to shame us with point deductions and peer pressure.

    There are many contexts in which smart technologies are unambiguously useful and even lifesaving. Smart belts that monitor the balance of the elderly and smart carpets that detect falls seem to fall in this category. The problem with many smart technologies is that their designers, in the quest to root out the imperfections of the human condition, seldom stop to ask how much frustration, failure and regret is required for happiness and achievement to retain any meaning.

Morozov insists that we need to start differentiating between "good" smart technologies--devices that give us more choices, or give us information that helps us make better choices on our own--and the "bad" ones that do all the decision-making for us. At their most nefarious, he says, such designs amount to "social engineering disguised as product engineering."

It’s a compelling question, and one designers will have to work through as our lives and the products in them become even more connected in years to come. There are two obvious extremes to avoid, a sort of Scylla and Charybdis of feedback: you don’t want to reward users at every turn, but you don’t want to shame them for every misstep, either.

And yet, while sailing right down the middle with every product might be a safe way forward, that’s all it will ever be. Smart products across the board truly do have the potential to change our lives--and the world--for the better. But there are as many different ways to be "smart" as there are products to enhance, and coming up with user experiences that make sense will require a good deal of thought from the people designing them.

Some of these designs will indeed benefit from a good deal of gamification--if working out were more like a video game, for example, maybe we’d see healthier youngsters. And while the dangers of gadgets that shame us, bully us, or otherwise dictate our behavior are obvious, some designs, where our worst excesses are involved, could benefit from a gentle bit of peer pressure. Ruling out either model entirely, considering the splendid diversity of problems out there to be solved? That seems dumb.

Read Morozov’s article in full over at the WSJ.


3 Comments

  • Raji Muthukumar

    Well, choice is among the most fundamental human rights: deciding what they feel is good or bad for them. When forced, many people feel stressed, feel a lack of authority over their own lives, and often knowingly rebel against what's good for them. So forcing might be a bad idea. But smart objects could give us suggestions about which options might work best, and which might not, in different scenarios. That way, humans keep the power to decide for themselves.

  • Lucas M

    Oh, usually we as humans don't know what we would prefer. If designers are able to employ research to make us better people, we should be happy they did. A common misconception is that our primary obligation is to ourselves, to be free in this sense. Rather, each person should be compelled to behave in the best way possible for both themselves and the group--goals that are often closer together than the conservative right would have us believe.