This story was adapted from the original on Signal v. Noise.
Recently I took a monthlong sabbatical from my job as a designer at Basecamp. (Basecamp is an incredible company that gives us a paid month off every three years.) When you take 30 days away from work, you have a lot of time and headspace that’s normally used up. Inevitably you start to reflect on your life.
And so, I pondered what the hell I’m doing with mine. What does it mean to be a software designer in 2018, compared to when I first began my weird career in the early 2000s?
The answer is weighing on me.
As software continues to invade our lives in surreptitious ways, the social and ethical implications are increasingly significant. Our work is heavy and it’s getting heavier all the time. I think a lot of designers haven’t deeply considered this, and they don’t appreciate the real-life effects of the work they’re doing.
Here’s a little example. Think back to what Twitter was about 10 years ago. If you weren’t paying attention back then, Twitter was kind of a joke: a cute, silly viral app where people wrote about their dog or their ham sandwich.
Today, things are a wee bit different. Twitter is now the megaphone for the leader of the free world, who uses it to broadcast his every whim. It’s also the world’s best source for real-time news, and it’s full of terrible abuse problems.
That’s a massive sea change! And it all happened in only 10 years. Do you think the creators of that little 2007 status-sharing concept had any clue this is where they’d end up, just a decade later? Seems like they didn’t. Here’s an excerpt from Newsweek’s 2007 story about the fledgling platform:
People can’t decide whether Twitter is the next YouTube, or the digital equivalent of a hula hoop. To those who think it’s frivolous, Evan Williams responds: “Whoever said that things have to be useful?”
That’s not what its creators originally built. It grew into a Frankenstein’s monster, and now they’re not quite sure how to handle it.
I’m not picking on Twitter in particular, but its trajectory illustrates a systemic problem. Designers and programmers are great at inventing software. We obsess over every aspect of that process: the tech we use, our methodology, the way it looks, and how it performs.
Unfortunately we’re not nearly as obsessed with what happens after that, when people integrate our products into the real world. They use our stuff and it takes on a life of its own. Then we move on to making the next thing. We’re builders, not sociologists.
This approach wasn’t a problem when apps were mostly isolated tools people used to manage spreadsheets or send emails. Small products with small impacts. But now most software is so much more than that. It listens to us. It goes everywhere we go. It tracks everything we do. It has our fingerprints. Our heart rate. Our money. Our location. Our face. It’s the primary way we communicate our thoughts and feelings to our friends and family. It’s deeply personal and ingrained into every aspect of our lives. It commands our gaze more and more every day.
We’ve rapidly ceded an enormous amount of trust to software, under the hazy guise of forward progress and personal convenience. And since software is constantly evolving—one small point release at a time—each new breach of trust or privacy feels relatively small and easy to justify.
Oh, they’ll just have my location.
Oh, they’ll just have my identity.
Oh, they’ll just have an always-on microphone in the room.
Most software products are owned and operated by corporations, whose business interests often contradict their users’ interests. Even small, harmless-looking apps might be harvesting data about you and selling it. And that’s not even counting the army of machine learning bots that will soon be unleashed to make decisions for us. It all sounds like an Orwellian dystopia when you write it out like this, but this is not fiction. It’s the truth.
See what I mean by heavy? Is this what we signed up for, when we embarked on a career in tech?
Fifteen years ago, it was a slightly different story. The internet was a nascent and bizarre wild west, and it had an egalitarian vibe. It was exciting and aspirational — you’d get paid to make cool things in a fast-moving industry, paired with the hippie notion that design can change the world. Well, that notion was right on the money. There’s just one part we forgot: Change can have a dark side, too.
If you’re a designer, ask yourself this question: Is your work helpful or harmful?
You might have optimistically deluded yourself into believing it’s always helpful because you’re a nice person, and design is a noble-seeming endeavor, and you have good intentions. But let’s be brutally honest for a minute.
If you’re designing sticky features that are meant to maximize the time people spend using your product instead of doing something else in their life, is that helpful?
If you’re desperately trying to inflate the number of people on your platform so you can report corporate growth to your shareholders, is that helpful?
If your business model depends on using dark patterns or deceptive marketing to con users into clicking on advertising, is that helpful?
If you’re trying to replace meaningful human culture with automated tech, is that helpful?
If your business collects and sells personal data about people, is that helpful?
If your company is striving to dominate an industry by any means necessary, is that helpful?
If you do those things… are you even a designer at all? Or are you a glorified huckster—a puffed-up propaganda artist with a fancy job title in an open-plan office?
Whether we choose to recognize it or not, designers have both the authority and the responsibility to prevent our products from becoming needlessly invasive, addictive, dishonest, or harmful. We can continue to pretend this is someone else’s job, but it’s not. It’s our job. We’re the first line of defense to protect people’s privacy, safety, and sanity. In many, many cases we’re failing at that right now.