
Twitter Battles Bullies With A Brilliant Dark Pattern

In Twitter time out, no one can hear you scream.

The way people speak to one another online is atrocious. From YouTube comments to Reddit threads, it takes only seconds to spot racism, misogyny, or outright harassment. But this behavior is perhaps at its worst on Twitter, because on Twitter any public account can be targeted, leading to the abusive pile-ons favored by harassment campaigns like Gamergate.

Now, Twitter has begun rolling out a clever counter-tactic to such bad behavior. Spotted by BuzzFeed, a new feature quarantines any account that Twitter deems abusive, putting it into a “time out.” The abuser can continue using Twitter, reading and writing posts as usual. But for the duration of the time out, no one but their own followers can see those posts.
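Twitter hasn’t published how the quarantine works under the hood, but the visibility rule as described is simple enough to sketch. Here’s a minimal, hypothetical Python illustration; the Account class, the post_is_visible function, and the handles are all invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    in_timeout: bool = False
    followers: set = field(default_factory=set)  # handles following this account

def post_is_visible(author: Account, viewer_handle: str) -> bool:
    # Ordinarily, any public post is visible to anyone.
    if not author.in_timeout:
        return True
    # During a time out, only the author's existing followers see their posts.
    return viewer_handle in author.followers

troll = Account("troll", in_timeout=True, followers={"fellow_traveler"})
print(post_is_visible(troll, "fellow_traveler"))    # True: followers still see everything
print(post_is_visible(troll, "harassment_target"))  # False: the target sees nothing
```

The key design choice is that nothing fails loudly: the post goes through, the timeline renders, and only the audience quietly shrinks.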

It’s like a tacit admission by Twitter: It can’t change what someone thinks, or even what someone says. But what it can do is make sure those words don’t reach any unassuming person who would be hurt by them.

It’s also a brilliant dark pattern. Dark patterns are instances where a user interface tricks the user in some way, usually for a company’s own gain. An email list with no discoverable way to unsubscribe? A better deal on the same item buried halfway down the page on Amazon? Those are dark patterns.

In this case, Twitter does inform the user of their punishment. But then that user is essentially placed in a Truman Show version of the service, trapped inside their own microcosmic bubble. And if they get the urge to go nuclear on someone else? Well, they can hit that big red button all they want. But it’s not connected to anything.

It’s unclear exactly how someone gets put into a time out, and specifically what role user reports might play. A company spokesperson told BuzzFeed that “its teams look at an account’s behavior as opposed to simply language to determine if it’s being abusive,” meaning the system goes further than an algorithm that spots a few bad words.

Interestingly enough, Google’s Daydream Labs has taken a similar anti-bullying approach in its early VR testing. Take, for instance, two people playing at a poker table in VR. One loses their temper, standing up and even trying to strike the other player. Rather than stop the tantrum by gluing the rager to their seat, Daydream’s approach is to let them stand and swing. Except, of course, the other player never sees it. To them, the angry bully never left their chair.
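Google hasn’t shared how Daydream implements this, but the trick amounts to filtering what gets replicated to the other players in the room. A hypothetical sketch, with SEATED_POSE and pose_to_broadcast invented for illustration:

```python
# A flagged player's live motion is rendered only in their own headset;
# everyone else keeps receiving a calm, seated pose.
SEATED_POSE = {"position": (0.0, 1.2, 0.0), "animation": "seated_idle"}

def pose_to_broadcast(player_id: str, live_pose: dict, flagged: set) -> dict:
    """Return the pose replicated to the other players in the room."""
    if player_id in flagged:
        return SEATED_POSE  # peers see the rager still sitting quietly
    return live_pose        # everyone else replicates normally
```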

Now, I’m not certain these dark patterns are free of consequences of their own. One potential hole in the idea is that someone in time out can still direct their followers to harass you in their stead. Or imagine someone threatening your life on Twitter while in a time out: you’d never even know. Wouldn’t you prefer that stress, if it could help you ensure your own safety? I’m guessing most of us would. That said, hate speech has been found to cause measurable clinical trauma, akin to that of rape, burglary, or assault. And there’s no reason we should have to put our own well-being on the line every time we sign on to a social network.

About the author

Mark Wilson is a senior writer at Fast Company. He started Philanthroper.com, a simple way to give back every day.
