deborah ([personal profile] deborah) wrote 2018-12-05 12:44 pm

Oh brave new internet, / that has such Nazis and MRAs in't!

Whenever another social media site does something to drive away a large segment of its users, there's an influx to Dreamwidth. Which userbase gets ticked off varies, of course; it tends to be the more female and more fannish users who are driven to Dreamwidth, while white supremacists and MRAs driven from reddit or Twitter are more likely to end up on sites such as 8chan or Gab.

This has two side effects: Dreamwidth users are excited that our platform is getting love and activity on a level that's been rare since the great Tumblr + Twitter exodus of several years back, and new Dreamwidth users (and returnees) are asking for some of the features they loved at their old social media sites. I am absolutely a fan of new people coming to Dreamwidth, and I wholeheartedly agree with everyone that the UI is showing its age. It was not built for a mobile-first, multimedia-above-all world.

But it is also true that the Internet is a more toxic place than it was in the heady days of Brad's garage in 1999, or in Mark and Denise's inspired 2008. Which leads me to the two hot takes I've been mulling over for several years:

  1. Some of the features people want are products of the Toxic Internet, which has trained people to expect them
  2. Dreamwidth's relative unpopularity is what keeps it great

Addiction and Anti-patterns


By now, it's become common knowledge that the designers at platforms such as Facebook, Instagram, Snapchat, Tumblr, and Twitter specifically designed user interfaces to offer intermittent rewards, triggering the same neurological signals which lead to addiction. Many of the designers of these technologies have begun speaking up about their dangers. Sandy Parakilas, formerly Facebook's manager of privacy issues and policy compliance:
One of the core things that is going on is that they have incentives to get people to use their service as much as they possibly can, so that has driven them to create a product that is built to be addictive. Facebook is a fundamentally addictive product that is designed to capture as much of your attention as possible without any regard for the consequences. (New York Magazine, April 2018)


One of the worst offenders, borne out by academic research, is the Like button -- along with reblogs, retweets, and anything else which allows a user to see the reward of popularity. (Another of the worst is the pull-down-to-refresh action, an explicit anti-pattern which offers the exact variable reward pattern that hacks our psychology.) These user interface widgets are the levers in a Skinner box, offering both the dopamine kick of pulling a lever (clicking a Like button) to one user, and the unexpected reward of receiving a Like to another.

These techniques aren't designed to increase your happiness, or the utility of the product to you. They are designed to do one thing: maximize engagement. Engagement keeps you on the site, whether you want to be there or not. Engagement helps the company keep harvesting data about you, so they can sell it to advertisers, data warehouses, and Cambridge Analytica. Engagement is why so many social media platforms prioritize the most rage-inducing and conspiratorial content: Infowars, Flat Earthers, and YouTubers performing stunts that get them killed. It's how the troll farm, the Internet Research Agency, tried to swamp Tumblr with enraging left-wing and right-wing memes. Engagement isn't there for your benefit.
Instagram is addictive, for example, because some photos attract many likes, while others fall short. Users chase the next big hit of likes by posting one photo after another, and return to the site regularly to support their friends. ... It’s hard to exaggerate how much the “like” button changed the psychology of Facebook use. What had begun as a passive way to track your friends’ lives was now deeply interactive, and with exactly the sort of unpredictable feedback that motivated Zeiler’s pigeons. Users were gambling every time they shared a photo, web link, or status update. A post with zero likes wasn’t just privately painful, but also a kind of public condemnation: either you didn’t have enough online friends, or, worse still, your online friends weren’t impressed. (Adam Alter, Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, 2017)

We've been trained by these huge and profitable data collection machines that it's valuable to show our engagement with the trivial interaction of clicking on a Like button. It's the reward of the animated 💖. Yet it hurts us to give these, and it hurts us to get them, and it even hurts us to not know if we're going to get them. They're not true interaction; they're brain hacking. And they encourage us to use our tools in ways which are counter-productive.

In defense of unpopularity

Technological anti-patterns that reward you for an obsession with the site are only part of the problem with the modern Internet. The other is that widely-used platforms make the Internet too small.

Technology's great promise is how much easier it makes things. Walking from Berlin to Lyon takes 227 hours. Invent a bicycle and you can do it in 68, invent a car and you can do it in 12.5, invent airplanes and it's under two hours. How wonderful! When my grandparents left Europe they knew they'd probably never see their relatives again (because of the difficulty of travel, not because of the imminent murder spree that engulfed Europe shortly afterward). Meanwhile, less than a century later, I live in a transatlantic family where we see each other on an annual basis at the very least.

But technology's great price is how much easier it makes things. Anyone who has ever played a technology-focused strategy game such as Civilization knows how much more dangerous the game gets if your opponent develops fast travel technologies before you. For that matter, anyone who has studied history -- not least the murder spree that engulfed Europe shortly after my grandparents left! -- knows that shortening the trip from Berlin to Lyon was a double-edged sword.

When technology reduces friction, it reduces friction for all purposes, the good and the ill. It used to be that someone who wanted to harass you needed to find your address in the print telephone book and go to your home, or get on the phone and take the time to call you repeatedly. It happened, but it was constrained by the resource cost to the harasser. Now the same technology that lets me have active friendships on the other side of the world also exists for harassers, creeps, and exploiters. The same technology that lets women who've been subject to harassment share secret lists of missing stairs lets conspiracy theorists spread viral fake pedophilia scares that led to the destruction of an entire village in India. This is no small problem: paid trolls almost certainly worsened the ongoing genocide in Myanmar, and likely got Jair Bolsonaro elected in Brazil. The old "don't feed the trolls and they'll go away" adage hasn't been true for years. And it's become abundantly clear that even those of us who thought we had the sophistication to recognize bad actors and fake content actually don't. (This isn't even getting into all the other costs of lowered friction for bad actors, such as harassment campaigns as a form of silencing: denial of service through induced fear.)

The reality is that there's a cost-benefit analysis in trying to do harm, and bad actors want to spread the harm widely and cheaply. It's logical that Facebook, Twitter, and YouTube are prime targets for spreading misinformation, conspiracies, and lies. But I admit to being shocked when Tumblr was targeted as well. Tumblr, never at the scale of a mega-platform, was nonetheless popular enough to make it cost-effective to troll. It turns out that the reduced friction of modern information technology makes fake news and discrediting journalism ludicrously cheap.

If Tumblr was big enough to be targeted by the Internet Research Agency, then how much larger would Dreamwidth have to get to become a logical target itself? Yes, we are partially protected by our content model. Dreamwidth prioritizes producing content, not sharing/reblogging/retweeting someone else's. This increase in friction increases the cost of a harassment or misinformation campaign, and makes us a less desirable target. But not an undesirable one! It's expensive to make YouTube videos, but plenty of malicious people do, because the potential reach is so great.

Dreamwidth, historically, has been an extremely pro-free-speech platform. Individuals or content can be banned for violating the Terms of Service, but as a general principle, it takes spam or a violation of US law to get banned. The theories behind this are solid. [staff profile] denise, one of the site's founders, had the experience of leading LiveJournal's support team, and watching how any platform that attempted to police content found itself in a rabbit hole of impossible-to-adjudicate edge cases.

If we've learned anything over the past decade, it's that any general principle can be gamed by sufficiently motivated bad actors. Don't have a harassment policy? Let's troll this mouthy woman until she leaves the internet. Have a policy which allows reporting of harassers? Buy a few thousand fake accounts for pennies and have them report that mouthy woman until she gets banned. Employ trained content moderators? Flood the site with questionable content so the moderators can't keep up with it in a cost-effective way. Program an algorithm to do content moderation? Learn how to trick the algorithm, which will then spend all its time being fooled, because algorithms are flawed programs fed flawed datasets and programmed by flawed humans.

And that's even ignoring the cost of giving Nazis, white supremacists, MRAs, and the like a place to congregate and meet. Would Robert Bowers have murdered 11 Jews in Pittsburgh without Gab? Would Alek Minassian have killed 10 people in Toronto without incel communities? Dylann Roof, prior to murdering nine black churchgoers, was radicalized when he found a hate site as the result of a Google search. Of course hate crimes existed before social media. Marc Lépine murdered 14 women without, it seems, many social connections at all. But a free-speech-at-any-price policy is a lot easier to hold when the price of speech is relatively low. And right now, the little-known status of Dreamwidth is what keeps that sustainable.

I don't know what the solution is, writ large. I haven't seen any reasonable and scalable solutions, except decreasing connectivity. The Facebook mantra, that more connectivity is always better, is clearly fatally flawed. But what's the alternative? A series of small silos, walled gardens from which it's difficult to communicate? Is there a way to have a widely popular platform that's not a tool in the hands of assholes?

Until we come up with a better answer, that's what I choose. A small platform, used by a community that's large enough for me to form friendships but not large enough to attract trouble.

[personal profile] sanguinity 2018-12-06 04:18 am (UTC)
Not only do you have to create content to get the reward/feedback, but the act of giving feedback is more interactive, conversational. You have to compose words and think about what you want to say.

So, yeah, I obsessively check email for comments after a post goes up? But I'm hella less of a zoned-out click-machine on Dreamwidth than I am on tumblr, both as a poster and a reader.